Modern and Ethical Implications of Biased Artificial Intelligence

Last summer, when I read the article “Artificial Intelligence’s White Guy Problem,” it blew my mind. Since then, it has been a topic I think about regularly, and I have continued to see more and more articles about it.

When I saw that cases were due for the Fordham University Undergrad Business Ethics Case Competition, I was insistent on pursuing this topic. My friends Joe, Corinne, Alex, and I then submitted the following case. Unfortunately, we did not make it to the next round (which I was fairly bummed out about), but our case was commended with an Honorable Mention. ¯\_(ツ)_/¯

I’m interested in receiving feedback and thoughts on our case and in continuing this conversation, because this is a topic I feel really strongly about. Our case was meant to answer specific questions, but it stresses the ethical dilemma and the alarming global implications of bias in artificial intelligence.

Check it out.

Modern and Ethical Implications of Biased Artificial Intelligence

An Overview of the Rise and Popular Application of Artificial Intelligence

The term artificial intelligence (AI) has been assimilated into modern vernacular. It is a term and an abbreviation used so often that it sits alongside commonplace terms like WWW or Wi-Fi. The modern understanding of AI was introduced by John McCarthy at a 1956 conference, where he defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable” (McCarthy).

Recently, AI has become more commonplace through products like Siri, Google Now, Amazon Alexa, and IBM Watson. It has become integrated into everyday life and continues to be applied to problems throughout business. However, because of its pervasive application and regular use, AI is now scrutinized by a world of diverse, heterogeneous people, and it is evident that the people who have built modern AI are not as representative as the consumers who interact with it. This became apparent, initially, in many small ways; a key example being “Google’s photo app, which applies automatic labels to pictures…was classifying images of black people as gorillas” (Crawford). It soon became clear that “Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many ‘intelligent’ systems that shape how we are categorized and advertised to” (Crawford). This issue needs to be addressed—where and how to address it is the question. The pipeline that leads to unbiased AI developed by more diverse people can break down as early as faulty primary education and as late as an AI workplace that is not welcoming and inclusive.

This is an ethical dilemma because AI will continue to play a more pertinent role in everyday life and continue to touch and affect a global audience; if this technology is meant to serve a world of diverse people, it needs to be developed by a representative group of people as well. If not, avoidable and racist mistakes and biases will be programmed into a technology that underpins tomorrow’s technology. Alternatives therefore include focusing on finding, hiring, and developing people from marginalized groups into technical roles where they will have a direct impact on AI’s development.


Explanation of ethical dilemma and identification of alternatives

As AI continues to be a part of everyday life, three key stakeholders become the crux of this dilemma: consumers, employees, and investors. The stakeholders with the most to lose in this scenario are consumers. As this technology becomes the foundation for new advancements, we must understand that biases coded into that foundation can create stratifying problems for consumer groups that are so diverse by nature.

The second stakeholder group is employees. In recent years, developing a diverse workforce has become a staple in many companies’ mission statements. A network of diverse employees is what drives consumer market share growth and innovation, and above all else provides a holistic, well-educated group of individuals who want to better the company, the industry, and the lives of others. However, there is still much work to be done, and in areas like AI development, growth in diversity has stagnated. A glaring example: in 2016, women earned only eighteen percent of computer science degrees, half of the 1984 peak of thirty-seven percent (Patel).

The final stakeholder group is investors. Soon, AI will play an integral role in most technological processes, so investors must take an extremely careful look at how biases can become a viable or even severe threat to their investments. Microsoft’s Tay and IBM’s Watson offer examples (Madrigal and Reese). In both cases, AIs developed without any form of artificial emotional intelligence brought unwanted media coverage, and in both cases the offending systems were quickly pulled back because they generated negative branding for their respective companies. Biases in AI can create a plethora of problems for companies; as AI becomes a cornerstone of technology, we must consider all three of these large stakeholder groups.

Addressing this issue requires identifying key moral principles, relevant laws, and major business principles. First, there is the major concern of a phenomenon known as groupthink: groups with homogeneous backgrounds and experiences tend to influence each other and discourage new ideas and practices.

“Diversity of opinions and discussions can result in better solutions than what would occur individually. If members are encouraged to speak up and create checks and balances in the team, then they are better able to hold other group members accountable…[Discouraging groupthink means a] willingness to seek common ground, explore different options, and commit to finding the most ethical solution” (Guffey).

Therefore, to avoid groupthink, there must be strong promotion and support of a workplace that fosters this “diversity of opinions,” so that groupthink cannot take root and erode those positive environments.

Just as affirmative action standards are upheld at universities in the United States, there are parallel requirements for employers. “For federal contractors and subcontractors, affirmative action must be taken by covered employers to recruit and advance qualified minorities, women, persons with disabilities, and covered veterans. The affirmative action requirements include training programs, outreach efforts, and other positive steps. These procedures should be incorporated into the company’s written personnel policies. Employers with written affirmative action programs must implement them, keep them on file and update them annually” (“Department of Labor—Affirmative Action”). Several laws address this topic, most notably Executive Order 11246, the Rehabilitation Act of 1973 (29 U.S.C. §793), and the Nondiscrimination Obligations of Contractors and Subcontractors Regarding Individuals with Disabilities (“Department of Labor—Affirmative Action”). These laws address issues concerning veterans and differently abled people, as well as the general obligation not to discriminate. With these moral principles, relevant laws, and business principles identified, the three-pronged solution can be introduced.


Solution and Stakeholder Analysis

The ethical issues surrounding diversity, bias, and AI are incredibly nuanced and therefore require a three-pronged approach, addressing the following: basic access and education; recruitment and mentoring; and the implementation of AI in business. Given the stunning lack of diversity within Silicon Valley and other technical roles, the issues must first be addressed through basic education and access to STEM-related fields (Wiener). To increase diversity, especially within the field of AI, businesses must work to expand access to and inclusion within STEM fields. By supporting such efforts, businesses will increase the diversity of their future applicant pools.

Alongside supporting education and increasing diversity among STEM graduates, it is crucial for businesses to focus on recruiting and building a diverse workforce across all levels and areas of the company—from entry-level engineers to C-suite executives. This effort must move beyond quotas to become an integral part of a company’s culture and mission, where all members of the company work to build meaningful relationships with diverse candidates and hold each other accountable for ensuring such actions are taken.

Even with the most serious recruitment efforts, it will not always be possible to find available, diverse talent qualified for every role—especially highly technical, senior roles. When a company is unable to find a diverse candidate for a position internally or externally, it must focus on mentorship and development: identify diverse talent within the company and focus on mentoring and developing that person, so that the next time someone looks to hire for that position, a diverse candidate will be available (Huet).

Lastly, while a more diverse workforce is necessary to solving the ethical issues of AI bias, it will not solve the problem entirely. To fully address the issue, companies must develop specific guidelines for eliminating bias within AI systems and ensure those systems are implemented and evaluated with great attention to potential bias. While having a diverse set of perspectives building the programs is key to building a culture concerned with and aware of bias, an AI system is only as good as the data it is trained on. Therefore, companies must work to identify potential areas of bias within their own historical data, or they risk building biased, harmful AI systems. For example, when building an AI application to track and evaluate job applicants, a company must remove identifying characteristics from its historical data or risk building bias into the system. If the historical data used to train the models includes bias (as is likely), a company risks building a system that automates bias rather than solving it, thereby creating a cyclical problem.
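The de-identification step described above can be sketched in a few lines. This is a minimal illustration with hypothetical field names, not a complete fairness solution: remaining fields (such as a ZIP code, which we strip here precisely because it can act as a proxy) may still leak protected information, so scrubbing should complement, not replace, auditing the trained system for bias.

```python
# Attributes that could let a model learn to reproduce historical bias.
# (Hypothetical field names; zip_code is removed because location can
# act as a proxy for protected characteristics.)
PROTECTED_FIELDS = {"name", "gender", "ethnicity", "age", "zip_code"}

def scrub_record(record: dict) -> dict:
    """Return a copy of an applicant record with protected fields removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

# Toy historical applicant data that a screening model might be trained on.
historical_applicants = [
    {"name": "A. Doe", "gender": "F", "years_experience": 7,
     "degree": "CS", "zip_code": "10001", "hired": True},
    {"name": "B. Roe", "gender": "M", "years_experience": 2,
     "degree": "EE", "zip_code": "94103", "hired": False},
]

# Only the scrubbed records would be fed to model training.
training_data = [scrub_record(r) for r in historical_applicants]
```

After scrubbing, each training record retains only job-relevant fields (experience, degree, outcome); the point is that this filtering happens before any model ever sees the data.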

The ethical issue we are examining in AI is, at its core, a human issue. The biases designers hold, and the discrimination that grows from them, set an alarming precedent as AI becomes more integrated into our lives. Our solution focuses on increasing diversity among the teams and companies creating AI and on improving overall culture to foster a more inclusive environment. Laws and studies alike have repeatedly affirmed that greater diversity leads to better work: national laws codify civil rights protections in the workplace, most notably the Civil Rights Act of 1964 and federal affirmative action requirements (“Department of Labor—Affirmative Action”). Studies likewise support the importance of diversity in the workplace: on the explicitly ethical side, research finds a positive correlation between diverse boards and greater social responsibility (Hafsi), and from a broader perspective, a positive correlation between diversity and an innovative culture (Lambert).

Lastly, the stakeholder analysis of how each group is affected will focus on consumers, employees, and investors. Under our course of action, consumers will be most affected by the end product: an emotionally intelligent AI built by a diverse workforce. In doing so, we can work to eliminate bias within AI and create a foundational technology that supports a growing, globalized consumer ecosystem. Not only will consumers interact with a technology that resonates more with them, but because this technology will be at the forefront of new innovations, it will reduce the invasive nature of bias within newly developing technologies.

In regard to employees, our solution will help build and sustain a diverse workforce that develops emotionally intelligent AI, as well as innovative technological advancements that resonate with our globalized economic ecosystem. As technology companies implement diversity-focused leadership mentoring programs and incremental changes in recruiting diverse talent, there is a reduced risk of a homogeneous workforce that cannot disrupt the current mindset of AI.

Finally, there is the investor stakeholder group. By investing in a diverse workforce as well as an emotionally intelligent AI, investors will not only mitigate the risks of AI but also position themselves as key players in creating foundational technology for the future benefit of a globalized economy. Therefore, from a managerial perspective, our solutions play an important role in developing a strong investment opportunity for the investor stakeholder group.

In conclusion, our three-pronged approach tackles an enormous ethical dilemma that must be confronted to create a bright, innovative, and representative future for the global community. By engaging in education, corporate mentorship, and the development of emotionally intelligent technologies, AI can become the cornerstone of industry, technology, and our globalized economy. While there are many steps to be taken and many obstacles to be faced head on, with a diverse workforce the ethical dilemma of bias in AI can be dealt with.

Works Cited

  • “AITopics.” Brief History. AI Topics, n.d. Web.
  • “Department of Labor—Affirmative Action.” United States Department of Labor. N.p., 29 June 2016. Web.
  • Guffey, Mary Ellen. Business Communication: Process and Product. Cincinnati: South-Western College Pub., 2000. Print.
  • Hafsi, Taïeb, and Gokhan Turgut. “Boardroom Diversity and Its Effect on Social Performance: Conceptualization and Empirical Evidence.” Journal of Business Ethics 112.3 (2013): 463–4
  • Huet, Ellen. “Slack Makes Diversity a Priority While Quadrupling Head Count.” Bloomberg, 08 Dec. 2016. Web.
  • Lambert, Jason. “Cultural Diversity as a Mechanism for Innovation: Workplace Diversity and the Absorptive Capacity Framework.” Journal of Organizational Culture, Communications & Conflict 20.1 (2016): 68–77. Business Source Complete.
  • Madrigal, Alexis C. “IBM’s Watson Memorized the Entire ‘Urban Dictionary,’ Then His Overlords Had to Delete It.” The Atlantic. Atlantic Media Company, 10 Jan. 2013. Web.
  • McCarthy, John. “What Is Artificial Intelligence.” Stanford University Computer Science Department. Stanford University, n.d. Web.
  • Patel, Prachi. “Computer Vision Leader Fei-Fei Li on Why AI Needs Diversity.” IEEE Spectrum: Technology, Engineering, and Science News. N.p., 19 Oct. 2016. Web.
  • Reese, Hope. “Why Microsoft’s ‘Tay’ AI Bot Went Wrong.” TechRepublic. N.p., 01 Apr. Web.
  • Wells, Georgia. “Facebook Blames Lack of Available Talent for Diversity Problem.” The Wall Street Journal. Dow Jones & Company, 14 July 2016. Web.
  • Wiener, Anna. “Why Can’t Silicon Valley Solve Its Diversity Problem?” The New Yorker. The New Yorker, 23 Nov. 2016. Web.
