16 Uncomfortable Questions Everyone Needs to Ask About Artificial Intelligence
When building AI, we need to consider our own motivations and biases, and the social implications of the tools we create.
Just nine giant tech companies in the U.S. and China are behind the vast majority of advancements in artificial intelligence worldwide. In her new book, The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity (PublicAffairs, March 5), Amy Webb envisions three possible futures, ranging from optimistic to apocalyptic, that could result from the actions we take--or don't take--to control the development of AI and shape its global impact. In this excerpt, she puts forth a series of tough ethical questions that the humans building AI systems should use to guide their work.
The rules--the algorithms--by which every culture, society, and nation lives, and has ever lived, have always been created by just a few people. Democracy, communism, socialism, religion, veganism, nativism, colonialism--these are constructs we've developed throughout history to help guide our decisions. Even in the best cases, they aren't future-proof. Technological, social, and economic forces always intervene and cause us to adapt.
The Ten Commandments make up an algorithm intended to create a better society for humans alive roughly 3,000 years ago. One of the commandments is to take a full day of rest each week and not to do any work at all that day. In modern times, most people don't work the exact same days or hours from week to week, so it would be impossible not to break the rule. As a result, people who follow the Ten Commandments as a guiding principle are flexible in their interpretation, given the realities of longer workdays, soccer practice, and email. Adapting is fine--it works really well for us, and for our societies, allowing us to stay on track. Agreeing on a basic set of guidelines allows us to optimize for ourselves.
There is no way to create a set of commandments for AI. We couldn't write out all of the rules needed to correctly optimize for humanity, because while thinking machines may be fast and powerful, they lack flexibility. There is no easy way to simulate exceptions, or to try to think through every single contingency in advance. Whatever rules might get written, there would always be some future circumstance in which people would want to interpret the rules differently, ignore them completely, or amend them to manage the unforeseen.
Knowing that we cannot possibly write a set of strict commandments to follow, should we, instead, focus our attention on the humans building the systems? These people--AI's tribes--should be asking themselves uncomfortable questions, beginning with:
- What is our motivation for AI? Is it aligned with the best long-term interests of humanity?
- What are our own biases? What ideas, experiences, and values have we failed to include in our tribe? Who have we overlooked?
- Have we included people unlike ourselves for the purpose of making the future of AI better--or have we simply included diversity on our team to meet certain quotas?
- How can we ensure that our behavior is inclusive?
- How are the technological, economic, and social implications of AI understood by those involved in its creation?
- What fundamental rights should we have to interrogate the data sets, algorithms, and processes being used to make decisions on our behalf?
- Who gets to define the value of human life? Against what is that value being weighed?
- When and why do those in AI's tribes feel that it's their responsibility to address the social implications of AI?
- Do the leaders of our organization and of our AI tribes reflect many different kinds of people?
- What role do those commercializing AI play in addressing the social implications of AI?
- Should we continue to compare AI to human thinking, or is it better for us to categorize it as something different?
- Is it OK to build AI that recognizes and responds to human emotion?
- Is it OK to make AI systems capable of mimicking human emotion, especially if they're learning from us in real time?
- At what point are we all OK with AI evolving without humans directly in the loop?
- Under what circumstances could an AI simulate and experience common human emotions? What about pain, loss, and loneliness? Are we OK causing that suffering?
- Are we developing AI to seek a deeper understanding of ourselves? Can we use AI to help humanity live a more examined life?
There are nine big tech companies--six American and three Chinese--that are overwhelmingly responsible for the future of artificial intelligence. In the U.S., they are Google, Microsoft, Amazon, Facebook, IBM, and Apple (the "G-MAFIA"). In China, it's the BAT: Baidu, Alibaba, and Tencent.
The G-MAFIA has started to address the problem of guiding principles through various research and study groups. Within Microsoft is a team called FATE--for Fairness, Accountability, Transparency, and Ethics in AI. In the wake of the Cambridge Analytica scandal, Facebook launched an ethics team that was developing software to make sure that its AI systems avoided bias. (Notably, Facebook did not go so far as to create an ethics board focused on AI.) DeepMind created an ethics and society team. IBM publishes regularly about ethics and AI. In the wake of a scandal at Baidu--the search engine prioritized misleading medical claims from a military-run hospital, where a treatment resulted in the death of a 21-year-old student--Baidu CEO Robin Li admitted that employees had made compromises for the sake of Baidu's earnings growth and promised to focus on ethics in the future.
The Big Nine produces ethics studies and white papers, convenes experts to discuss ethics, and hosts panels about ethics--but that effort is not intertwined enough with the day-to-day operations of the various teams working on AI.
The Big Nine's AI systems are increasingly accessing our real-world data to build products that show commercial value. Development cycles are quickening to keep pace with investors' expectations. We've been willing--if unwitting--participants in a future that's being created hastily and without first answering all those questions. As AI systems advance and more of everyday life gets automated, we have less and less control over the decisions being made about and for us.
Amy Webb will appear at the Inc. Founders House in Austin on March 11.