Three AI Myths Busted
While the impact of AI and automation is profound, it’s not all doom and gloom
The multi-tentacled encroachment of artificial intelligence (AI) into the global economy of the future continues. Once the stuff of science fiction, AI has become an indispensable part of technological innovation in science, medicine, finance, business, and even warfare.
AI has the potential to contribute around $15.7 trillion to the global economy by 2030, according to a study by PricewaterhouseCoopers, provided smart strategic investments in AI technologies are made first.
Singapore, for one, is ramping up investments and initiatives in AI and data science. In May 2017, the National Research Foundation (NRF) Singapore announced a S$150-million ($113 million) investment over five years in AI.SG, a national program in artificial intelligence to boost the country’s AI capabilities.
What are these vast sums in AI research intended to achieve? It may help first to describe what AI is not intended to achieve, and to dispel three commonly held myths.
Myth 1: AI will soon take over all human jobs
Last year, a team of researchers from Yale University and Oxford’s Future of Humanity Institute published a study suggesting that AI could perform as well as or better than humans across a wide range of tasks by 2060. The findings are based on a poll of 352 industry leaders and academics conducted in May and June 2016.
Experts suggested that AI would surpass humans in a number of jobs, including driving trucks (2027), translating languages (2024), performing surgeries (2053), and even writing a New York Times bestseller (2049).
If truck driving appears to be the more obviously vulnerable position, with self-driving lorries already being tested in Japan by Toyota, among other auto manufacturers, then language translation, medicine, and writing books would appear more secure from AI’s encroachment. Yes and no. AI-assisted translation already handles simple tasks such as rendering Japanese text into English, and it isn’t hard to imagine advances over the next decade enabling a more seamless translation of handbooks and manuals between languages.
Medicine is trickier. Singapore start-up KroniKare is already using AI to help nurses diagnose patients’ wounds from smartphone photos. If the present trajectory of AI-driven medical innovation holds, it is not hard to envisage AI advancing from a diagnostic tool to a more active partner in medical treatment.
Does all this mean human jobseekers should start to panic now?
“[W]e need to think not just about jobs being taken away but also broaden our horizons and think about the skills and talents that people already have, and how they can be applied to this world,” entrepreneur and venture investor Christopher Schroeder said at a panel discussion last year.
For x.ai founder and CEO Dennis Mortensen, AI is not as smart as we think. Writing in Quartz, he argues that AI is still not good at many of the things associated with human jobs.
“AI isn’t very good at jobs that require creativity, empathy, critical thinking, leadership, artistic expression, and a whole host of other qualities we traditionally think of as ‘human,’” he says.
Automation, explains Mortensen, will release humans from the need to perform specific tasks — the non-creative, non-personal ones — so we can be “better at being human.”
“I believe there will be plenty of challenging work for humans because of AI, not in spite of it,” writes Mortensen.
Myth 2: AI is blind to biases
As objective as computers seem, AI is not completely free of bias. A system designed to learn is only as good as the data it receives: its output reflects whatever that data contains.
“The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased,” John Giannandrea, Google’s artificial intelligence chief, says at a Google conference on the relationship between humans and AI systems.
According to an article from MIT Technology Review, where Giannandrea is quoted, “the problem of bias in machine learning is likely to become more significant as the technology spreads to areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.”
As such, Giannandrea suggests, “It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems.” At Google, for instance, academic researchers work with engineers in an effort to promote fairness and unbiased data.
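The mechanism Giannandrea describes can be seen even in a toy model. The sketch below (with entirely hypothetical hiring data) shows a naive “learner” that simply memorizes the most common historical outcome per group; because the historical records are skewed, its predictions faithfully reproduce that skew:

```python
from collections import Counter, defaultdict

# Hypothetical historical hiring records: (group, hired?) pairs.
# The data is skewed: group "A" was hired far more often than
# group "B", for reasons unrelated to merit.
records = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

# A naive "learner" that memorizes the majority outcome per group.
by_group = defaultdict(Counter)
for group, hired in records:
    by_group[group][hired] += 1

def predict(group):
    """Return the most common historical outcome for this group."""
    return by_group[group].most_common(1)[0][0]

print(predict("A"))  # True  -- group A candidates get recommended
print(predict("B"))  # False -- group B candidates get rejected
```

No malice is coded anywhere in this model; the bias lives entirely in the training data, which is why Giannandrea’s call for transparency about training data matters.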
Myth 3: AI closes consumers off from new experiences
In an age when Netflix and Spotify effectively decide what we want to watch and listen to based on data mining our past consumption patterns, AI is replacing human curation with data-backed judgment.
At Zalora, a regional e-commerce player, AI is being harnessed to give a more personalized customer experience by suggesting or showing items a specific customer is likely to go for.
“We have hundreds of thousands of products, so [each person’s] experience is going to be very different…I think that’s something we need to continue to invest in terms of making sure that when you come on to the site, you get what’s most relevant to you, and being able to filter out the stuff that you don’t want to see,” Zalora group CEO Parker Gundersen says in an interview with Inc. Southeast Asia.
Is AI, therefore, blocking our path to new discovery? As with any technology, AI will still require a human touch. While it has the potential to narrow our field of vision, that does not mean that humans are incapable of seeking new experiences on their own. Humans love the familiar, but they will always venture out to discover new and interesting ideas, often as a direct consequence of being bombarded with the familiar.
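Recommendation engines can also be built to leave that door open. The sketch below (a made-up catalog and function, not Zalora’s actual system) filters by a customer’s familiar categories while deliberately reserving a slot for something new:

```python
import random

# Hypothetical catalog: item -> category.
catalog = {
    "sneakers": "shoes", "loafers": "shoes", "sandals": "shoes",
    "denim jacket": "outerwear", "parka": "outerwear",
    "silk scarf": "accessories", "leather belt": "accessories",
}

def recommend(purchase_history, n_familiar=2, n_explore=1, seed=0):
    """Fill most slots from familiar categories, but always
    reserve n_explore slots for categories the customer has
    never bought from."""
    seen = {catalog[item] for item in purchase_history}
    familiar = [i for i, c in catalog.items()
                if c in seen and i not in purchase_history]
    novel = [i for i, c in catalog.items() if c not in seen]
    rng = random.Random(seed)
    rng.shuffle(familiar)
    rng.shuffle(novel)
    return familiar[:n_familiar] + novel[:n_explore]

print(recommend(["sneakers"]))
```

The exploration slot is a design choice: relevance keeps shoppers engaged, while the reserved slot is one simple way to keep the familiar from crowding out discovery.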
For more stories on how deep technologies impact the way we live, work, and think, check out the SGInnovate blog.