
3 Ways to Address AI’s More Frightening Implications

Making AI work for you, not against you.

BY Jeff Barrett - 05 Dec 2018


AI has vast potential. The technology is being touted as a solution to some of humanity's most vexing problems, and rightly so. In China, where there aren't enough radiologists to review the 1.4 billion annual CT scans for lung cancer, algorithms can accurately and efficiently diagnose patients.

Around the world, the next generation of automobiles will be driven quite literally by AI, and removing angry, distracted, or drunk humans from behind the wheel will likely make the road a far safer place. Still, for every positive AI implementation, there's a downside just waiting to be uncovered.

One of the principal concerns about AI stems from its potential to spread misinformation. Social media platforms such as Facebook and Twitter are already grappling with this problem, and they've taken to shutting down bots designed to spread hate speech and inflame public opinion.

According to Greg McBeth, head of revenue at Node.io, what's currently an annoyance will only continue to escalate: "I believe there's potential for an AI-driven misinformation crisis in our lifetime. AI can already convincingly manipulate images and video," McBeth noted, citing as an example instances in which some actresses' faces were superimposed onto inappropriate photos. Unfortunately, fake news is just the beginning.

In addition to faking images and videos, programmers with malicious intentions will use AI to commit other crimes, from forging financial documents that affect credit to fabricating phony evidence that leads to wrongful convictions. To combat these efforts, we need to take the following steps.

1. Start the conversation now.

Advancements in AI are accelerating, and so will the technology's use for nefarious purposes. While news coverage tends to emphasize the revolutionary possibilities of AI, we must not shy away from the potential consequences. Earlier this year, Facebook's Mark Zuckerberg warned that it could be a decade before AI is able to recognize the nuances that would allow it to flag hate speech or false information.

AI's more nefarious uses could greatly outpace AI-based countermeasures. As a business leader, you can help guide this conversation. For instance, you could set up a roundtable discussion at an industry conference to share information and increase awareness of how AI may be used to spread misinformation -- and where the tech comes up short in detecting it.

2. Create safeguards for defense.

Science fiction author Isaac Asimov thought up his Three Laws of Robotics with android servants in mind. We still don't have robots doing our household chores (not counting vacuums), but the laws have withstood the test of time.

To prevent AI from causing harm, the business community needs similar, universally accepted safeguards that apply to AI development. For instance, if you intend to use robots, you may be tempted to simplify the interface required to control them in order to make them more user-friendly for your employees. But be careful to balance those efforts with attention to cybersecurity. It's imperative to address system vulnerabilities that could make it easier for hackers to gain access to your robot and network.

3. Arm citizens with AI education.

To mitigate the damage caused by misinformation, the business community needs to educate the public about what AI is capable of, both good and bad. Include educational resources on this subject on your blog, website, email newsletter, and social media accounts. Inform your customers about your use of AI and what safeguards or policies you have in place to prevent cyberattacks.

When people are aware of ways AI can be used maliciously, they're more likely to recognize the red flags. For instance, if someone knows how to recognize signs that a Twitter account is potentially a political bot, they may think twice before retweeting something it shared.

On the other hand, when people are ignorant of AI's misuse, they won't hesitate to propagate misinformation. This needs to be a society-wide effort, but you can start by working with your team and your customers so they know what AI can do.

AI is responsible for exciting developments, but it's a powerful tool that can be used to do harm as well. Ultimately, it's impossible to prevent bad actors from developing AI for their own ill-intentioned purposes. But by taking these steps, business leaders can help minimize the impact of AI's downsides.
