TECHNOLOGY

This A.I. Bot Can Convincingly ‘Write’ Entire Articles. It’s So Dangerously Good, the Creators Are Scared to Release It

Could this technology intended for good end up being bad? That’s what its creators are worried about.

BY Betsy Mikel - 24 Feb 2019

PHOTO CREDIT: Getty Images

You'd think you'd be able to tell the difference between a real news article written by a human and a made-up one written by an algorithm.

You'd think.

OpenAI, a technology nonprofit that Elon Musk co-founded, has found itself in an ethical conundrum. The research company aims to build artificial intelligence tools that can be used for good, and it usually shares its research and code widely for anyone to use (hence the "open" in its name).

Technology that's so good, OpenAI won't release it

OpenAI has been experimenting with an A.I. text generator it built called GPT-2. But it's not sharing this one with the public as it has done with previous projects.

Fearing misuse, OpenAI is keeping GPT-2 locked down for further research. The public will not be able to access the code.

"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," the company announced last week.

The artificial intelligence powers of GPT-2

GPT-2 "studies" a single line of text to learn the patterns of human language. It can then generate full paragraphs of text and mimic the writing style. It can even write full articles.

OpenAI quickly discovered a big problem with GPT-2. The algorithm-generated texts were good. So good that you couldn't tell a robot wrote them. It was producing paragraphs of text that were eerily human.

Fake news that looks (and reads) shockingly real

The Guardian's Alex Hern fed GPT-2 a few sentences about Brexit. It spat out a full-length artificial article, complete with fake quotes attributed to real people.

Hern points out that other text generators have obvious "tells" that signal their texts were not written by humans. GPT-2 shares none of those quirks. "When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject."

The fake news implications are obvious. This is the main reason OpenAI decided not to release its code: it fears the technology is too dangerous.

Malicious users could outsource the writing of any misinformation under the sun to GPT-2. All they'd need to get started is a sample line of text, and out would come a plausible-sounding article, complete with sources and quotations that sound legit. And given the democratization of web design, it's easy to make any website "look" official.

Elon Musk wants you to know he's not to blame

When OpenAI announced it wouldn't be releasing GPT-2, Musk tweeted to clarify his relationship with the company.

Even though he was one of its co-founders, Musk said he parted ways with the nonprofit last year. He needed to focus on Tesla and SpaceX, didn't agree with everything OpenAI wanted to do, and the two organizations were competing for talent.

 
