Experts warn about possible misuse of new AI tool ChatGPT

Published: Jan. 24, 2023 at 7:44 AM HST

(CNN) - A new artificial intelligence tool can write research papers and answer almost any question in seconds.

The powerful new technology, known as ChatGPT, is gaining popularity and has extraordinary potential, but experts warn of a serious risk of misuse.

ChatGPT, short for “Chat Generative Pre-trained Transformer,” is a machine-learning model that can generate human-like text.

It’s been trained on a massive amount of data, allowing it to understand and respond to a wide range of questions and prompts.

ChatGPT has exploded in popularity in recent months. CEOs are now using it to write emails, and it even passed an MBA exam at the University of Pennsylvania’s Wharton School.

Now experts are questioning whether people should be excited about ChatGPT or more fearful of it.

“I think we should have a mixed view,” said Gary Marcus, professor emeritus of psychology and neural science at NYU.

OpenAI, the company behind ChatGPT, says the technology is still in its research phase and can produce inaccurate information.

“Artificial intelligence is sort of like a teenager right now,” Marcus said. “It’s exciting to see the teenager get its footing, but it’s also not there yet and we can’t trust it.”

Microsoft thinks it’s a good bet, even with the risks: the company is investing billions of dollars in OpenAI.

Jack Po, CEO of Ansible Health, had ChatGPT take all three parts of the U.S. Medical Licensing Examination. It passed each one.

“Not only can it answer very complex questions, it can also modulate its answer,” Po said.

Po and his team of 30 doctors have started using the platform to help treat patients with chronic obstructive pulmonary disease (COPD).

“What this technology could really enable, and has already started enabling us, is to suddenly suggest things that we might not be thinking of at all. It will absolutely save lives,” Po said.

Jake Heller is a lawyer who founded the company Casetext, which helps its clients comb through documents using AI like ChatGPT.

“You can have it read police reports. You can have it see if witnesses gave contradictory testimony. You can almost certainly help find information that is pertinent to guilt or innocence,” Heller said.

However, both Po and Heller say that human oversight of ChatGPT is still necessary. Even OpenAI says the platform can produce harmful instructions.

“In law, there absolutely are right and wrong answers. And that’s why ChatGPT alone is not going to be enough to handle some of the most important questions in fields like law,” Heller said.

Then there’s the question of plagiarism.

Public schools in New York City banned ChatGPT on school networks and devices “due to concerns about negative impacts on student learning and concerns regarding the safety and accuracy of content.”

“It’s incredible innovation. At the same time it’s like opening a Pandora’s Box,” said Edward Tian, founder of GPTZero.

Tian, a 22-year-old Princeton student, spent his winter break building the app GPTZero, which he says can detect whether a piece of text was likely written by a human or by ChatGPT.

He says teachers use it to check their students’ papers.

Tian likened it to one AI cross-checking another. He said the tool could help limit the spread of misinformation from ChatGPT by flagging text that appears to be AI-generated.

“So as opposed to misinformation, it’s more of like, it can only spot if something is AI-generated or human-generated,” Tian said.

That appears to be the greatest fear of all: the use of ChatGPT to spread misinformation.

“People who want to manipulate elections and things like that, instead of like writing one thing at a time, you’re going to be able to write thousands of things to give, for example, vaccine denialism more oxygen than it deserves,” Marcus said.

While ChatGPT is a tool designed to help humanity, it could also ultimately hurt it.

Marcus says we’re about 75 years away from AI becoming truly human-like.