
Here comes ChatGPT, the OpenAI-designed conversational artificial intelligence program stirring the tech-savvy world, and Johan Bollen is in full proactive mode.
The Luddy School of Informatics, Computing and Engineering professor and his University of Amsterdam colleague Claudi Bockting wrote an article in the scientific journal Nature, “ChatGPT: five priorities for research,” that calls ChatGPT a science game-changer and lists five ways to respond.
The bottom-line message: embrace the opportunity, manage the risk.
“These algorithms can do things with text and images we were not able to do before,” Bollen says. “That can accelerate the rate of innovation and discovery tremendously.”
Or cause major problems.
ChatGPT debuted in November 2022 and hit academia with the force of a punch. Trained on massive amounts of text, it writes and takes tests almost like a human, but it is vulnerable to bias, misinformation and potential academic fraud.
While there is no danger of robots taking over the world, ChatGPT and other AI systems have weapons-of-mass-disruption potential to manipulate people and disrupt society.
Bollen calls it evolution rather than revolution, but whatever the term, caution is required. Even OpenAI Chief Executive Sam Altman, whose company created ChatGPT, has tweeted, “We have lots of work to do on robustness and truthfulness.”
ChatGPT has been compared positively to a calculator, a tool that saves time on doing math calculations, and negatively to a player piano, which plays songs without passion.
A Google programmer used ChatGPT to score 70 percent on a lawyer’s bar exam and 70 percent on the United States Medical Licensing Examination, and to rank in the 40th percentile on the Law School Admission Test. A Twitter user tried it on a mock SAT exam and scored 1,020 out of 1,600, just slightly below the human average according to College Simply.
AI disruption concerns include deepfakes, in which a person in an existing video or image is replaced with someone else’s likeness, like putting Mick Jagger’s face on George Washington as he crosses the Delaware River.
Facebook once set the tone for new technology by urging its engineers to release new systems first and fix problems later. That full-speed-ahead approach has given way to a proceed-with-caution stance for AI company CEOs such as DeepMind’s Demis Hassabis.
Bollen and Bockting’s Nature article pushes the scientific community to be proactive in addressing ChatGPT-generated challenges. That means:
Fact check and verify. Humans are accountable for scientific accuracy.
Make rules for accountability to ensure integrity and transparency.
Invest in systems so everyone has access.
Embrace the benefits. Envision an “intelligent partnership” between people and AI that could do far more than people alone.
Robustly debate ChatGPT’s pros and cons on an international level. Include under-represented groups and communities.
ChatGPT is a game-changer in the academic world of essays and writing assignments. The result is a moral dilemma involving potential academic fraud.
“Plagiarism used to be that you copied text,” Bollen says. “This goes much further than that. The whole notion of authorship has changed.”
Submitting ChatGPT-produced assignments as your own is academic fraud.
“I have failed students for plagiarism,” Bollen says.
The problem lies in proving fraud rather than merely suspecting it.
“If I know the students well and have seen their prose compared to what’s submitted,” Bollen says, “I could suspect there’s no feasible learning curve that connects one to the other, but I can’t penalize students because their work is too good or too eloquent or too well-referenced.”
Algorithms are being designed to detect AI-produced text (OpenAI is working on a system called Classifier), which will lead to new algorithms designed to avoid detection.
Bollen isn’t impressed.
“The solution is not an arms race. That’s futile. The best thing we can do for our students is to focus on the things that really matter, things that cannot be automated -- originality, creativity, insight. That’s what we should train our students for.”
Across the country, some school districts have banned ChatGPT. Others embrace it.
One thing is clear -- it’s not going away. Students will use it.
To adjust, Bollen seeks to reduce the risk of fraud. He wants students to be transparent when they use it. He has restructured assignments and changed the way he gives tests: assignments now push students toward a deeper learning experience, and they must show they can read and think about the subject matter.
“Think more deeply about the problems you’re trying to solve and why you’re trying to solve them,” Bollen says. “Why do they matter? What’s interesting and what’s not?”
As for research, Bollen says this technology will help solve problems researchers have dealt with for a decade or more. His focus is on mental health.
In computational social science, researchers use large amounts of data to study collective and individual behavior.
Bollen says research indicates the language and writing of people who suffer from mental disorders are different from those of people who do not. The goal is to develop algorithms that can swiftly make inferences about people’s mental state from what they write online.
But language, with its nuances of sarcasm, irony and vagueness, is currently too subtle for algorithms to reliably detect mental disorders from it.
But not for long.
“The hope is that it can lead to algorithms that can understand and evaluate language at levels that were not imaginable,” Bollen says.
That could mean earlier detection and better treatment and prevention options.
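To make the approach concrete, here is a minimal sketch of the kind of text-based inference pipeline computational social scientists describe, assuming a hypothetical handful of labeled posts and an off-the-shelf TF-IDF plus logistic-regression classifier from scikit-learn. None of the data or modeling choices below come from Bollen’s actual work.

```python
# Illustrative sketch only: a toy text classifier of the kind used in
# computational social science to study language differences. The example
# posts, labels, and model choice are hypothetical assumptions, not the
# actual data or methods from Bollen's research.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short posts paired with labels from a
# screening instrument (1 = flagged for follow-up, 0 = not flagged).
posts = [
    "I can't sleep and everything feels pointless lately",
    "Had a great hike with friends this weekend, feeling recharged",
    "No energy to get out of bed again today",
    "Excited to start the new project at work tomorrow",
]
labels = [1, 0, 1, 0]

# TF-IDF features feed a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

# Score a new, unseen post; the output is a probability, not a diagnosis.
new_post = ["Nothing I do seems to matter anymore"]
print(model.predict_proba(new_post))
```

In practice, as Bollen notes, sarcasm, irony and vagueness still defeat classifiers this simple, and real systems would need far larger, ethically collected corpora and clinical validation before they could support earlier detection or treatment.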
As for academics, where could this lead?
Bollen expects that within two years, AI systems will be generally available that look at what students have written, ask them questions and help them complete the work in their own style. Students won’t have to worry about grammar or how to structure a paragraph; that will be automated. Bollen says this will free them to focus on what really matters.
“There’s no end game,” he says. “All of this gravitates to the higher-level skills we’re trying to develop in our students.”