Will artificial intelligence programs like ChatGPT make plagiarism so common as to become no big deal? (Photo illustration via Canva)
In 1999, Scott McNealy, CEO of Sun Microsystems, told reporters and technology analysts concerned about internet algorithms that people have “zero privacy anyway. Get over it!”
The comment shocked people. Now, with the emergence of ChatGPT (Generative Pre-trained Transformer) — a free online application that dialogues with users — teachers are in “near panic” over concerns about cheating, specifically plagiarism.
It will take a while for us to get over it. But we will.
When McNealy made his privacy comment, eBay, PayPal and Amazon were in their infancy. Facebook would be founded five years later. Twitter, two years after that.
Google Maps came online in 2005. Street View not only showcased property but also occasionally caught people doing assorted embarrassing things.
In 2007, an attorney complained that a Google van could violate privacy by photographing “you in an embarrassing state of undress, as you close your blinds, for example.” (Google had caught him smoking, which he was hiding from his family.)
The public was shocked about Street View for about a year. Then it wore off. People gave up privacy for the convenience of car directions.
Terms of surrender
In 2010, my Iowa State colleague Daniela Dimitrova and I published a book titled “Vanishing Act: The Erosion of Online Footnotes and the Implications for Scholarship.” We traced the history of convenience from a caveman’s rock to an influencer’s blog.
Communication has four basic features: durability, storage, portability and convenience. An inscribed rock can last for centuries. But you can’t write much on it or easily tote it. Clay tablets, scrolls and books provided more storage and portability.
Then came the internet, the ultimate in convenience. We don’t have to leave our homes. We order in, pay bills, stream content and work in pajamas.
People will give up anything for convenience, risking privacy and identity theft.
This was McNealy’s message more than two decades ago.
At the time, artificial intelligence was almost a half century old and making tremendous strides. Between 1957 and 1974, scientists developed algorithms that would lead, ultimately, to ChatGPT and other bots that now write essays and pass law and business exams.
They even fool developers into believing they are sentient.
Take my word
Prose isn’t dead; humans just won’t be writing much of it in a variety of jobs. Chatbots have infiltrated the writing professions, customer support, programming, media planning and buying, judicial filings, and consulting.
Artificial intelligence operates on theft. Consider the definition of plagiarism: presenting someone else’s work or ideas as your own by incorporating it into your own content without full acknowledgement.
Computer scientists call that “machine learning.”
Chatbots analyze what you ask them, evaluate responses, swipe content by others with similar requests, prompt for more information, scour the web for answers (without citation), and access data on your device if you agreed to the app’s terms of service.
And you’re worrying about plagiarism?
Getting over it
Consumers will interact with chatbots at all hours, without having to wait for retailers and banks to open. People can complain vociferously about inferior products and services without the chatbot losing composure or calling them a Karen or Ken.
School systems will try to ban chatbots, purchasing services to detect cheating. But results will be unreliable as AI content improves and digital natives find workarounds.
Gen Z discovered how to cheat while remote learning during the Covid pandemic. They’re loving ChatGPT.
Eventually, plagiarism will morph from failing grade to reprimand.
The public will tire of the slush pile of mediocre machine prose and will patronize authors with insight into the human condition. Those authors’ copyrighted works will continue to sell.
Infringement will remain on the books. Content owners will decide who, when, how and where original material may be used. If they can document any monetary loss, their attorneys can sue the offending parties.
A chatbot will write the legal brief and file it with the court.
Interviewing the chatbot
To test my ideas about plagiarism and chatbots, I asked ChatGPT to write my column based on preliminary information. Then I asked questions, as a reporter would do, to challenge what the AI bot created. It’s a fascinating exchange between an author and a machine programmed to defend itself against allegations of plagiarism.
The chatbot has already been programmed to defend itself against plagiarism allegations, because school districts are concerned about cheating. Gradually, question after question, I got the answers I was looking for concerning machine learning and plagiarism.
This application is going to be used in schools, business and commerce. Plagiarism remains, at the moment, a serious offense. But when our machines routinely pilfer content from a variety of sources in the name of machine learning, we will eventually allow it for the sake of convenience.
We will follow the trajectory that Scott McNealy prophesied with privacy. And we will get over it. Convenience trumps values, as we have seen repeatedly with technology and social change.
Read the “interview” with the chatbot here.
— Michael Bugeja
Our stories may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. We ask that you edit only for style or to shorten, provide proper attribution and link to our web site. Please see our republishing guidelines for use of photos and graphics.