Will artificial intelligence replace human authors in the near future?


About a year ago, the British newspaper The Guardian published an article titled A robot wrote this entire article. Are you scared yet, human?, written by an artificial intelligence (AI) system called GPT-3 (Generative Pre-trained Transformer 3). GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. It was given a brief introduction and tasked with writing an editorial of around 500 words in plain language, focusing on why humans have nothing to fear from AI. In response, it produced eight different essays. The Guardian picked the best parts of each and published the edited result. GPT-3 even quoted Mahatma Gandhi in its article.

A rapid revolution in AI and natural language processing (NLP) is underway. While the world’s very first novel written by AI was released in Russia in 2008, the first full-length Korean novel written by an AI, named “Birampung”, was released in August. Birampung refers to a violent storm that strikes at the beginning and end of the creation of the universe. The 560-page novel was “made” by novelist and mathematician Kim Tae-yon, who was reluctant to share details of the technology involved. But 1,000 books were loaded into Birampung’s system, and it was equipped with an advanced deep-learning algorithm capable of autonomous learning. Like a film director, Kim chose the script, the background and the characters, but the writing and composition were done by Birampung. The novel, whose title translates into English as The World From Now On, took seven years to complete and consists of five stories in which the protagonists – a disabled amateur mathematician, a maths teacher and entrepreneur, a psychiatrist, an astrophysicist and a Buddhist monk – are drawn to each other in their individual quests to understand the meaning of human existence.

Is there an existential threat to writers now? Consider GPT-3, the third-generation language-prediction model in the series created by OpenAI, an artificial intelligence research company co-founded by, among others, Tesla billionaire Elon Musk. What exactly is going on inside GPT-3? An MIT Technology Review article put it this way: “What it seems to be good at is synthesising text it has found elsewhere on the internet, making it a kind of vast, eclectic scrapbook created from millions and millions of snippets of text that it then glues together in weird and wonderful ways on demand.”

GPT-3 can also produce pastiches of particular writers. For example, when given a title, an author’s name and the initial word “It”, the AI produced a short story titled The Importance of Being on Twitter, written in the style of Jerome K Jerome. It even wrote a reasonably informative article about GPT-3 itself.

“Playing with GPT-3 is like seeing the future” is what some experts think. However, AIs have many shortcomings. Their language is not always polite. And many people have noticed a lack of depth, the text reading more like cut-and-paste work. Some experts feel that the GPT-3 program merely matches words and phrases based on statistical correlations between those in its database. In a March 2021 article published in the journal Nature, Matthew Hutson discusses the rise and risks of language-generating AI. Hutson argues that such AI can write like humans, but it still lacks the common sense to understand how the world works, both physically and socially. For example, when asked, “How many rainbows does it take to jump from Hawaii to seventeen?”, GPT-3 replied, “It takes two rainbows to jump from Hawaii to seventeen.”
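What “matching words based on statistical correlations” means can be seen at the smallest possible scale in the minimal Python sketch below. It is only a toy illustration, not GPT-3’s actual mechanism, which is a deep neural network with billions of parameters; but the underlying autoregressive idea of predicting each next word from what came before is the same.

```python
# Toy illustration: generate text purely from word-to-word statistical
# correlations observed in a tiny corpus (a crude stand-in for the vastly
# larger and more sophisticated statistics learned by GPT-3).
import random
from collections import defaultdict

corpus = (
    "the robot wrote an article and the human edited the article "
    "and the human wrote a letter about the robot"
).split()

# Count which words follow which (bigram statistics).
next_words = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current].append(nxt)

def generate(start, length=10):
    """Autoregressively pick each next word from the observed statistics."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Even this toy model “writes” fluent-looking fragments without any understanding of what the words mean, which is precisely the critics’ point about depth.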

In The Guardian piece, GPT-3 wrote, “I am only a set of code, governed by lines upon lines of code that encompass my mission statement.” GPT-3 had been trained on about 200 billion words, at an estimated cost of tens of millions of dollars. AI, therefore, still needs a human editor to link its writing to reality. In fact, within days of publishing the editorial written by GPT-3, The Guardian ran a follow-up piece titled A human wrote this article. You shouldn’t be scared of GPT-3. Its author, Albert Fox Cahn, argued that while GPT-3 is “quite impressive … it is useless without human intervention and modifications”. “GPT-3 is just the latest example of computer-assisted authorship, the process by which human authors use technology to improve the writing process,” wrote Cahn. American poet-programmer Allison Parrish also noted: “Attributing (The Guardian article) to AI is a bit like attributing the pyramids to the pharaoh. The pharaoh didn’t build them. The workers did.”

GPT-3 is an artificial neural network with over 175 billion parameters, which, by its own account in The Guardian piece, uses only 0.12% of its cognitive capacity. It is certainly a big step up from GPT-2, which had 1.5 billion parameters. When GPT-4 or GPT-5 arrives in the future, should human writers really be afraid? Will AI ever live up to a JK Rowling or a Kazuo Ishiguro, or report from Afghanistan? In his Nature article, Hutson wrote, “It’s possible that a larger model would do better, with more parameters, more training data, more time to learn. But it will become more and more expensive and cannot continue indefinitely.” Another limitation is the opaque complexity of language models. Still, would a GPT-n or an equivalent AI be able to produce a Tagore song or a Shakespeare play in the near future? A new technological angst will invariably grow around that question.

PS: This article was written entirely by a human being, not an AI.

Atanu Biswas
Professor of Statistics, Indian Statistical Institute, Kolkata
([email protected])

