
In an AI world we need to teach students how to work with robot writers



Lucinda McKnight, Deakin University

Robots are writing more of what we read on the internet. And artificial intelligence (AI) writing tools are becoming freely available for anyone, including students, to use.

In a period of rapid change, there are enormous ethical implications for post-human authorship — in which humans and machines collaborate. The study of AI ethics needs to be central to education as we increasingly use machine-generated content to communicate with others.

Robots can write, too

AI robot writers, such as GPT-3 (Generative Pre-trained Transformer), take seconds to create text that seems like it was written by humans. In September 2020, GPT-3 wrote an entire essay in The Guardian to convince people not to fear artificial intelligence.

AI does this through what’s called natural language processing and generation. This involves converting human language to computer commands and vice versa. To do this, machine algorithms study millions of text samples, words, sentences and paragraphs humans use to gain a general understanding of the context of human language. Then machines use that knowledge to put together text.
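For readers who want to see this in action, GPT-3 itself sits behind a paid interface, but the same prompt-then-generate loop can be sketched in a few lines using the freely available GPT-2 model via the Hugging Face transformers library (the prompt and settings here are purely illustrative):

```python
# A minimal sketch of prompt-based text generation, using the freely
# available GPT-2 model (GPT-3 itself is only accessible via OpenAI's API).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "I am experienced with personal training, athletic performance,"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```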

As recently as 2019, this kind of technology seemed a long way off. But today, it is readily available.

For example, the free Zyro content generator provides text for websites. I can select the categories “Health and Living” and “Personal Trainer” (I happen to be one, as well as an academic). In two seconds, the following text appears:

I am experienced with personal training, athletic performance, and nutrition. This includes working with children, pregnant women, older adults, people with disabilities, athletes, and anyone looking to achieve a competitive edge, improve their overall health, and overall performance. I also have an interest in alternative health, including wellness coaching and nutrition counselling. In my spare time, I enjoy spending time with my family, the outdoors, and writing.

This sums me up (despite the program knowing nothing about me) and would be fine for my website, if I had one. I have the option to generate more responses, and in two seconds, the program offers:

I am experienced in working with adults and children to improve their overall physical and emotional well-being. I have a strong desire to help you reach your goal […] I love helping others achieve their goals. I love working with others to achieve their goals. I am a mother of two healthy children and a wife to two incredible people.

While this is repetitive and Zyro does not notice the apparent error in the last sentence, these issues would be easy to correct. Text, even for niche purposes, can now be generated in a few clicks.

There are other digital tools such as paraphrasers and rewriters that can generate up to 1,000 articles from a single seed article, each of them substantially unique. Quillbot and WordAI, for instance, can rapidly rewrite text and make it difficult to detect plagiarism. WordAI boasts “unlimited human quality content at your fingertips”.

Questions for schools and universities

So what does this mean for education, writing, and society?

Of course, there’s the issue of cheating on essays and other assignments. School and university leaders need to have difficult conversations about what constitutes “authorship” and “editorship” in the post-human age. We are all (already) writing with machines, even just via spelling and grammar checkers.

Tools such as Turnitin — originally developed for detecting plagiarism — are already using more sophisticated means of determining who wrote a text by recognising a human author’s unique “fingerprint”. Part of this involves electronically checking a submitted piece of work against a student’s previous work.
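Turnitin's actual method is proprietary, but the general idea of a stylistic "fingerprint" can be illustrated with a toy comparison of function-word frequencies. Everything below, from the word list to the sample texts, is invented for illustration, not a real detection algorithm:

```python
# Toy stylometric comparison: cosine similarity between function-word
# frequency profiles of a new submission and a student's earlier work.
# Not Turnitin's actual algorithm, just an illustration of the idea.
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "it"]

def profile(text):
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / max(len(words), 1) for w in FUNCTION_WORDS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms if norms else 0.0

previous_essay = "the results of the study suggest that the method is sound"
new_submission = "it is clear that the data in the study support the claim"

score = cosine(profile(previous_essay), profile(new_submission))
print(f"stylistic similarity: {score:.2f}")
```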

Many student writers are already using AI writing tools. Perhaps, rather than banning or seeking to expose machine collaboration, it should be welcomed as “co-creativity”. Learning to write with machines is an important aspect of the workplace “writing” students will be doing in the future.




Read more:
OK computer: to prevent students cheating with AI text-generators, we should bring them into the classroom


AI writers work lightning fast. They can write in multiple languages and can provide images, create metadata, headlines, landing pages, Instagram ads, content ideas, expansions of bullet points and search-engine-optimised text, all in seconds. Students, as writers for digital platforms and audiences, need to learn to exploit these machine capabilities.

Perhaps assessment should focus more on students’ capacities to use these tools skilfully instead of, or at least in addition to, pursuing “pure” human writing.

But is it fair?

Yet the question of fairness remains. Students who can access better AI writers (more “natural”, with more features) will be able to produce and edit better text.

Better AI writers are more expensive, available via monthly plans or high one-off payments that wealthy families can afford. This will exacerbate inequality in schooling, unless schools themselves provide excellent AI writers to all.

We will need protocols for who gets credit for a piece of writing and who gets cited. We will need to know who is legally liable for content and for any harm it may cause. And we will need transparent systems for identifying, verifying and quantifying human content.




Read more:
When does getting help on an assignment turn into cheating?


And most importantly of all, we need to ask whether the use of AI writing tools is fair to all students.

For those who are new to the notion of AI writing, it is worthwhile playing and experimenting with the free tools available online, to better understand what "creation" means in our robot future.

Lucinda McKnight, Senior Lecturer in Pedagogy and Curriculum, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Can robots write? Machine learning produces dazzling results, but some assembly is still required




Alexandra Louise Uitdenbogerd, RMIT University

You might have seen a recent article from The Guardian written by “a robot”. Here’s a sample:

I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

Read the whole thing and you may be astonished at how coherent and stylistically consistent it is. The software used to produce it is called a "generative model", and such models have come a long way in the past year or two.

But exactly how was the article created? And is it really true that software “wrote this entire article”?

How machines learn to write

The text was generated using the latest neural network model for language, called GPT-3, released by the American artificial intelligence research company OpenAI. (GPT stands for Generative Pre-trained Transformer.)

OpenAI’s previous model, GPT-2, made waves last year. It produced a fairly plausible article about the discovery of a herd of unicorns, and the researchers initially withheld the release of the underlying code for fear it would be abused.

But let’s step back and look at what text generation software actually does.

Machine learning approaches fall into three main categories: heuristic models, statistical models, and models inspired by biology (such as neural networks and evolutionary algorithms).

Heuristic approaches are based on “rules of thumb”. For example, we learn rules about how to conjugate verbs: I run, you run, he runs, and so on. These approaches aren’t used much nowadays because they are inflexible.




Read more:
From Twitterbots to VR: 10 of the best examples of digital literature


Writing by numbers

Statistical approaches were the state of the art for language-related tasks for many years. At the most basic level, they involve counting words and guessing what comes next.

As a simple exercise, you could generate text by randomly selecting words based on how often they normally occur. About 7% of your words would be “the” – it’s the most common word in English. But if you did it without considering context, you might get nonsense like “the the is night aware”.
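A few lines of Python make this concrete. Here a toy corpus stands in for the millions of texts a real system would count:

```python
# Sampling words by raw frequency alone, with no context: the output
# is grammatical nonsense, as described above.
import random
from collections import Counter

corpus = "the cat sat on the mat and the dog lay on the rug".split()
counts = Counter(corpus)
words, weights = zip(*counts.items())

print(" ".join(random.choices(words, weights=weights, k=8)))
# e.g. "the on rug the mat the and lay"
```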

More sophisticated approaches use “bigrams”, which are pairs of consecutive words, and “trigrams”, which are three-word sequences. This allows a bit of context and lets the current piece of text inform the next. For example, if you have the words “out of”, the next guessed word might be “time”.
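A bigram generator needs only slightly more work: record which words follow each word in the training text, then sample from those continuations (again with a toy corpus):

```python
# A toy bigram model: each word is chosen from the words that
# actually followed the previous word in the training text.
import random
from collections import defaultdict

corpus = "we are out of time we are out of milk we ran out of luck".split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

word = "out"
output = [word]
for _ in range(5):
    options = followers.get(word)
    if not options:  # no continuation seen for this word
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))  # e.g. "out of time we are out"
```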

This happens with the auto-complete and auto-suggest features when we write text messages or emails. Based on what we have just typed, what we tend to type and a pre-trained background model, the system predicts what’s next.

While bigram- and trigram-based statistical models can produce good results in simple situations, the best recent models go to another level of sophistication: deep learning neural networks.

Imitating the brain

Neural networks work a bit like tiny brains made of several layers of virtual neurons.

A neuron receives some input and may or may not “fire” (produce an output) based on that input. The output feeds into neurons in the next layer, cascading through the network.
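In code, a single virtual neuron is just a weighted sum passed through an activation function. A minimal sketch, with arbitrary weights:

```python
# One virtual neuron: weight the inputs, sum them, and "fire"
# through an activation function (here a sigmoid).
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # output between 0 and 1

output = neuron([0.5, 0.2, 0.9], weights=[0.4, -0.6, 0.8], bias=0.1)
print(output)  # this output would feed the neurons in the next layer
```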

The first artificial neuron was proposed in 1943 by US researchers Warren McCulloch and Walter Pitts, but neural networks have only become useful for complex problems like generating text in the past five years.

To use neural networks for text, you put words into a kind of numbered index. You can use the number to represent a word, so for example 23,342 might represent “time”.

Neural networks do a series of calculations to go from sequences of numbers at the input layer, through the interconnected “hidden layers” inside, to the output layer. The output might be numbers representing the odds for each word in the index to be the next word of the text.

In our “out of” example, number 23,342, representing “time”, would probably have much better odds than the number representing “do”.
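Numerically, that last step is a "softmax": the network's raw output scores are converted into odds across the whole vocabulary. A toy version, with invented indices and scores (a real model scores tens of thousands of words):

```python
# Turning raw network output scores into next-word probabilities
# with a softmax. Indices and scores are made up for illustration.
import math

vocab = {"time": 23342, "do": 17105, "milk": 30871}   # word -> index
scores = {"time": 4.2, "do": 0.3, "milk": 1.1}        # raw network outputs

total = sum(math.exp(s) for s in scores.values())
probs = {w: math.exp(s) / total for w, s in scores.items()}

for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{vocab[word]:>6} {word:>5} {p:.2f}")   # "time" wins, as expected
```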




Read more:
Friday essay: a real life experiment illuminates the future of books and reading


What’s so special about GPT-3?

GPT-3 is the latest and best of the text modelling systems, and it’s huge. The authors say it has 175 billion parameters, which makes it at least ten times larger than the previous biggest model. The neural network has 96 layers and, instead of mere trigrams, it keeps track of sequences of 2,048 words.

The most expensive and time-consuming part of making a model like this is training it – updating the weights on the connections between neurons and layers. Training GPT-3 would have used about 262 megawatt-hours of energy, or enough to run my house for 35 years.
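The household comparison is simple arithmetic:

```python
# Sanity check on the energy figure: 262 MWh spread over 35 years.
print(262 / 35)  # roughly 7.5 MWh per household per year
```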

GPT-3 can be applied to multiple tasks, such as machine translation, auto-completion, answering general questions and writing articles. And its articles are now hard to spot: people asked to pick GPT-3's output from human writing guess correctly only about half the time, which is barely better than chance.

The robot writer

But back to how the article in The Guardian was created. GPT-3 needs a prompt of some kind to start it off. The Guardian’s staff gave the model instructions and some opening sentences.

This was done eight times, generating eight different articles. The Guardian’s editors then combined pieces from the eight generated articles, and “cut lines and paragraphs, and rearranged the order of them in some places”, saying “editing GPT-3’s op-ed was no different to editing a human op-ed”.

This sounds about right to me, based on my own experience with text-generating software. Earlier this year, my colleagues and I used GPT-2 to write the lyrics for a song we entered in the AI Song Contest, a kind of artificial intelligence Eurovision.

AI song Beautiful the World, by Uncanny Valley.

We fine-tuned the GPT-2 model using lyrics from Eurovision songs, provided it with seed words and phrases, then selected the final lyrics from the generated output.

For example, we gave Euro-GPT-2 the seed word “flying”, and then chose the output “flying from this world that has gone apart”, but not “flying like a trumpet”. By automatically matching the lyrics to generated melodies, generating synth sounds based on koala noises, and applying some great, very human, production work, we got a good result: our song, Beautiful the World, was voted the winner of the contest.
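That generate-then-curate workflow can be sketched with the freely available GPT-2. This is an illustration of the general approach, not our actual Eurovision pipeline:

```python
# Generate several candidate lines from a seed word and let a human
# pick the keepers. Illustrative only, not the Uncanny Valley pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

candidates = generator("flying", max_length=12,
                       num_return_sequences=5, do_sample=True)
for i, c in enumerate(candidates):
    print(i, c["generated_text"])  # a human curator selects from these
```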

Co-creativity: humans and AI together

So can we really say an AI is an author? Is it the AI, the developers, the users or a combination?

A useful idea for thinking about this is “co-creativity”. This means using generative tools to spark new ideas, or to generate some components for our creative work.

Where an AI creates complete works, such as a complete article, the human becomes the curator or editor. We roll our very sophisticated dice until we get a result we’re happy with.




Read more:
Computing gives an artist new tools to be creative




Alexandra Louise Uitdenbogerd, Senior Lecturer in Computer Science, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Unknown's avatar

Author Avatars and Audiobooks


The link below is to an article that looks at Chinese advances in artificial intelligence, including the use of author avatars to read audiobooks.

For more visit:
https://lithub.com/today-in-ai-will-replace-us-all-author-avatars-can-now-read-their-books-to-you/