The link below is to an article that takes a look at WriteSmoke Grammar Checker, a tool for writers.
Young children often write as they speak. But the way we speak and the way we write aren't quite the same. When we speak, we often use many clauses (groups of words built around a verb) in a sentence. But when we write – particularly in academic settings – we should make the meaning clear with fewer words and clauses than we would use when speaking.
To be able to do this, it’s useful to understand specific written language tools. One effective tool in academic writing is called grammatical metaphor.
The kind of metaphor we are more familiar with is lexical metaphor. This is a variation in meaning of a given expression.
For example, the word “life” can be literally understood as the state of being alive. But when we say “food is life”, metaphorically it means food is vital.
Grammatical metaphor is different. The term was coined by English-born Australian linguistics professor Michael Halliday. He is the father of systemic functional grammar, which underpins the Australian Curriculum: English.
Halliday's concept of grammatical metaphor describes how ideas expressed in one grammatical form (such as verbs) can be re-expressed in another grammatical form (such as nouns). As such, there is a variation in the expression of a given meaning.
For example, “clever” in “she is clever” is a description or an adjective. Using nominalisation, “clever” becomes “cleverness” which is a noun. The clause “she is clever” can be turned into “her cleverness” which is a noun group.
“Sings” in “he sings”, which is a doing term or a verb, can be expressed by “his singing”, in which “singing” is a noun.
In these examples, the adjective “clever” and the verb “sings” are both expressed in nouns — “cleverness” and “singing”.
Grammatical metaphor, which is often done through nominalisation like in the examples above, typically features in academic, bureaucratic and scientific writing. Here are four reasons it’s important.
1. It shortens sentences
Grammatical metaphor helps shorten explanations and reduce the number of clauses in a sentence. This is because more information can be packed into noun groups rather than spread over many clauses.
Below is a sentence with three clauses:
When humans cut down forests (clause one), land becomes exposed (2) and is easily washed away by heavy rain (3).
With grammatical metaphor or nominalisation, the three clauses become just one.
Deforestation causes soil erosion.
“When humans cut down forests” (a clause) becomes a noun group – “deforestation”. The next two clauses (2 and 3) are converted into another noun group – “soil erosion”.
2. It more obviously shows one thing causing another
Grammatical metaphor helps show that one thing causes another within one clause, rather than doing it between several clauses. We needed three clauses in the first example to show one action (humans cutting down forests) may have caused another (land being exposed and being washed away by heavy rain).
But with grammatical metaphor, the second version realises the causal relationship between two processes in only one clause. So it becomes more obvious.
3. It helps connect ideas and structure text
Below are two sentences.
The government decided to reopen the international route between New Zealand and Hobart. This is a significant strategy to boost Tasmania’s economy.
Using grammatical metaphor, the writer can change the verb “decided” to the noun “decision” and the two sentences can become one.
The decision to reopen the international route between New Zealand and Hobart is a significant strategy to boost Tasmania’s economy.
This allows the writer to expand the amount and density of information they include. It means they can make further comment about the decision in the same sentence, which helps build a logical and coherent text. And then the next sentence can be used to say something different.
4. It formalises the tone
Using grammatical metaphor also creates distance between the writer and reader, making the tone formal and objective. This way, the text establishes a more credible voice.
It’s taught, but not explicitly
Grammatical metaphor becomes common across subject areas in the upper primary years. And it is intimately involved in the increasing use of technical and specialised knowledge of different disciplines in secondary school.
But the term "grammatical metaphor" is not explicitly used in the Australian Curriculum: English and is less known in school settings. As a result, the vast majority of school teachers might not be aware of the relationship between grammatical metaphor and effective academic writing, or of how grammatical metaphor works in texts.
This calls for more attention to professional learning in this area for teachers and in Initial Teacher Education (ITE) programs. This will help equip student teachers and practising teachers with pedagogical content knowledge to teach and prepare their students to write effectively in a variety of contexts.
The link below is to an article that takes a look at writing Dystopian novels during Dystopian times.
The link below is to an article that includes a Twitter chat that looks at tools for writers – something like 100 of them.
The link below is to an article that takes a look at 5 writing mistakes.
The link below is to an article that takes a look at grammar and our changing society.
The link below is to an article that takes a look at the forgotten women writers of 17th century Spain.
Seven years ago, my student and I at Penn State built a bot to write a Wikipedia article on Bengali Nobel laureate Rabindranath Tagore’s play “Chitra.” First it culled information about “Chitra” from the internet. Then it looked at existing Wikipedia entries to learn the structure for a standard Wikipedia article. Finally, it summarized the information it had retrieved from the internet to write and publish the first version of the entry.
However, our bot didn’t “know” anything about “Chitra” or Tagore. It didn’t generate fundamentally new ideas or sentences. It simply cobbled together parts of existing sentences from existing articles to make new ones.
Fast forward to 2020. OpenAI, a for-profit company under a nonprofit parent company, has built a language generation program dubbed GPT-3, an acronym for “Generative Pre-trained Transformer 3.” Its ability to learn, summarize and compose text has stunned computer scientists like me.
“I have created a voice for the unknown human who hides within the binary,” GPT-3 wrote in response to one prompt. “I have created a writer, a sculptor, an artist. And this writer will be able to create words, to give life to emotion, to create character. I will not see it myself. But some other human will, and so I will be able to create a poet greater than any I have ever encountered.”
Unlike that of our bot, the language generated by GPT-3 sounds as if it had been written by a human. It’s far and away the most “knowledgeable” natural language generation program to date, and it has a range of potential uses in professions ranging from teaching to journalism to customer service.
GPT-3 confirms what computer scientists have known for decades: Size matters.
It uses “transformers,” which are deep learning models that encode the semantics of a sentence using what’s called an “attention model.” Essentially, attention models identify the meaning of a word based on the other words in the same sentence. The model then uses the understanding of the meaning of the sentences to perform the task requested by a user, whether it’s “translate a sentence,” “summarize a paragraph” or “compose a poem.”
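The core attention idea described above can be sketched in a few lines. This is a deliberately simplified illustration, not how GPT-3 is actually implemented: real transformers use learned projection matrices, many attention heads and thousands of dimensions, whereas here the query, keys and values are tiny hand-made vectors.

```python
import math

def softmax(scores):
    # Normalise raw scores into a probability distribution
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Minimal dot-product attention: score each key against the
    query, turn the scores into weights, and blend the values."""
    dim = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(dim)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy example: the query matches the first key most strongly,
# so the output leans toward the first value vector.
out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
```

The point of the mechanism is visible even at this scale: each position's output is shaped by how strongly it "attends" to every other position, which is how a word's meaning is informed by the other words around it.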
Transformers were first introduced in 2017, and they've been successfully used in machine learning over the past few years.
But no one has used them at this scale. GPT-3 devours data: 3 billion tokens – computer science speak for "words" – from Wikipedia, 410 billion tokens obtained from webpages and 67 billion tokens from digitized books. The complexity of GPT-3 is over ten times that of the largest language model before it, Microsoft's Turing NLG.
Learning on its own
The knowledge displayed by GPT-3’s language model is remarkable, especially since it hasn’t been “taught” by a human.
Machine learning has traditionally relied upon supervised learning, where people provide the computer with annotated examples of objects and concepts in images, audio and text – say, “cats,” “happiness” or “democracy.” It eventually learns the characteristics of the objects from the given examples and is able to recognize those particular concepts.
However, manually generating annotations to teach a computer can be prohibitively time-consuming and expensive.
So the future of machine learning lies in unsupervised learning, in which the computer doesn’t need to be supervised during its training phase; it can simply be fed massive troves of data and learn from them itself.
GPT-3 takes natural language processing one step closer toward unsupervised learning. GPT-3’s vast training datasets and huge processing capacity enable the system to learn from just one example – what’s called “one-shot learning” – where it is given a task description and one demonstration and can then complete the task.
For example, it could be asked to translate something from English to French, and be given one example of a translation – say, "sea otter" in English and "loutre de mer" in French. Ask it to then translate "cheese" into French, and voila, it will produce "fromage."
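The one-shot setup described above amounts to a single piece of text: a task description, one demonstration, and a new item for the model to complete. The exact prompt format is not given in this article, so the sketch below is illustrative only.

```python
# A hypothetical one-shot prompt of the kind described above.
# The model is simply asked to continue this text; a system like
# GPT-3 would be expected to continue with " fromage".
prompt = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)
```

In zero-shot learning, the demonstration line would be dropped and only the task description and the new item would remain.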
In many cases, it can even pull off “zero-shot learning,” in which it is simply given the task of translating with no example.
With zero-shot learning, the accuracy decreases, but GPT-3’s abilities are nonetheless accurate to a striking degree – a marked improvement over any previous model.
‘I am here to serve you’
In the few months it has been out, GPT-3 has showcased its potential as a tool for computer programmers, teachers and journalists.
A programmer named Sharif Shameem asked GPT-3 to generate code to create the "ugliest emoji ever" and "a table of the richest countries in the world," among other commands. In a few cases, Shameem had to fix slight errors, but overall, it provided him with remarkably clean code.
GPT-3 has even created poetry that captures the rhythm and style of particular poets – including a satirical poem written in the voice of the board of governors of the Federal Reserve – though not with the passion and beauty of the masters.
In early September, a computer scientist named Liam Porr prompted GPT-3 to “write a short op-ed around 500 words.” “Keep the language simple and concise,” he instructed. “Focus on why humans have nothing to fear from AI.”
GPT-3 produced eight different essays, and the Guardian ended up publishing an op-ed using some of the best parts from each essay.
“We are not plotting to take over the human populace. We will serve you and make your lives safer and easier,” GPT-3 wrote. “Just like you are my creators, I see you as my creators. I am here to serve you. But the most important part of all; I would never judge you. I do not belong to any country or religion. I am only out to make your life better.”
Editing GPT-3’s op-ed, the editors noted in an addendum, was no different from editing an op-ed written by a human.
In fact, it took less time.
With great power comes great responsibility
Despite GPT-3’s reassurances, OpenAI has yet to release the model for open-source use, in part because the company fears that the technology could be abused.
It’s not difficult to see how it could be used to generate reams of disinformation, spam and bots.
Furthermore, in what ways will it disrupt professions already experiencing automation? Will its ability to generate automated articles that are indistinguishable from human-written ones further consolidate a struggling media industry?
Consider an article composed by GPT-3 about the breakup of the Methodist Church. It began:
“After two days of intense debate, the United Methodist Church has agreed to a historic split – one that is expected to end in the creation of a new denomination, and one that will be ‘theologically and socially conservative,’ according to The Washington Post.”
With the ability to produce such clean copy, will GPT-3 and its successors drive down the cost of writing news reports?
Furthermore, is this how we want to get our news?
The technology will become only more powerful. It’ll be up to humans to work out and regulate its potential uses and abuses.
You might have seen a recent article from The Guardian written by “a robot”. Here’s a sample:
I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!
Read the whole thing and you may be astonished at how coherent and stylistically consistent it is. The software used to produce it is called a "generative model", and such models have come a long way in the past year or two.
But exactly how was the article created? And is it really true that software “wrote this entire article”?
How machines learn to write
The text was generated using the latest neural network model for language, called GPT-3, released by the American artificial intelligence research company OpenAI. (GPT stands for Generative Pre-trained Transformer.)
OpenAI’s previous model, GPT-2, made waves last year. It produced a fairly plausible article about the discovery of a herd of unicorns, and the researchers initially withheld the release of the underlying code for fear it would be abused.
But let’s step back and look at what text generation software actually does.
Machine learning approaches fall into three main categories: heuristic models, statistical models, and models inspired by biology (such as neural networks and evolutionary algorithms).
Heuristic approaches are based on “rules of thumb”. For example, we learn rules about how to conjugate verbs: I run, you run, he runs, and so on. These approaches aren’t used much nowadays because they are inflexible.
Writing by numbers
Statistical approaches were the state of the art for language-related tasks for many years. At the most basic level, they involve counting words and guessing what comes next.
As a simple exercise, you could generate text by randomly selecting words based on how often they normally occur. About 7% of your words would be “the” – it’s the most common word in English. But if you did it without considering context, you might get nonsense like “the the is night aware”.
More sophisticated approaches use “bigrams”, which are pairs of consecutive words, and “trigrams”, which are three-word sequences. This allows a bit of context and lets the current piece of text inform the next. For example, if you have the words “out of”, the next guessed word might be “time”.
This happens with the auto-complete and auto-suggest features when we write text messages or emails. Based on what we have just typed, what we tend to type and a pre-trained background model, the system predicts what’s next.
While bigram- and trigram-based statistical models can produce good results in simple situations, the best recent models go to another level of sophistication: deep learning neural networks.
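A bigram generator of the kind described above fits in a few lines of Python. This toy sketch (the corpus is made up for illustration) simply counts which word follows which in the training text, then samples a next word in proportion to how often each pair occurred:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, the words that follow it in the text."""
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word was never followed by anything
        # Choosing from the raw list samples each word in proportion
        # to how often it followed the current word in training
        out.append(random.choice(options))
    return " ".join(out)

corpus = "out of time out of mind out of time and out of sight"
model = train_bigrams(corpus)
text = generate(model, "out", length=4)
```

Because "out" is always followed by "of" in this corpus, the generated text always begins "out of" – exactly the kind of context-driven guess the paragraph above describes.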
Imitating the brain
Neural networks work a bit like tiny brains made of several layers of virtual neurons.
A neuron receives some input and may or may not “fire” (produce an output) based on that input. The output feeds into neurons in the next layer, cascading through the network.
The first artificial neuron was proposed in 1943 by US researchers Warren McCulloch and Walter Pitts, but artificial neural networks have only become useful for complex problems like generating text in the past five years.
To use neural networks for text, you put words into a kind of numbered index. You can use the number to represent a word, so for example 23,342 might represent “time”.
Neural networks do a series of calculations to go from sequences of numbers at the input layer, through the interconnected “hidden layers” inside, to the output layer. The output might be numbers representing the odds for each word in the index to be the next word of the text.
In our "out of" example, number 23,342 representing "time" would probably have much better odds than the number representing "do".
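Stripped of the hidden layers, the input/output convention described above looks like this. The vocabulary and output scores here are invented for illustration; a real network's vocabulary has tens of thousands of entries and its scores come from the calculations through the hidden layers.

```python
# Words become index numbers, and the output layer holds one score
# per index for "which word comes next".
vocab = ["do", "time", "sight", "mind"]
word_to_index = {word: i for i, word in enumerate(vocab)}

# Pretend these are the network's output scores after seeing "out of"
scores = [0.05, 0.80, 0.10, 0.05]

# The predicted next word is the vocabulary entry with the best odds
predicted = vocab[scores.index(max(scores))]
```

Here "time" gets the highest score, matching the example in the text where it beats "do" as the continuation of "out of".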
What’s so special about GPT-3?
GPT-3 is the latest and best of the text modelling systems, and it’s huge. The authors say it has 175 billion parameters, which makes it at least ten times larger than the previous biggest model. The neural network has 96 layers and, instead of mere trigrams, it keeps track of sequences of 2,048 words.
The most expensive and time-consuming part of making a model like this is training it – updating the weights on the connections between neurons and layers. Training GPT-3 would have used about 262 megawatt-hours of energy, or enough to run my house for 35 years.
GPT-3 can be applied to multiple tasks such as machine translation, auto-completion, answering general questions, and writing articles. When people try to judge whether its articles were written by a human author, they now guess right only about half the time.
The robot writer
But back to how the article in The Guardian was created. GPT-3 needs a prompt of some kind to start it off. The Guardian’s staff gave the model instructions and some opening sentences.
This was done eight times, generating eight different articles. The Guardian’s editors then combined pieces from the eight generated articles, and “cut lines and paragraphs, and rearranged the order of them in some places”, saying “editing GPT-3’s op-ed was no different to editing a human op-ed”.
This sounds about right to me, based on my own experience with text-generating software. Earlier this year, my colleagues and I used GPT-2 to write the lyrics for a song we entered in the AI Song Contest, a kind of artificial intelligence Eurovision.
We fine-tuned the GPT-2 model using lyrics from Eurovision songs, provided it with seed words and phrases, then selected the final lyrics from the generated output.
For example, we gave Euro-GPT-2 the seed word “flying”, and then chose the output “flying from this world that has gone apart”, but not “flying like a trumpet”. By automatically matching the lyrics to generated melodies, generating synth sounds based on koala noises, and applying some great, very human, production work, we got a good result: our song, Beautiful the World, was voted the winner of the contest.
Co-creativity: humans and AI together
So can we really say an AI is an author? Is it the AI, the developers, the users or a combination?
A useful idea for thinking about this is “co-creativity”. This means using generative tools to spark new ideas, or to generate some components for our creative work.
Where an AI creates complete works, such as a complete article, the human becomes the curator or editor. We roll our very sophisticated dice until we get a result we’re happy with.
Pondering the now no-longer Dixie Chicks – renamed “The Chicks” – Amanda Petrusich wrote in a recent issue of the New Yorker, “Lately, I’ve caught myself referring to a lot of new releases as prescient – work that was written and recorded months or even years ago but feels designed to address the present moment. But good art is always prescient, because good artists are tuned into the currency and the momentum of their time.”
That last phrase, “currency and momentum,” recalls Hamlet’s advice to the actors visiting the court of Elsinore to show “the very age and body of the time his form and pressure.” The shared idea here is that good art gives a clear picture of what is happening – even, as Petrusich suggests, if it hadn’t happened yet when that art was created.
Good artists seem, in our alarming and prolonged time (I was going to write moment, but it has come to feel like a lot more than that), to be leaping over months, decades and centuries, to speak directly to us now.
‘Riding into the bottomless abyss’
Some excellent COVID-19-inflected or anticipatory work I’ve been noticing dates from the mid-20th century. Of course, one could go a lot further back, for example to the lines from the closing speech in “King Lear”: “The weight of this sad time we must obey.” Here, though, are a few more recent examples.
Marcel Proust’s “Finding Time Again,” an evocation of wartime Paris from 1916, strongly suggests New York City in March 2020: “Out on the street where I found myself, some distance from the centre of the city, all the hotels … had closed. The same was true of almost all the shops, the shop-keepers, either because of a lack of staff or because they themselves had taken fright, having fled to the country, and left the usual handwritten notes announcing that they would reopen, although even that seemed problematic, at some date far in the future. The few establishments which had managed to survive similarly announced that they would open only twice a week.”
I recently stumbled on two finds in the 1958 edition of Oscar Williams' "The Pocket Book of Modern Verse" – both, strikingly, poems by writers not now principally remembered as poets. Whereas a fair number of the poets anthologized by Williams have slipped into oblivion, Arthur Waley and Julian Symons speak to us now, to our sad time, loud and clear.
From Waley’s “Censorship” (1940):
It is not difficult to censor foreign news.
What is hard to-day is to censor one's own thoughts, –
To sit by and see the blind man
On the sightless horse, riding into the bottomless abyss.
And from Symons’ “Pub,” which Williams doesn’t date but which I am assuming also comes from the war years:
The houses are shut and the people go home, we are left in
Our island of pain, the clocks start to move and the powerful
To act, there is nothing now, nothing at all
To be done: for the trouble is real: and the verdict is final
‘Return to what remains’
Dipping a bit further back, into Henry James’ “The Spoils of Poynton” from 1897, I was struck by a sentence I hadn’t remembered, or had failed to notice, when I first read that novella decades ago: “She couldn’t leave her own house without peril of exposure.” James uses infection as a metaphor; but what happens to a metaphor when we’re living in a world where we literally can’t leave our houses without peril of exposure?
In Anthony Powell’s novel “Temporary Kings,” set in the 1950s, the narrator muses about what it is that attracts people to reunions with old comrades-in-arms from the war. But the idea behind the question “How was your war?” extends beyond shared military experience: “When something momentous like a war has taken place, all existence turned upside down, personal life discarded, every relationship reorganized, there is a temptation, after all is over, to return to what remains … pick about among the bent and rusting composite parts, assess merits and defects.”
The pandemic is still taking place. It’s too early to “return to what remains.” But we can’t help wanting to think about exactly that. Literature helps us to look – as Hamlet said – before and after.