If you want to write a good sentence, you must learn to love the full stop. Love it above all other punctuation marks, and see it as the goal towards which the words in your sentence adamantly move.
A sentence, once begun, demands its own completion. As pilots say: take-off is optional, landing is compulsory. A sentence throws a thought into the air and leaves the reader vaguely dissatisfied until that thought has come in to land.
We read a sentence with the same part of our brains that processes music. Like music, a sentence arranges its elements into an order that should seem fresh and alive and yet shaped and controlled. A good sentence will often frustrate readers just a little, and put them faintly on edge, without ever suggesting that it has lost control of what is being said. As it runs its course, it will assuage some of the frustration and may create more. But by the end, things should have resolved themselves in a way that allows something to be said.
Only when the full stop arrives can the meaning of a sentence be fulfilled. The full stop offers the reader relief, allowing her to close the circle of meaning and take a mental breath.
Full stops also give writing its rhythm. They come in different places, cutting off short and long groups of words, varying the cadences – those drops in pitch at the sentence’s end which signal that the sentence, and the sentiment, are done.
A sentence wields more power with a strong stress at the end, where it sticks in the mind and sends a backwash over the words that went before. Weak sentences have weak predicates that come to the full stop with an unresounding phhtt. If you say that something is “an interesting factor to consider” or “should be borne in mind”, then the end of your sentence is just mumbly noise, because those things could be said about anything. A sentence with a strong end-stress says that its maker cared how its words fell on the reader’s ear. It feels fated to end thus, not just strung out to fill the word count.
A good trick, when drafting a piece, is to press enter after every sentence, as if you were writing a poem and each full stop marked a line break. This renders the varied (or unvaried) lengths of your sentences instantly visible. And it foregrounds the full stop, reminding you of its power as the destination and final rest of each sentence. Winston Churchill wrote his speeches like this, in single-sentence lines, to more easily adjust his Augustan rhythms. If you keep pressing enter after every full stop, the music of your writing is easier to hear because now it can also be seen.
We live in an age when the full stop is losing its power. The talky, casual prose of texting and online chat often manages without it. A single-line text needs no punctuation to show that it has ended. Instead of a full stop, we press send. Omitting the full stop gives off an extempore air, making replies seem insouciant and jokes unrehearsed.
But writing is not conversation, nor a speech-balloon text awaiting a response. A written sentence must give words a finished form that awaits no clarification. It must be its own small island of sense, from which the writer has been airlifted and on which no one else need live. We write alone, as an act of faith in words as a way of speaking to others who are elsewhere. So a sentence must be self-supporting. It must go out into the world without the author leaning over the reader to clarify its meaning. Hence the full stop.
A sentence is also a social animal; it feeds off its neighbours to form higher units of sense. It needs a full stop not just to be a sentence, but so the next one can begin. Studies have shown that young people tend to read a full stop in a text as curt or passive-aggressive. On social media, a full stop is often used between every word to sound angrily emphatic: End. Of. Story. But in writing, a full stop is not meant to be the final word in an argument like this. It is a satisfying little click that moves the dial along so the next sentence can pick up where it left off. Its end is also a beginning.
This is an article from Curious Kids, a series for children. The Conversation is asking kids to send in questions they’d like an expert to answer. All questions are welcome – serious, weird or wacky! You might also like the podcast Imagine This, a co-production between ABC KIDS listen and The Conversation, based on Curious Kids.
Why does English have so many different spelling rules? – Melania P, age 12, Strathfield.
English spelling has been evolving for over a thousand years, and the muddle we’re in today is the fallout of the many different events that have taken place over that time.
A bad start
It was a rocky beginning for English spelling. Quite simply, the 23-letter Roman alphabet has never been adequate — even Old English (spoken 450-1150) had 35 or so sounds, and our sound system is now even bigger.
More spelling problems came in when French scribes introduced new spelling conventions — their own of course, and not always helpful. Using “c” instead of “s” for words like city was messy because “c” also represented the “k” sound in words like cat.
And then printing arrived in the 15th century — and with it more mess. William Caxton (who set up the presses in the first place) liked Dutch spellings and so established the “gh” in ghost and ghastly. Some printers were European and they introduced favourite spellings too from their own languages. Not terribly helpful either!
Those pesky silent letters
One of the biggest problems for English spelling has always been changes in pronunciation. Printing helped to stabilise the spelling of words, but then some sounds changed their shape, and others disappeared altogether. Think of those silent letters in words such as walk, through, write, right, sword, know, gnat — these were once pronounced.
If only the printer Caxton had been born a couple of centuries later, or if these sound changes had occurred a couple of centuries earlier, our spelling would be much truer to pronunciation.
And now comes another little wrinkle in this story – there’s a bunch of silent letters that were never actually pronounced. They appeared because of linguistic busybodies who wanted to make the language look more respectable. This caused some serious mess.
Take how we spell the word rhyme. When we swiped the word from French, it had a much more sensible look — rime. But this was changed to rhyme to give it a more classy classical look (like rhythm) – an interesting idea, but hardly helpful for someone trying to spell the word!
The 16th and 17th centuries saw many extra letters introduced in this way. Think of the “b” added to debt to make a link to Latin debitum. Now, the “b” might be justified in the word debit that we stole directly from Latin, but it was the French who gave us dette.
The “b” consonant was a mistake, and now we accuse poor old debt of having lost it through sloppy pronunciation!
Let’s make spelling more sensible
And so it is from this haphazard evolution that we end up with the spelling system we have.
But in fact, over 80% of English words are spelled according to regular patterns, so wholesale change is not what we want. However, simple improvements could certainly be made without any major upheaval.
We could iron out inconsistencies such as humOUr versus humOrous. Introducing uniform -or spellings would be a painless reform (well, perhaps not painless, since many people are quite attached to the -our in words like humour).
We could also restore earlier spellings like rime and dette, and while we’re at it give psychology and philosophy a sensible look by spelling them sykology and filosofy.
So now, you can see the problem. No matter how silly spellings are, people get attached to them, and new spellings – even sensible ones – never seem to get a foot in the door.
Like Dr. Seuss’ Star-Belly Sneetches and Plain-Belly Sneetches, there are two types of creatures — haitchers, who put an H on the name of the alphabet’s eighth letter, and aitchers with “none upon thars”.
That H isn’t so big. It’s really so small
You might think such a thing wouldn’t matter at all.
But it does — the tiny H on “(h)aitch” divides the nation. The pronunciation has become something of a social password, a spoken shibboleth distinguishing in-groupers from out-groupers. Those with social clout set the standards for what’s “in” and what’s “out” — no H has the stamp of approval.
The best kind of people are people without!
Shibboleths die hard — the opprobrium attached to haitch probably derives from its long association with Irish Catholic education. There’s no real evidence for this, mind, as Sue Butler points out, but never let facts get in the way of a good shibboleth.
Aitchers’ reactions are often visceral. Someone once reported to us that an encounter with haitch is like an encounter with fire ants. We’ve no doubt that psycho-physiological testing would show that haitch can raise goosebumps. Linguistic pinpricks are established early in the acquisition process (“Don’t say ‘haitch’!”) and they arouse emotions like other childhood reprimands (including swearwords).
The ins and outs of H
The story of the weakly articulated H is murkily entwined with the story of its name. Long gone from Old English words like hring “ring”, hnecca “neck” and hlūd “loud”, it would have disappeared entirely if writing hadn’t thrown out a lifejacket.
It was once usual for speakers to drop aspirates at the beginning of words — in fact up until the 1700s, it was fashionable to do so. But a spelling-obsessed 18th century stigmatised the loss of many consonants, including H.
R-less pronunciations of arm and car might have snuck under the radar, but H-dropping fell well and truly from grace.
In 1873, Thomas Laurence Kington-Oliphant wrote about this “revolting habit” in his chapter “Good and Bad English”, advising:
Few things will the English youth find in after-life more profitable than the right use of the aforesaid letter.
And so, the English youth restored H to words like hat, and even at the start of many French words like humble, which had entered English H-less (the Romans pronounced their Hs, but the French dropped theirs). Spellers who weren’t quite sure whether or not to include H added a few extras along the way — umble pie (“offal pie”) turned into humble pie.
Haitch has the pedigree
There’s an ironic wrinkle to this story. The name aitch might be a sign of high education in some circles, but is itself an example of H-dropping. Deriving from medieval French hache or “axe” (hatchet and hashtag are relatives), it also arrived in English H-less (like humble and herb).
It’s a curious letter name being, as the Oxford English Dictionary describes, “so remote from any connection with the sound”. In fact there’s solid evidence supporting haitch as the better option. To understand why, we need to appreciate the primacy of initial letter sounds in words.
Learning and alliteration
English speakers find it easiest to attend to and manipulate the beginning sounds of words. For example, it’s easier for us (orally, that is – by sound, not spelling) to take away the “b” sound in beat (to make it eat) or to replace the “b” with a “p” to make it Pete than it is to take away the “t” sound in beat (to make it be) or to replace it with a “k” to make it beak.
It’s more natural for us to focus on initial sounds, especially for children.
We often make use of alliteration in names and tongue twisters. Dr. Seuss (think Aunt Annie’s Alligator or The Butter Battle Book), Walt Disney (such as Donald Duck; Mickey Mouse), and J.K. Rowling (Godric Gryffindor; Helga Hufflepuff; Rowena Ravenclaw; Salazar Slytherin) all capitalised on this phenomenon.
Tongue twisters highlight the special quality of alliteration for learning as well; who can forget Peter Piper and his pickled peppers, Silly Sally and her sheep, or Betty Botter and her butter?
The ABCs of the ABC
Many letters of the alphabet are phonetically iconic; their names represent the sound they make. In places where letter names are learned before letter sounds, such as Australia and the US, these names can help children learn letter sounds and, ultimately, read words. The letter sounds that are easiest to remember are those whose letter names begin with the sound they make, such as B, D, J, K, P, or T.
Research shows it’s more difficult to learn the sounds of letters whose names end with the sound, such as F, L, and M. Letters whose names have no correspondence at all to their sound are the most difficult. Logically, W should make the “d” sound (or change its name to wubble-u).
Haitch vs. aitch, round 2
Whatever your visceral reaction to pronouncing H one way or the other, haitch has definite benefits for letter sound learning.
So it’s not surprising it’s taking off in some parts of the English-speaking world. When the letter H is pronounced beginning with the letter sound it makes, children have an easier time learning its correspondence as they learn to read.
Dr. Seuss implicitly understood this. We suggest that a follow-up primer for young readers will one day include Horton hearing a Haitch.
Kate Burridge, Senior Fellow at the Freiburg Institute for Advanced Studies and Professor of Linguistics, Monash University and Catherine McBride, Marie Curie Fellow of the European Union Freiburg Institute for Advanced Studies, University of Freiburg, and Professor of Psychology, Chinese University of Hong Kong
English has achieved prime status by becoming the most widely spoken language in the world – if one disregards proficiency – ahead of Mandarin Chinese and Spanish. English is spoken in 101 countries, while Arabic is spoken in 60, French in 51, Chinese in 33, and Spanish in 31. From one small island, English has gone on to acquire lingua franca status in international business, worldwide diplomacy, and science.
But the success of English – or indeed any language – as a “universal” language comes with a hefty price, in terms of vulnerability. Problems arise when English is a second language for speakers, listeners, or both. No matter how proficient they are, their own understanding of English and their first (or “native”) language can change what they believe is being said.
When someone uses their second language, they seem to operate slightly differently than when they function in their native language. This phenomenon has been referred to as the “foreign language effect”. Research from our group has shown that native speakers of Chinese, for example, tended to take more risks in a gambling game when they received positive feedback in their native language (wins), compared to negative feedback (losses). But this trend disappeared – that is, they became less impulsive – when the same positive feedback was given to them in English. It was as if they were more rational in their second language.
While reduced impulsiveness when dealing in a second language can be seen as a positive thing, the picture is potentially much darker when it comes to human interactions. Research has found that, in a second language, speakers are also likely to be less emotional and to show less empathy and consideration for the emotional states of others.
For instance, we showed that Chinese-English bilinguals exposed to negative words in English unconsciously filtered out the mental impact of these words. And Polish-English bilinguals who are normally affected by sad statements in their native Polish appeared to be much less disturbed by the same statements in English.
In another recent study by our group, we found that second language use can even affect one’s inclination to believe the truth, especially when conversations touch on culture and intimate beliefs.
Since second language speakers of English are a huge majority in the world today, native English speakers will frequently interact with non-native speakers in English, more so than any other language. And in an exchange between a native and a foreign speaker, the research suggests that the foreign speaker is more likely to be emotionally detached and can even show different moral judgements.
And there is more. While English provides a phenomenal opportunity for global communication, its prominence means that native speakers of English have low awareness of language diversity. This is a problem because there is good evidence that differences between languages go hand-in-hand with differences in conceptualisation of the world and even perception of it.
In 2009, we were able to show that native speakers of Greek, who have two words for dark blue and light blue in their language, see the contrast between light and dark blue as more salient than native speakers of English do. This effect was not simply due to the different environments in which people are brought up, because the native speakers of English showed similar sensitivity to blue contrasts and to green contrasts, the latter being very common in the UK.
On the one hand, operating in a second language is not the same as operating in a native language. On the other, language diversity has a big impact on perception and conception. This is bound to have implications for how information is accessed, how it is interpreted, and how it is used by second language speakers when they interact with others.
We can come to the conclusion that a balanced exchange of ideas, as well as consideration for others’ emotional states and beliefs, requires a proficient knowledge of each other’s native language. In other words, we need truly bilingual exchanges, in which all involved know the language of the other. So, it is just as important for English native speakers to be able to converse with others in their languages.
The US and the UK could do much more to engage in rectifying the world’s language balance, and foster mass learning of foreign languages. Unfortunately, the best way to achieve near-native foreign language proficiency is through immersion, by visiting other countries and interacting with local speakers of the language. Doing so might also have the effect of bridging some current political divides.
When learning a new language, what’s the first thing most of us do? If you are like me, you flick through the dictionary to find all the naughty words. And a quick glance on Amazon will reveal a veritable library dedicated to the rigorous pursuit of insulting around the world. We seem to be just a little obsessed – and why the hell not?
But we actually don’t need to reach for the nearest Collins dictionary to pick up some polyglot profanities. Many English swear words have come from different languages over the centuries. For example, the classics – “fuck”, “shit” and “cunt” – are words the language shares with older Germanic and Scandinavian languages. Fuck is likely to be cognate with the Dutch “fokken”, which in the 15th century meant “to mock”, and may also be related to Middle High German “ficken”, meaning “to rub”. Both words began to be related to sexual intercourse in the 16th century.
The earliest mention we have in English for fuck (in the sense of copulation) is in a Latin-English sermon from 1500. That’s right, a sermon (on page 91). What is particularly fascinating here is the encryption – with each letter representing the one before it in the alphabet, suggesting some level of aversion to the word:
Non sunt in cœli, quia gxddbov xxkxzt pg ifmk.
Decrypting the last four rather incriminating words gives us “fvccant vvjvys of heli” – which translates as: “They [monks] are not in heaven because they fuck the wives of Ely.” To decipher the code, we have to bear in mind the differences in both the alphabet and spelling between then and now: the letter “w” did not exist; instead, one could use “vv” to represent this sound. You could also use “j” in the place of “i”, and “v” in the place of “u”.
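The sermon’s letter-shift can be sketched in a few lines of Python. This is a minimal sketch under one assumption: the scribe was working with the 23-letter medieval alphabet, in which i/j and u/v each counted as a single letter and w did not exist – so the sketch prints “i” and “v” where a modern rendering might write “j” and “u”.

```python
# Decrypt the sermon's cipher: each written letter stands for the letter
# immediately before it in the 23-letter medieval alphabet (no w; i/j and
# u/v each count as one letter, written here as "i" and "v").
ALPHABET = "abcdefghiklmnopqrstvxyz"

def decrypt(ciphertext: str) -> str:
    out = []
    for ch in ciphertext:
        if ch in ALPHABET:
            # step back one letter (index -1 would wrap "a" around to "z")
            out.append(ALPHABET[ALPHABET.index(ch) - 1])
        else:
            out.append(ch)  # spaces and punctuation pass through unchanged
    return "".join(out)

print(decrypt("gxddbov xxkxzt pg ifmk"))
# fvccant vvivys of heli
```

Reading “vv” as w and “v” as u then gives “fuccant wiues of heli” – the monks’ secret laid bare.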
Fuck also appears in Middle English names and place names, often meaning “to strike”. Hence Henry Fuckebegger (on the record in 1286) most likely beat the poor, rather than shagging them. Shit and cunt, which both have cognates in earlier Germanic and Scandinavian languages, have also been used in place names from the Middle Ages. Skidbrooke in Lincolnshire, for example, appears in the Domesday Book as “Schitebroc” – that is, “Shit-brook”. In fact, if you ever walk down a Grape or Grove Lane, chances are it used to be one of the many “Gropecuntlanes”, denoting a medieval red-light district.
Mind your language
So we know that a lot of our favourite swears are loans from the Germanic and Scandinavian language families. Well, yes and no. The words may come from these origins, but where their use comes from is where it can get interesting. I’m about to make the case that one of the most quintessentially British swear words is, in fact, kind of French. The word I’m talking about is “bloody”, as in, every time Harry Potter’s Ron Weasley exclaims “Bloody hell, Harry!”
The origins of this one seem consistent with the rest of the swear words – it’s a Germanic word that appears in Old and Middle English as an adjective meaning “bloodthirsty”, “cruel” and “murderous”, alongside the more obvious “bloodstained” sense. But nowhere is it used as a swear word. One could make the case that the swear use comes from contact with Anglo-Norman, the variety of French that came with the Normans in 1066. This is because it is in Anglo-Norman that we find the French word “sanglant” (meaning “bloody”) being used as a swear word.
Sanglant appears twice in a 1396 version of a conversation manual called the Manières de langage, which was essentially the textbook for learning French at the turn of the 15th century. It appears in insults such as “senglant merdous garcion” (“bloody filthy rogue”), and “senglent filz de putaigne” (“bloody son of a whore”). Indeed, sanglant as a swear word seems to have enjoyed a particularly Anglo-Norman flavour.
Pour épater les Anglais
In the continental French farce Pathelin (1457), the eponymous character attempts to avoid repaying a debt by babbling in various French dialects in an attempt to appear mad. He utters the words “sanglant paillart” (“bloody bastard”) while speaking in the Norman dialect. Moreover, in the Chronique de Charles VII, the French call the “Angloiz et Normans” (English and Normans) by the insult “senglans puans mezeaulx porriz” (“bloody putrid rotting lepers”). Here, the French are ironically insulting the English in one of their own tongues, which was at that time a dialect of French.
It is only after the appearance of “sanglant” that we then get “bloody” as a swear word – which means that it is very likely that this seemingly Germanic word has assumed a Francophone character.
Research into the English language reveals that the UK shares more with Europe than many realise. The language contact situation is particularly diverse for Britain, with heavy influence from Germanic and Scandinavian languages. As the evolution of the word “bloody” suggests, Anglo-Norman also played a fundamental role in how English speakers use words.
The key message about Anglo-Norman is that this variety of French was viewed in the Middle Ages as a British language. English has thus evolved against a background of significant linguistic diversity, one that has formed part of the country’s identity for centuries. And the traces of that are sometimes hiding in plain sight.
My grammar checker and I are on a break. Due to irreconcilable differences, we are no longer on speaking terms.
It all started when it became dead set on putting commas before every single “which”. Despite all the angry underlining, “this is a habit which seems prevalent” does not need a comma before “which”. Take it from me: I am a linguist.
This is just one of many challenging cases where grammar is slippery and hard to pin down. To make matters worse, it appears that the grammar we use while speaking is slightly different to the grammar we use while writing. Speech and writing seem similar enough – so much so that for centuries, people (linguists included) were blind to the differences.
There’s issues to consider
Let me give you an example. Take sentences like “there is X” and “there are X”. You may have been taught that “there is” occurs with singular entities because “is” is the present singular form of “to be” – as in “there is milk in the fridge” or “there is a storm coming”.
Conversely, “there are” is used with plural entities: “there are twelve months in a year” or “there are lots of idiots on the road”.
What about “there’s X”? Well, “there’s” is the abbreviated version of “there is”. That makes it the verb form of choice when followed by singular entities.
Nice theory. It works for standard, written language, formal academic writing, and legal documents. But in speech, things are very different.
It turns out that spoken English favours “there is” and “there’s” over “there are”, regardless of what follows the verb: “there is five bucks on the counter” or “there’s five cars all fighting for that Number 10 spot”.
A question of planning
This is not because English is going to hell in a hand basket, nor because young people can’t speak “proper” English anymore.
Linguists Jen Hay and Daniel Schreier scrutinised old recordings of New Zealand English to see what happens in cases where you might expect “there” followed by a plural form (“there are”, or “there were” for past events) but instead find “there” followed by a singular (“there is”, “there’s”, “there was”).
They found that the contracted form “there’s” is a go-to form which seems prevalent with both singular and plural entities. But there’s more. The greater the distance between “be” and the entity following it, the more likely speakers are to ignore the plural rule.
“There is great vast fields of corn” is likely to be produced because the plural entity “fields” comes so far down the expression that speakers do not plan for it in advance with the plural form “are”.
Even more surprisingly, the use of the singular may not always necessarily have much to do with what follows “there is/are”. It can simply be about the timing of the event described. With past events, the singular form is even more acceptable. “There was dogs in the yard” seems to raise fewer eyebrows than “there is dogs in the yard”.
Nothing new here
The disregard for the plural form is not a new thing (darn, we can’t even blame it on texting). According to an article published last year by Norwegian linguist Dania Bonneess, the change towards the singular form “there is” has been with us in New Zealand English ever since the 19th century. Its history can be traced at least as far back as the second generation of the Ulster family of Irish emigrants.
Editors, language commissions and prescriptivists aside, everyday New Zealand speech has a life of its own, governed not so much by style guides and grammar rules, but by living and breathing individuals.
It should be no surprise that spoken language is different to written language. The most spoken-like form of speech (conversation) is very unlike the most written-like version of language (academic or other formal or technical writing) for good reason.
Speech and writing
In conversation, there is no time for planning. Expressions come out more or less off the cuff (depending on the individual), with no chance to edit and with an immediate need for processing. We hear a chunk of language and, while still parsing it, we are already putting together a response – in real time.
This speed has consequences for the kind of language we use and hear. When speaking, we rely on recycled expressions, formulae we use over and over again, and less complex structures.
For example, we are happy enough writing and reading a sentence like:
That the human brain can use language is amazing.
But in speech, we prefer:
It is amazing that the human brain can use language.
Both are grammatical, yet one is simpler and quicker for the brain to decode.
And sometimes, in speech we use grammatical crutches to help the brain get the message quicker. A phrase like “the boxes I put the files into” is readily encountered in writing, but in speech we often say and hear “the boxes I put the files into them”.
We call these seemingly unnecessary pronouns (“them” in the previous example) “shadow pronouns”. Even linguistics professors use these latter expressions no matter how much they might deny it.
Speech: a faster ride
There is another interesting difference between speech and writing: speech is not held up on the same rigid prescriptive pedestal as writing, nor is it regulated in the way that writing is scrutinised by editors, critics, examiners and teachers.
This allows room in speech for more creativity and more language play, and with it, faster change. Speech is known to evolve faster than writing, even though writing will eventually catch up (at least for some changes).
I would guess that by now, most editors are happy enough to let the old “whom” form rest and “who” take over (“who did you give that book to?”).