Science fiction has struggled to achieve the same credibility as highbrow literature. In 2019, the celebrated author Ian McEwan dismissed science fiction as the stuff of “anti-gravity boots” rather than “human dilemmas”. According to McEwan, his own book about intelligent robots, Machines Like Me, provided the latter by examining the ethics of artificial life – as if this were not a staple of science fiction from Isaac Asimov’s robot stories of the 1940s and 1950s to TV series such as Humans (2015-2018).
Psychology has often supported this dismissal of the genre. The most recent psychological accusation against science fiction is the “great fantasy migration hypothesis”. This supposes that the real world of unemployment and debt is too disappointing for a generation of entitled narcissists. They consequently migrate to a land of make-believe where they can live out their grandiose fantasies.
The authors of a 2015 study stress that, while they have found evidence to confirm this hypothesis, such psychological profiling of “geeks” is not intended to be stigmatising. Fantasy migration is “adaptive” – dressing up as Princess Leia or Darth Vader makes science fiction fans happy and keeps them out of trouble.
But, while psychology may not exactly diagnose fans as mentally ill, the insinuation remains – science fiction evades, rather than confronts, disappointment with the real world.
The case of ‘Kirk Allen’
The psychological accusation that science fiction evades real life goes back to the 1950s. In 1954, the psychoanalyst Robert Lindner published his case study of the pseudonymous “Kirk Allen”, a patient who maintained an extraordinary fantasy life modelled on pulp science fiction.
Allen believed he was at once a scientist on Earth – and simultaneously an interplanetary emperor. He believed he could enter his other life by mental time travel into the far-off future, where his destiny awaited in scenes of power, respect, and conquest – both military and sexual.
Lindner explained Allen’s condition as an escape from overwhelming mental anguish rooted in childhood trauma. But Lindner, himself a science fiction fan, remarked also on the seductive attraction of Allen’s second life, which began to offer, as he put it, a “fatal fascination”. The message was clear. Allen’s psychosis was extreme, but it showed in stark clarity what drew readers to science fiction: an imagined life of power and status that compensated for the readers’ own deficiencies and disappointments.
Lindner’s words mattered. He was an influential cultural commentator, who wrote for US magazines such as Time and Harper’s. The story of Kirk Allen was published in the latter, and in Lindner’s book of case studies, The Fifty-Minute Hour, which became a successful popular paperback.
Psychology had very publicly diagnosed science fiction as a literature of evasion – an “escape hatch” for the mentally troubled. Science fiction answered back, decisively changing the genre in the following decades.
To take one example: Norman Spinrad’s The Iron Dream (1972) purports to reprint a prize-winning 1954 science fiction novel. The novel is apparently written, in an alternate history timeline, by Adolf Hitler, who gave up politics, emigrated to the US, and became a successful science fiction author and illustrator. A fictional critical afterword explains that Hitler’s novel, with its “fetishistic military displays and orgiastic bouts of unreal violence”, is just a more extreme version of the “pathological literature” that dominates the genre.
In her review of The Iron Dream, the now-celebrated science fiction author Ursula Le Guin – daughter of the distinguished anthropologist Alfred Kroeber – wrote that the “essential gesture of SF” is “distancing, the pulling back from ‘reality’ in order to see it better”, including “our desires to lead, or to be led”, and “our righteous wars”. Le Guin wanted science fiction to make strange the North American society of her time, showing afresh its peculiar psychology, culture, and politics.
In 1972, the US was still fighting the Vietnam War. In the same year, Le Guin offered her own “distanced” version of social reality in The Word for World is Forest, which depicts the attempted colonisation of an inhabited alien planet by a macho, militaristic Earth society intent on conquering and violating the natural world – a semi-allegory for what the USA was doing at the time in south-east Asia.
As well as repudiating the worst parts of the genre, such responses implied a positive model for science fiction. Science fiction wasn’t about evading reality – it was a literary anthropology which made our own society into a foreign culture which we could stand back from, reflect on, and change.
Rather than ask us to pull on our anti-gravity boots, open the escape hatch and leap into fantasy, science fiction typically aspires to be a literature that faces up to social reality. It owes this ambition, in part, to psychology’s repeated accusation that the genre markets escapism to the marginalised and disaffected.
The future isn’t what it used to be, at least according to the Canadian science fiction novelist William Gibson. In an interview with the BBC, Gibson said people seemed to be losing interest in the future. “All through the 20th century we constantly saw the 21st century invoked,” he said. “How often do you hear anyone invoke the 22nd century? Even saying it is unfamiliar to us. We’ve come to not have a future”.
Gibson thinks that during his lifetime the future “has been a cult, if not a religion”. His whole generation was seized by “postalgia”. This is a tendency to dwell on romantic, idealised visions of the future. Rather than imagining the past as an ideal time (as nostalgics do), postalgics think the future will be perfect. For example, a study of young consultants found many suffered from postalgia. They imagined their life would be perfect once they were promoted to partner.
“The Future, capital-F, be it crystalline city on the hill or radioactive post-nuclear wasteland, is gone”, Gibson said in 2012. “Ahead of us, there is merely … more stuff … events”. The upshot is a peculiarly postmodern malaise. Gibson calls it “future fatigue”. This is a condition where we have grown weary of an obsession with romantic and dystopian visions of the future. Instead, our focus is on now.
Gibson’s diagnosis is supported by international attitude surveys. One found that most Americans rarely think about the future and only a few think about the distant future. When they are forced to think about it, they don’t like what they see. Another poll, by the Pew Research Center, found that 44% of Americans were pessimistic about what lies ahead.
But pessimism about the future isn’t limited to the US. One international poll of over 400,000 people from 26 countries found that people in developed countries tended to think that the lives of today’s children will be worse than their own. And a 2015 international survey by YouGov found that people in developed countries were particularly pessimistic. For instance, only 4% of people in Britain thought things were improving. This contrasted with 41% of Chinese people who thought things were getting better.
But other research suggests that this widespread pessimism is irrational. Those who take this view point out that, on many measures, the world is actually improving. And an Ipsos poll found that better-informed people tend to be less pessimistic about the future.
Although there may be some objective reasons for pessimism, other factors probably help explain future fatigue. Researchers who study forecasting say there are good reasons why we might avoid making predictions about the distant future.
For one, forecasting is always a highly uncertain activity. The longer the time frame one is making predictions about and the more complicated the prediction, the more room there is for error. This means that while it might be rational to make a projection about something simple in the near future, it is probably pointless to make projections about something complex in the very distant future.
Economists have known for many years that people tend to discount the future. That means we put a greater value on something which we can get immediately than something we have to wait for. More attention is paid to pressing short-term needs while longer-term investments go unheeded.
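The arithmetic of discounting can be made concrete with the standard exponential model economists use (a textbook formula, not one drawn from the surveys discussed here): a reward of value V received t years from now is worth V/(1+r)^t today, for an annual discount rate r. A minimal sketch:

```python
# Exponential time discounting: a standard economic model (illustrative,
# not a claim about how the studies above measured it).

def discounted_value(value: float, rate: float, years: float) -> float:
    """Present value of a future reward under exponential discounting."""
    return value / (1.0 + rate) ** years

# At a 5% annual discount rate, $100 received in 10 years is worth
# roughly $61 today, and only about $9 if it is 50 years away --
# one way to see why distant futures command so little attention.
now = discounted_value(100, 0.05, 0)            # 100.0
in_a_decade = discounted_value(100, 0.05, 10)   # ~61.4
in_fifty_years = discounted_value(100, 0.05, 50)  # ~8.7
```

Even a modest discount rate, compounded over decades, shrinks the distant future almost to nothing, which is the economists' version of the short-termism described above.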
Psychologists have also found that futures that are close at hand seem concrete and detailed, while those that are further away seem abstract and stylised. Near futures are more likely to be based on personal experience, while distant futures are shaped by ideologies and theories.
When a future seems closer and more concrete, people tend to think it is more likely to occur. Studies have also shown that near, concrete futures are more likely to spark us into action. So the preference for concrete, close-at-hand futures means people tend to put off thinking about more abstract and distant possibilities.
The human aversion to thinking about the future is partially hardwired. But there are also particular social conditions that make us more likely to give up on the future. Sociologists have argued that for people living in fairly stable societies, it is possible to generate stories about what the future might be like. But in moments of profound social dislocation and upheaval, these stories stop making sense and we lose a sense of the future and how to prepare for it.
This is what happened in many Native American communities under colonialism. Plenty Coups, the leader of the Crow people, described it this way: “When the buffalo went away the hearts of my people fell to the ground, and they could not lift them up again. After this nothing happened.”
But instead of being thrown into despair by this loss of the future, Gibson thinks we should be a little more optimistic. “This new found state of No Future is, in my opinion, a very good thing … It indicates a kind of maturity, an understanding that every future is someone else’s past, every present is someone else’s future”.
In 2007 the then President of China, Hu Jintao, delivered a speech to South Africans acknowledging the benefits of a strategic partnership. He also stressed that the connection was not merely pragmatic: it must, he argued, serve to honour and deepen the countries’ long-abiding friendship in the future.
The idea of friendship has undoubtedly informed the nature of Sino-African engagement. But if we use contemporary science fiction as a barometer, African sentiment towards China appears more inclined towards dystopian forecasts.
Science fiction writing often serves as a thought experiment that explores shared and hidden beliefs whose material and political reverberations lie further in the future. Various short stories portray how China’s economic ascension, operating under the guise of African development, uses technology as a means to invade and control Africa.
Narratives of this kind surface neo-colonial fears that a “new scramble for Africa” seems imminent. But they also provide a speculative arena to interrogate how we ultimately perceive the value, use and future of Sino-African political friendship.
As I’ve explored in my research, this means that science fiction can serve as an imaginative production of political theory. It intercedes in ways that international relations cannot because of the confines of diplomacy.
My research focused on three short science fiction stories from Africa.
In the first, Tendai Huchu’s “The Sale”, China has taken control of Zimbabwe through the production of a corporatised state called CorpGov. It’s a surveillance state that leaves no room for political dissension. Zimbabwe has been purchased by China in a piecemeal fashion. It is now set to lose its last free portion of land in a final sale. When a young Zimbabwean man fails to prevent the sale of this remaining plot of land, he succumbs to despair and puts himself in the path of a Chinese bulldozer.
His suicide evokes a sense of profound helplessness and warns that China will need to be vehemently counteracted in the near future if Zimbabwe’s already breached borders are to be protected. This stark clarity is what gives Huchu’s narrative its force.
The pathos of “The Sale” holds a mirror up to China. It communicates an earnest appeal for more humane engagement. Yet the heaviness of its dystopian narrative also breeds a spirit of nihilism or Afropessimism. This overrides any sense of African accountability in the degenerative state of future Sino-Zimbabwean relations.
Abigail Godsell’s “Taal” (an Afrikaans word meaning “language”) is self-conscious in this regard. It’s set in the year 2050, after a nuclear war between China and America has left the entire globe in a state of desolation. As a result, the South African government willingly signed over ownership of the country to China in exchange for protection.
The central protagonist, an especially resentful young woman named Callie, has joined a militant rebel group in a covert attempt to overthrow the Chinese. But after injuring a soldier, she pulls off his helmet and is surprised when he speaks to her in Afrikaans because, to all appearances, he is Chinese. His Afrikaans marks him as a fellow South African. She is stupefied by the exchange: it exposes her simplistic understanding of what the enemy should look like.
This uncanny revelation undoubtedly draws attention to the spectral presence of Chinese-South Africans who have not received due recognition as bona fide citizens.
Callie, who is initially critical of Chinese propaganda, begins to read her positionality as a South African freedom fighter on equally problematic terms. Her defensiveness drops and she confesses that South Africa was caught off-guard amid a global crisis. The country did not have a sufficient national security plan; China has offered significantly more protection than the South African government was capable of at the time.
Godsell’s introspective narrative shifts the focus away from Chinese agitation. It allows the reader to consider the nature of South African apathy, conveying that the country may not lack a fighting spirit but, unlike China, lacks the foresight and organisation needed to bolster the nation.
Negative representations of China in the African imaginary gesture at the idea that a certain amount of envy informs the continent’s responses to China. They also suggest that African countries can benefit from emulating China’s uncompromising nationalistic and commercial drive. This possibility is more fully explored in Mandisi Nkomo’s “Heresy”.
Nkomo’s narrative is set in the year 2040. South-South interactions challenge the global status quo. China has risen in global economic rankings, but South Africa has not fallen under its sway: the nations are caught up in a highly competitive space race. South Africa is determined not to be outdone by the Chinese and channels its resources into meeting this goal.
“Heresy” conveys how Africans can construct an invisible enemy out of China by exponentially accelerating South African development. This light-hearted narrative assumes the challenge of imagining the current tension of Sino-African relations otherwise. It shows how friendly rivalry can inadvertently lead to African progress.
In their book Friendship and International Relations, academics Andrea Oelsner and Simon Koschut write that it is:
necessary to think of international friendship not as something that is merely being performed at the intergovernmental level but as something that is being enacted in the day-to-day activities and imaginations at all levels of society.
This certainly includes science fiction narratives that present us with a “succession of literary experiments, each one examining a small part of a much larger image and each equally necessary to the greater vision”.
Through these short stories, it becomes possible to see how China-Africa relations need not result in Chinese neocolonialism and African exploitation. By reinventing and reinterpreting the roles of both parties, these narratives offer more creative approaches to political friendship.
Similarly, pursued in this way, the future of China-Africa relations need not be seen as a singular act of solidarity that demands repeating. Instead it could be viewed as a more fluid encounter that allows for mutual investment in world-building projects while also providing enough objective distance to nurture difference and autonomy.
I count myself lucky. Weird, I know, in this day and age when all around us the natural and political world is going to hell in a handbasket. But that, in fact, may be part of it.
Back when I started writing, realism had such a stranglehold on publishing that there was little room for speculative writers and readers. (I didn’t know that’s what I was until I read it in a reader’s report for my first novel. And even then I didn’t know what it was, until I realised that it was what I read, and had always been reading; what I wrote, and wanted to write.) Outside of the convention rooms, that is, which were packed with less-literary-leaning science-fiction and fantasy producers and consumers.
Realism was the rule, even for those writing non-realist stories, such as popular crime and commercial romance. Perhaps this dominance was because of a culture heavily influenced by an Anglo-Saxon heritage. Richard Lea has written in The Guardian of “non-fiction” as a construct of English literature, arguing other cultures do not distinguish so obsessively between stories on the basis of whether or not they are “real”.
Regardless of the reason, this conception of literary fiction has been widely accepted – leading self-described “weird fiction” novelist China Miéville to identify the Booker as a genre prize for specifically realist literary fiction; a category he calls “litfic”. The best writers Australia is famous for producing aren’t only a product of this environment, but also role models who perpetuate it: Tim Winton and Helen Garner write similarly realistically, albeit generally fiction for one and non-fiction for the other.
Today, realism remains the most popular literary mode. Our education system trains us to appreciate literatures of verisimilitude; or, rather, literature we identify as “real”, charting interior landscapes and emotional journeys that generally represent a quite particular version of middle-class life. It’s one that may not have much in common these days with many people’s experiences – middle-class, Anglo or otherwise – or even our exterior world(s).
Like other kinds of biases, realism has been normalised, but there is now a growing recognition – a re-evaluation – of different kinds of “un-real” storytelling: “speculative” fiction, so-called for its obviously invented and inventive aspects.
As one commentator has put it, this shift reflects:

a much larger collective conviction about who’s entitled to tell stories, what stories are worth telling, and who among the storytellers gets taken seriously … not only in terms of race and gender, but in terms of what has long been labelled “genre” fiction.
Jane Rawson’s latest book, From the Wreck, intertwines the story of her ancestor George Hills, who was shipwrecked off the coast of South Australia and survived eight days at sea, with the tale of a shape-shifting alien seeking refuge on Earth. In an Australian first, it was long-listed for the Miles Franklin, our most prestigious literary award, after having won the niche Aurealis Award for Speculative Fiction.
The Aurealis awards were established in 1995 by the publishers of Australia’s longest-running, small-press science-fiction and fantasy magazine of the same name. As well as recognising the achievements of Australian science-fiction, fantasy and horror writers, they were designed to distinguish between those speculative subgenres.
Last year, five of the six finalists for the Aurealis awards were published, promoted and shelved as literary fiction.
A broad church
Perhaps what counts as speculative fiction is also changing. The term is certainly not new; it was first used in an 1889 review, but came into more common usage after genre author Robert Heinlein’s 1947 essay On the Writing of Speculative Fiction.
Whereas science fiction generally engages with technological developments and their potential consequences, speculative fiction is a far broader, vaguer term. It can be seen as an offshoot of the popular science-fiction genre, or a more neutral umbrella category that simply describes all non-realist forms, including fantasy and fairytales – from the epic of Gilgamesh through to The Handmaid’s Tale.
While critic James Wood argues that “everything flows from the real … it is realism that allows surrealism, magic realism, fantasy, dream and so on”, others, such as author Doris Lessing, believe that everything flows from the fantastic; that all fiction has always been speculative. I am not as interested in which came first (or which has more cultural, or commercial, value) as I am in the fact that speculative fiction – “spec-fic” – seems to be gaining literary respectability.
(Next step, surely, mainstream popularity! After all, millions of moviegoers and television viewers have binge-watched the rise of fantastic forms, and audiences are well versed in unreal onscreen worlds.)
One reason for this new interest in an old but evolving form has been well articulated by author and critic James Bradley: climate change. Writers, and publishers, are embracing speculative fiction as an apt form to interrogate what it means to be human, to be humane, in the current climate – and to engage with ideas of posthumanism too.
These are the sorts of existential questions that have historically driven realist literature.
According to the World Wildlife Fund’s 2018 Living Planet Report, the world’s wildlife populations declined by an average of 60% between 1970 and 2014. The year 2016 was declared the hottest on record, echoing the previous year and the one before that. People under 30 have never experienced a month in which average temperatures were below the long-term mean. Hurricanes register on seismometers and the Australian Bureau of Meteorology has added a colour to its temperature maps as the heat keeps climbing.
There is an infographic doing the rounds on Facebook that shows sister countries with comparable climates to (warming) regions of Australia. But it doesn’t reflect the real issue. Associate Professor Michael Kearney, Research Fellow in Biosciences at the University of Melbourne, points out that no-one anywhere in the world has any experience of our current CO2 levels. The changed environment is, he says – using a word that is particularly appropriate for my argument – a “novel” situation.
Elsewhere, biologists are gathering evidence of algae that carbon dioxide has made carbohydrate-rich but less nutritious. So the plankton that rely on them to survive might eat more and more and yet still starve.
Fiction focused on the inner lives of a limited cross-section of people no longer seems the best literary form to reflect, or reflect on, our brave new outer world – if, indeed, it ever was.
Whether it’s a creative response to catastrophic climate change, or an empathic, philosophical attempt to express cultural, economic, neurological – or even species – diversification, the recognition that works such as Rawson’s are receiving surely shows we have left Modernism behind and entered the era of Anthropocene literature.
And her book is not alone. Other wild titles achieving similar success include Krissy Kneen’s An Uncertain Grace, shortlisted for the Aurealis, the Stella prize and the Norma K. Hemming award – given to mark excellence in the exploration of themes of race, gender, sexuality, class or disability in a speculative fiction work.
Kneen’s book connects five stories spanning a century, navigating themes of sexuality – including erotic explorations of transgression and transmutation – against the backdrop of a changing ocean.
Earlier, more realist but still speculative titles (from 2015) include Mireille Juchau’s The World Without Us and Bradley’s Clade. These novels fit better with Miéville’s description of “litfic”, employing realistic literary techniques that would not be out of place in Winton’s books, but they have been called “cli-fi” for the way they put climate change squarely at the forefront of their stories (though their authors tend to resist such generic categorisation).
Both novels, told across time and from multiple points of view, are concerned with radically changed and catastrophically changing environments, and how the negative consequences of our one-world experiment might well – or, rather, ill – play out.
Catherine McKinnon’s Storyland is a more recent example that similarly has a fantastic aspect. The author describes her different chapters, set in different times and culminating – Cloud Atlas-like – in one futuristic episode, as “timeslips” or “time shifts” rather than time travel. Yet it has been received as speculative – and not in a pejorative way, despite how some “high-art” literary authors may feel about “low-brow” genre associations.
Kazuo Ishiguro, for instance, told The New York Times when The Buried Giant was released in 2015 that he was fearful readers would not “follow him” into Arthurian Britain. Le Guin was quick to call him out on his obvious attempt to distance himself from the fantasy category. Michel Faber, around the same time, told a Wheeler Centre audience that his Book of Strange New Things, where a missionary is sent to convert an alien race, was “not about aliens” but alienation. Of course it is the latter, but it is also about the other.
All these more-and-less-speculative fictions – these not-traditionally-realist literatures – analyse the world in a way that it is not usually analysed, to echo Tim Parks’s criterion for the best novels. Interestingly, this sounds suspiciously like science-fiction critic Darko Suvin’s famous conception of the genre as a literature of “cognitive estrangement”, which inspires readers to re-view their own world, think in new ways, and – most importantly – take appropriate action.
A new party
Perhaps better case studies of what local spec-fic is or does – when considering questions of diversity – are Charlotte Wood’s The Natural Way of Things and Claire Coleman’s Terra Nullius.
The first is a distinctly Aussie Handmaid’s Tale for our times, where “girls” guilty by association with some unspecified sexual scenario are drugged, abducted and held captive in a remote outback location.
The latter is another idea whose time has come: an apocalyptic act of colonisation. Not such an imagined scenario for Noongar woman Coleman. It’s a tricky plot to tell without giving away spoilers – the book opens on an alternative history, or is it a futuristic Australia? Again, the story is told through different points of view, which prioritises collective storytelling over the authority of a single voice.
“The entire purpose of writing Terra Nullius,” Coleman has said, “was to provoke empathy in people who had none.”
This connection of reading with empathy is a case Neil Gaiman made in a 2013 lecture when he told of how China’s first party-approved science-fiction and fantasy convention had come about five years earlier.
The Chinese had sent delegates to companies such as Apple and Google to try to work out why America was inventing the future, he said. And they had discovered that all the programmers, all the entrepreneurs, had read science fiction when they were children.
“Fiction can show you a different world,” said Gaiman. “It can take you somewhere you’ve never been.”
And when you come back, you see things differently. And you might decide to do something about that: you might change the future.
Perhaps the key to why speculative fiction is on the rise is the ways in which it is not “hard” science fiction. Rather than focusing on technology and world-building to the point of potential fetishism, as our “real” world seems to be doing, what we are reading today is a sophisticated literature engaging with contemporary cultural, social and political matters – through the lens of an “un-real” idea, which may be little more than a metaphor or errant speculation.
On January 17 1803, a young man named George Forster was hanged for murder at Newgate prison in London. After his execution, as often happened, his body was carried ceremoniously across the city to the Royal College of Surgeons, where it would be publicly dissected. What actually happened, though, was rather more shocking than simple dissection. Forster was going to be electrified.
The experiments were to be carried out by the Italian natural philosopher Giovanni Aldini, the nephew of Luigi Galvani, who discovered “animal electricity” in 1780, and for whom the field of galvanism is named. With Forster on the slab before him, Aldini and his assistants started to experiment. The Times newspaper reported:
On the first application of the process to the face, the jaw of the deceased criminal began to quiver, the adjoining muscles were horribly contorted, and one eye was actually opened. In the subsequent part of the process, the right hand was raised and clenched, and the legs and thighs were set in motion.
It looked to some spectators “as if the wretched man was on the eve of being restored to life.”
By the time Aldini was experimenting on Forster, the idea that there was some peculiarly intimate relationship between electricity and the processes of life was at least a century old. Isaac Newton speculated along such lines in the early 1700s. In 1730, the English astronomer and dyer Stephen Gray demonstrated the principle of electrical conductivity. Gray suspended an orphan boy on silk cords in mid-air and placed a positively charged tube near the boy’s feet, creating a negative charge in them. Due to his electrical isolation, this created a positive charge in the child’s other extremities, causing a nearby dish of gold leaf to be attracted to his fingers.
In France in 1746 Jean Antoine Nollet entertained the court at Versailles by causing a company of 180 royal guardsmen to jump simultaneously when the charge from a Leyden jar (an electrical storage device) passed through their bodies.
It was to defend his uncle’s theories against the attacks of opponents such as Alessandro Volta that Aldini carried out his experiments on Forster. Volta claimed that “animal” electricity was produced by the contact of metals rather than being a property of living tissue, but several other natural philosophers took up Galvani’s ideas with enthusiasm. Alexander von Humboldt experimented with batteries made entirely from animal tissue. Johann Wilhelm Ritter even carried out electrical experiments on himself to explore how electricity affected the sensations.
The idea that electricity really was the stuff of life and that it might be used to bring back the dead was certainly a familiar one in the kinds of circles in which the young Mary Wollstonecraft Shelley – the author of Frankenstein – moved. The English poet, and family friend, Samuel Taylor Coleridge was fascinated by the connections between electricity and life. Writing to his friend the chemist Humphry Davy after hearing that he was giving lectures at the Royal Institution in London, he told him how his “motive muscles tingled and contracted at the news, as if you had bared them and were zincifying the life-mocking fibres”. Percy Bysshe Shelley himself – who would become Wollstonecraft’s husband in 1816 – was another enthusiast for galvanic experimentation.
Aldini’s experiments with the dead attracted considerable attention. Some commentators poked fun at the idea that electricity could restore life, laughing at the thought that Aldini could “make dead people cut droll capers”. Others took the idea very seriously. Lecturer Charles Wilkinson, who assisted Aldini in his experiments, argued that galvanism was “an energising principle, which forms the line of distinction between matter and spirit, constituting in the great chain of the creation, the intervening link between corporeal substance and the essence of vitality”.
In 1814 the English surgeon John Abernethy made much the same sort of claim in the annual Hunterian lecture at the Royal College of Surgeons. His lecture sparked a violent debate with fellow surgeon William Lawrence. Abernethy claimed that electricity was (or was like) the vital force while Lawrence denied that there was any need to invoke a vital force at all to explain the processes of life. Both Mary and Percy Shelley certainly knew about this debate – Lawrence was their doctor.
By the time Frankenstein was published in 1818, its readers would have been familiar with the notion that life could be created or restored with electricity. Just a few months after the book appeared, the Scottish chemist Andrew Ure carried out his own electrical experiments on the body of Matthew Clydesdale, who had been executed for murder. When the dead man was electrified, Ure wrote, “every muscle in his countenance was simultaneously thrown into fearful action; rage, horror, despair, anguish, and ghastly smiles, united their hideous expression in the murderer’s face”.
Ure reported that the experiments were so gruesome that “several of the spectators were forced to leave the apartment, and one gentleman fainted”. It is tempting to speculate about the degree to which Ure had Mary Shelley’s recent novel in mind as he carried out his experiments. His own account of them was certainly quite deliberately written to highlight their more lurid elements.
Frankenstein might look like fantasy to modern eyes, but to its author and original readers there was nothing fantastic about it. Just as everyone knows about artificial intelligence now, so Shelley’s readers knew about the possibilities of electrical life. And just as artificial intelligence (AI) invokes a range of responses and arguments now, so did the prospect of electrical life – and Shelley’s novel – then.
The science behind Frankenstein reminds us that current debates have a long history – and that in many ways the terms of our debates now are determined by it. It was during the 19th century that people started thinking about the future as a different country, made out of science and technology. Novels such as Frankenstein, in which authors made their future out of the ingredients of their present, were an important element in that new way of thinking about tomorrow.
Thinking about the science that made Frankenstein seem so real in 1818 might help us consider more carefully the ways we think now about the possibilities – and the dangers – of our present futures.
Medieval science fiction
Science fiction may seem resolutely modern, but the genre could actually be considered hundreds of years old. There are the alien green “children of Woolpit”, who appeared in 12th-century Suffolk and were reported to have spoken a language no one could understand. There’s also the story of Eilmer the 11th-century monk, who constructed a pair of wings and flew from the top of Malmesbury Abbey. And there’s the Voynich Manuscript, a 15th-century book written in an undeciphered script, full of illustrations of otherworldly plants and surreal landscapes.
These are just some of the science fictions to be discovered within the literatures and cultures of the Middle Ages. There are also tales to be found of robots entertaining royal courts, communities speculating about utopian or dystopian futures, and literary maps measuring and exploring the outer reaches of time and space.
The influence of the genre we call “fantasy”, which often looks back to the medieval past in order to escape a techno-scientific future, means that the Middle Ages have rarely been associated with science fiction. But, as we have found, peering into the complex history of the genre, while also examining the scientific achievements of the medieval period, reveals that things are not quite what they seem.
Science fiction is particularly troublesome when it comes to matters of classification and origin. Indeed, there remains no agreed-upon definition of the genre. A variety of commentators have located the beginnings of SF in the early-20th-century explosion of pulp magazines, and in the work of Hugo Gernsback (1884-1967), who proposed the term “scientifiction” when editing and publishing the first issue of Amazing Stories, in 1926.
“By ‘scientifiction’,” Gernsback wrote, “I mean the Jules Verne, H G Wells and Edgar Allan Poe type of story – a charming romance intermingled with scientific fact and prophetic vision … Not only do these amazing tales make tremendously interesting reading – they are always instructive.”
But here Gernsback was already looking backwards in time to earlier writers to define SF. And his “definition” could equally be applied to literary creations from much further into the past.
Science and fiction
Another longstanding idea is that the “science” in science fiction is key: SF can only begin, many historians of the genre proclaim, following the birth of modern science.
Alongside histories of SF, histories of science have long avoided the medieval period (over a thousand years in which, presumably, nothing happened). Yet the Middle Ages was no dark, static, ignorant time of magic and superstition, nor was it an aberration in the neat progression from enlightened ancients to our modern age. It was actually a time of enormous advances in science and technology.
The compass and gunpowder were developed and improved upon, and spectacles, the mechanical clock and the blast furnace were invented. The period also laid the foundations for modern science by founding universities, advancing the scientific learning of the classical world, and helping to focus natural philosophy on the physics of creation. The medieval science of “computus”, for instance, was a complex measuring of time and space.
Scholars have started to reveal the convergence of science, technology and the imagination in medieval literary culture, demonstrating that this era could be characterised by inventiveness and a preoccupation with novelty and discovery. Take the medieval romances that feature Alexander the Great soaring heavenwards in a flying machine and exploring the depths of the ocean in his proto-submarine. Or that of the famous medieval traveller, Sir John Mandeville, who tells of marvellous, automated golden birds that beat their wings at the table of the Great Chan.
Like the authors of more modern science fictions, medieval writers tempered this sense of wonder with scepticism and rational inquiry. Geoffrey Chaucer describes the procedures and instruments of alchemy (an early form of chemistry) in such precise terms that it is tempting to think that the author must have had some experience of the practice. Yet his Canon’s Yeoman’s Tale also displays a lively distrust of fraudulent alchemists, sending up their pseudo-science while imagining and dramatising its harmful effects in the world.
The medieval future
Modern science fiction has dreamt up many worlds based on the Middle Ages, using it as a place to be revisited, as a space beyond earth, or as an alternate or future history. The representation of the medieval past is not always simplistic, nor always confined to “back then”.
Walter M Miller’s immensely detailed medieval future in A Canticle for Leibowitz (1959), for instance, dwells on the way the past consistently reemerges in the fragments, materials and conflicts of a distant future. Connie Willis’s Doomsday Book (1992), meanwhile, follows a time-travelling researcher of the near-future back to a medieval Oxford in the grip of the Black Death.
Although “medieval science fiction” may sound like an impossible fantasy, it’s a concept that can encourage us to ask new questions about an often-overlooked period of literary and scientific history. Who knows? The many wonders, cosmologies and technologies of the Middle Ages may have an important part to play in a future yet to come.