Fusion is failing, but not because it is hard

Fusion is ‘thirty years away and always will be’ – but why?

There are basically three ideas:

  1. We need to keep doing more or less what we’re doing now but for a longer time, maybe much longer, because fusion is hard.
  2. We need to invent a ‘MacGuffin’ that will suddenly make everything easy: either a very complicated equation (the Einstein approach, favoured by academics) or a very complicated apparatus (the Doc Brown approach, favoured by startups).
  3. Fusion is junk and will never (and can never) work; see e.g. “The Fairy Tale of Nuclear Fusion”.

The first option is now treated with a lot of scepticism. Mainstream fusion research has been going for seventy years without any experiment gaining more power from fusion than it consumes, and magnetic fusion hasn’t even posted a new power-gain record in almost thirty years. If this approach ever works, “maybe much longer” looks like “longer than anyone cares to wait”.

In recent years the ‘MacGuffin’ option has been reinvigorated by a bloom in fusion startups. All startups basically have to propose a machine that can generate something to sell – like electricity – with a capital expense investors might accept. Producing net power at a cost generally in the hundreds of millions demands a small apparatus with much greater size efficiency, often radically greater, and that in turn generally demands speculative innovations.

Finally, the view of most of society – if it thinks about fusion at all – is that fusion doesn’t work. Fusion theoretically offers unlimited clean power, which would solve several of society’s biggest problems at the same time. Nonetheless, social interest in and funding for fusion is minuscule. The only rational explanation is that most people, investors, and political authorities believe it is speculative at best and, at worst, unlikely ever to work.

All of these views are mistaken.

Fusion’s current technical means are actually extremely good and adequate for the task. They already sustain temperatures hotter than the core of the sun in a device maybe four metres wide. The ‘problem’ is that the device needs to be eight or maybe even twenty metres wide.

Why is that a problem? Only on cost grounds. However, the most expensive fusion machines ever operated were not, in fact, costly. JET, which came close to producing more power from fusion than it consumed, cost only $1.5 billion to construct, in current US dollars! Its closest competitor, TFTR, was even cheaper at $1 billion, again in current dollars. This is not a lot of money!

A ten-times-bigger machine at $10 billion, perhaps constructed over ten years, would be within reach of even medium-sized economies that generally do not consider themselves individually competitive in fusion. For example, this would be much less than Spain spent on solar feed-in tariffs in the 2010s, or about five months of NASA’s current (not Apollo-era!) budget. Yet the belief that fusion machines are extraordinarily costly – apparently based on feelings rather than numbers – has meant that progress stopped at the JET scale, as larger tokamaks could only be contemplated within the framework of a huge multinational treaty organisation.
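As a sanity check on those comparisons, here is a minimal back-of-the-envelope sketch. The NASA and Spanish figures are rough, order-of-magnitude assumptions used purely for illustration, not numbers taken from the book:

    # Back-of-the-envelope comparison of a hypothetical $10 billion fusion
    # machine with other public spending. All figures are rough assumptions
    # in current US dollars, for illustration only.
    machine_cost = 10e9          # hypothetical machine ten times JET's cost
    build_years = 10
    nasa_annual_budget = 25e9    # assumed ~$25 billion per year
    spain_feed_in_outlay = 30e9  # assumed cumulative 2010s solar tariff spend (order of magnitude)

    print(f"Spend during construction: ${machine_cost / build_years / 1e9:.0f} billion per year")
    print(f"Months of NASA budget:     {12 * machine_cost / nasa_annual_budget:.1f}")
    print(f"Share of assumed Spanish outlay: {machine_cost / spain_feed_in_outlay:.0%}")

On these assumptions the whole machine costs about five months of NASA’s budget and a fraction of the assumed Spanish outlay, which is the order of magnitude the argument turns on.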

To succeed, fusion needs to embrace size and accept costs.

Unfortunately, the main reason this doesn’t happen may be that fusion research’s own leadership and experts advise against it. Over decades, and accelerating in recent years, the trend has run the opposite way, demanding miniaturisation at all costs. This has created new problems and weakened the case for fusion rather than strengthening it.

It has led to two major errors that seem likely to be catastrophic.

The first is the choice of tritium fuel. Although confidently asserted to be the natural or necessary choice by competent authorities, this fuel has never been shown to be manufacturable, probably is not manufacturable, and will probably lead to a complete failure of any design based on it. The purpose of using tritium is ultimately to reduce the device size by increasing the reaction rate. The priority is to reduce spending on metal and air, even at the price of an unproven fuel cycle built on an ultra-rare radioactive isotope.
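To make the size logic concrete, here is a deliberately crude sketch of why tritium shrinks the machine. Fusion power density scales roughly as the product of the fuel densities, the reactivity and the energy released per reaction, and at reactor-like temperatures the D-T reactivity is on the order of a hundred times that of D-D. The reactivity and density values below are rough, illustrative assumptions, and the scaling ignores confinement physics entirely:

    # Crude comparison of D-T versus D-D fusion power density at a fixed
    # density and temperature. Reactivity values are rough figures near
    # 15 keV, used only for illustration.
    sigma_v_DT = 3e-22             # <sigma*v> for D-T, m^3/s (assumed)
    sigma_v_DD = 3e-24             # <sigma*v> for D-D, both branches, m^3/s (assumed)
    E_DT = 17.6e6 * 1.6e-19        # energy per D-T reaction, joules
    E_DD = 3.65e6 * 1.6e-19        # mean energy per D-D reaction, joules

    n = 1e20                       # fuel density, m^-3 (typical tokamak order)
    p_DT = 0.25 * n * n * sigma_v_DT * E_DT   # W/m^3, 50/50 D-T mix
    p_DD = 0.5 * n * n * sigma_v_DD * E_DD    # W/m^3, factor 1/2 for identical particles

    ratio = p_DT / p_DD
    print(f"D-T power density is roughly {ratio:.0f}x that of D-D")
    # At the same density, a D-D device needs roughly 'ratio' times the
    # volume for the same total power, i.e. ratio**(1/3) times the linear size.
    print(f"Naive linear-size penalty for D-D: about {ratio ** (1/3):.1f}x")

That two-orders-of-magnitude gap in power density is essentially the whole case for tritium: at the same density a deuterium machine needs correspondingly more volume for the same output, a cost the book argues should simply be paid in metal and air.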

The second is the materials problem of containing a reaction designed to be as power dense as possible. Generally speaking, power density is what limits engine performance, because the structure has to survive the power flowing through it. Only in fusion, with its over-focus on miniaturisation untempered by practical realities, have researchers been able to propose maximising the performance of the fuel with little or no interest in how the reaction will be contained, or even whether it can be contained at all.

It is clear to anyone who has been following fusion startups on LinkedIn over the past four years that these topics, initially ignored and presumably taken for granted, now account for a large and growing proportion of grant announcements and job advertisements. These companies seem to have blundered into these issues blindly, although they were predictable as early as the 1970s.

The natural solution to all these problems is simple: build a larger, less performant machine, and use deuterium as fuel. Power density will naturally drop, and containment – the supposedly big problem in fusion research – will naturally improve. The only cost is more money for metal and air. Unlike a lack of fuel, or an engine that destroys itself in operation, this purely monetary cost can be tolerated.
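A hedged illustration of how size eases the containment problem: for a fixed total fusion power, the load on the first wall is that power divided by the wall area, so the extra metal and air of a bigger device directly buys down the flux the materials must survive. The torus geometry and the dimensions below are illustrative assumptions, not designs from the book:

    import math

    # Rough first-wall loading for a torus, with the wall area approximated
    # as 4 * pi^2 * R * a (major radius R, minor radius a, in metres).
    # The power level and the dimensions are illustrative assumptions only.
    def wall_load(fusion_power_w, major_r, minor_r):
        area = 4 * math.pi**2 * major_r * minor_r   # wall area, m^2
        return fusion_power_w / area                # W/m^2

    p_fus = 2e9   # 2 GW of fusion power, an often-quoted power plant scale
    compact = wall_load(p_fus, major_r=4.0, minor_r=1.3)    # compact, high power density device
    large = wall_load(p_fus, major_r=12.0, minor_r=3.9)     # same power, three times the linear size

    print(f"Compact device wall load: {compact / 1e6:.1f} MW/m^2")
    print(f"Large device wall load:   {large / 1e6:.1f} MW/m^2")

In this toy example, tripling every linear dimension cuts the wall load ninefold, from a level far beyond what first-wall concepts usually assume to one near the commonly quoted range of a few megawatts per square metre.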

Relative to other big engineering projects, our biggest tokamaks are no longer particularly expensive. Perhaps they were when they were ordered – in the early 1970s. Fusion research, meanwhile, can reorient itself from miniaturisation at all costs to the economic simplification of large machines. Wind and solar have shown the potential for factor-of-ten cost improvements without radically changing the underlying technology.

Finally, a careful historical analysis shows that the choice of the tokamak – by far the dominant fusion reactor configuration today – is almost certainly a mistake. Far from resting on decades of expert best judgement and superior performance, the tokamak rose to prominence in just a few years in the late 1960s and early 1970s, on the basis of research politics as much as scientific results. It had the good luck to be temporarily the most favoured configuration at the point when large spending decisions were made that could not then easily be reversed.

Unfortunately, tokamaks do not produce stable or sustainable high power fusion plasmas. The stellarator seems likely to replace the tokamak as the machine capable of stable power plant operation, and likely would have done as early as the 1960s if not for a series of remarkable and unpredictable events.

All of this history, along with the detailed argument for a workable approach to fusion power, is set out in the new book, Fusion’s Fading Star.

This book, written by an experienced fusion researcher and extensively citing the original scientific literature, explains the history of fusion research and its technical basis in non-technical language. It demolishes a number of dominant prejudices in fusion research, convincingly develops the argument that large, low-power-density stellarators are likely to be the only viable path forward, and shows why this can be achieved at a reasonable and justifiable cost.

The biggest conclusion of this book is that fusion, however misconceived and poorly executed in practice, is not only possible but inevitable – it is within our grasp today.

Amazon: Fusion’s Fading Star
