
Articles in Environment/Sustainability

Andris Piebalgs: getting a sense of proportion
Saturday, 29 Mar, 2008 – 11:30

Andris Piebalgs continued his blogging on bio-fuels this Friday, addressing some of the concerns raised by readers of his previous entry.

I agree that a radical change in consumer behavior is needed if we want Europe to be more energy efficient. At the same time, as policy makers we have to come up with policies that are based on present day realities. And the reality is that most Europeans are living and working in big cities and using modern means of transport. It would be unrealistic to impose sanctions on car producers and users if no alternatives are provided.

Before continuing, I can’t help but express once more my pleasure at seeing the EU’s leaders interacting so closely with their citizens. More bio-fuel talk below the fold.


Crossposted at the European Tribune.

In Europe, we use less than 2 percent of our cereal production for biofuels, so they do not contribute significantly to higher food prices in the European context. Even if we reach our 10% biofuels target by 2020, the price impact will be small. Our modeling suggests that it will cause an 8 to 10% increase in rapeseed prices and a 3 to 6% increase in cereal prices. An increase in the price of the latter has very little influence on the cost of bread: cereals make up around 4 per cent of the consumer price of a loaf.

[...]

We need to use first-generation biofuels as a bridge to second-generation biofuels, which use lignocellulosic materials as a feedstock. With this in mind, the Commission, within the forthcoming review of the Common Agricultural Policy, will urge farmers to invest more in short-rotation forestry crops and perennial grasses, which are the most typical feedstocks for advanced biofuels.

Over the past 30 years, Europe’s farmers have stood accused, through their association with the Common Agricultural Policy, of over-producing and dumping their surpluses, with the aid of massive export subsidies, on over-supplied world markets, thereby depressing market prices and contributing massively to poverty and starvation in poor countries. That criticism has now been reversed: the charge is that EU biofuel policy will contribute to third-world poverty by driving food prices up. My impression from this debate is sometimes that we Europeans know best what is good for people in the developing world. Let them speak for themselves.

[...]

And let’s not forget that oil is a finite commodity, and high oil prices are one of the main factors making food more expensive, particularly in poor countries.

The most important questions raised in the previous blog entries were left unaddressed. Here’s a simple accounting exercise to get a real sense of proportion:

The EU today consumes roughly 20 Mb/d of oil. About two thirds of that is used in transport: call it 13 Mb/d. Assuming the EU’s transport use remains unchanged up to 2020, the 10% target translates to something like 1.3 Mb/d.

Ethanol has an energy density of about 60% that of gasoline; biodiesel is somewhat better, so call the blend 75%. Thus, to replace those 1.3 Mb/d of oil, about 1.73 Mb/d of bio-fuels are needed (1.3 / 0.75).

Ethanol production in temperate climates has an EROEI below 2:1; biodiesel is about 4:1. Oil’s EROEI differs markedly from place to place (offshore versus onshore, etc.), but 10:1 is a general enough mark. Accounting for EROEI, the useful energy the EU gets from that oil is about 1.2 Mb/d. To match that useful energy, taking the 75% energy density together with an assumed 75% net-energy fraction for the bio-fuel blend, total bio-fuel production has to rise to about 2.1 Mb/d (1.2 / 0.75 / 0.75).

Corn crops yield about 3,500 litres of ethanol per hectare per year (that’s 9.5 litres per hectare per day). With sugar cane in the tropics that number goes up to 6,000 (16.5 litres per hectare per day). For biodiesel the numbers are considerably lower, around 1,250 litres per hectare per year (3.5 litres per hectare per day).

Using 159 litres to the barrel, 2.1 Mb corresponds roughly to 333 Ml (megalitres). Using again the most optimistic figure for temperate regions (corn ethanol), the EU would need to allocate thirty-five million (35 000 000) hectares to bio-fuel production.
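These conversions can be reproduced in a few lines. The sketch below re-runs the arithmetic using only the figures quoted above; the 0.75 net-energy fraction assumed for the bio-fuel blend is the value implied by the 1.2/0.75/0.75 step, not an independent data point.

```python
# Back-of-the-envelope check of the land-area estimate above.
BARREL_L = 159                         # litres per barrel

eu_oil = 20.0                          # EU oil consumption, Mb/d
transport = eu_oil * 2 / 3             # ~13 Mb/d used in transport
target = round(transport * 0.10, 1)    # 10% target -> 1.3 Mb/d

density = 0.75                         # bio-fuel energy density vs. oil
oil_net = 1 - 1 / 10                   # oil at 10:1 EROEI -> 90% net energy
useful = round(target * oil_net, 1)    # ~1.2 Mb/d of useful energy

biofuel_net = 0.75                     # assumed net fraction for the blend
gross = useful / density / biofuel_net # ~2.13 Mb/d of gross bio-fuels

litres_per_day = gross * 1e6 * BARREL_L     # ~339 Ml/d (text rounds to 333)
yield_l_ha_day = 3500 / 365                 # corn ethanol, ~9.6 l/ha/day
hectares = litres_per_day / yield_l_ha_day  # ~35 million hectares

print(f"gross bio-fuels: {gross:.2f} Mb/d")
print(f"land required:   {hectares / 1e6:.0f} million hectares")
```

The 333 Ml in the text comes from rounding to 2.1 Mb/d first; carrying full precision gives about 339 Ml/d and the same ~35 million hectare conclusion.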

I live in a state that has an area of less than 9 million hectares. Germany has an area just over 35 million hectares.


All that dark green area producing ethanol in 2020?

Good or evil? Friend or foe? This kind of wording doesn’t fit in my Engineering/Architecture dictionaries. Whether bio-fuels are an option is simply a matter of numbers.

Data sources:

Ethanol fuel

Biodiesel

The EROEI of ethanol

Previous coverage of Andris Piebalgs blog:

Andris Piebalgs on Bio Fuels

Piebalgs on European Energy Security

Andris Piebalgs’ Blog


Luís de Sousa
TheOilDrum:Europe

From Botswana to New England – a different story
Tuesday, 25 Mar, 2008 – 9:00

I have recently been writing about Botswana and its sudden discovery of vulnerability on finding that its supply of electricity would no longer be available. There is a passage in Cape Wind, the book by Wendy Williams and Robert Whitcomb, that shows the increasing vulnerability of places such as New England as the margin between available supply and demand narrows. The event occurred in mid-January 2004, during a sudden cold spell that lasted over a week, and the story is told from the point of view of the Independent System Operator (ISO), located in Holyoke, MA, which manages the supply for some 14 million people.

On January 14th the ISO had assurances that up to 10,000 megawatts would be available from gas-fired power plants, as it anticipated demand rising to around 23,000 to 25,000 megawatts with the temperature expected to drop to minus ten degrees. But by 8:30 am on the first morning of the crisis, this began to change:

A trickle of phone calls began coming in to the Holyoke headquarters, all with pretty much the same bad news. Plant operators who relied on natural gas as their fuel reported that although their plants were in working order, there was no gas available for them to buy. It had all been taken by the companies responsible for providing gas for home heating.

By afternoon the trickle of “no gas” calls became a flood. [...] During this all-time winter peak, when electricity was essential for the very survival of many New Englanders, roughly 7,200 megawatts of gas-fired generation was now unavailable [...] because they couldn’t find enough natural gas to buy.

In the end the crisis was averted by some load shedding, including closing the schools, but it illustrates the coming vulnerabilities we face as our historic assumption that there will be enough power when we need it starts to be significantly challenged. However, in this case action was taken, and things no longer look as grim.
Following the 2004 event, a report (pdf) was prepared for the New England Governors in 2005 by a specially appointed Natural Gas Subcommittee. Summarizing their conclusions (from March 2005), they reported:
• Supplies will be challenged largely in the winter; there is more than enough power otherwise. (The highest electricity use is in the summer – this finding relates to natural gas.)
• Demand can be met through 2010 provided there is adequate LNG supply, without which supply would not be reliable.
• To ensure reliable deliveries beyond 2010 there must be either significant demand reduction or infrastructure development.
• Expansion of fuel switching, energy efficiency and renewable energy programs may be the least expensive ways to improve gas supply reliability, while improving fuel diversity. But expanding LNG facilities provides considerably greater improvement to gas supply reliability.
• Investing in energy efficiency programs may yield benefits, but this will require more study.

The LNG facility in question was the one at Everett, MA, which supplied 20% of the region’s normal gas demand, but 30% at peak. In 2005 the network had storage capacity for 10 days of peak winter demand, but this gas is conventionally stored at pressures below those required as feed for the natural gas power stations. Nine different supply scenarios were developed to look at ways of meeting the need. In terms of cost, fuel switching (allowing gas-fired stations to burn oil) was considered the cheapest, followed by expanded electricity efficiency, then new coal and nuclear power plants, and then renewable power. Expanded LNG facilities were anticipated to be the most expensive.

Natural gas usage in New England stood at 800 Bcf per year in 2005. The region received some 60 tanker loads of LNG, for a total of 158 Bcf, but has the capacity to handle up to 98 tankers per year. At that time it was supplied from Trinidad and Tobago (pdf). Growth was anticipated to be around 1.38% (EIA) or higher. By 2007 natural gas was still providing 29.3% of electrical power; the absolute amount (39,367 GWh) was down slightly from 2006 (39,423 GWh) but up from 2005 (38,583 GWh), and gas provided 40% of New England’s total fuel supply.
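The quoted figures are internally consistent, and worth a quick cross-check (a sketch using only the numbers above):

```python
# Cross-check of the 2005 New England LNG figures quoted above.
annual_demand_bcf = 800          # total NE gas use, Bcf/year
lng_delivered_bcf = 158          # delivered in 60 tanker loads
tankers, max_tankers = 60, 98

lng_share = lng_delivered_bcf / annual_demand_bcf   # ~0.20
per_tanker_bcf = lng_delivered_bcf / tankers        # ~2.6 Bcf per load
max_lng_bcf = per_tanker_bcf * max_tankers          # ~258 Bcf/year headroom

print(f"LNG share of annual demand: {lng_share:.0%}")
print(f"max LNG at 98 tankers/year: {max_lng_bcf:.0f} Bcf")
```

The 158/800 ratio lands on the same ~20% share attributed to the Everett terminal, and full tanker utilization would roughly match its 30% peak share.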

In order to improve LNG supply the Northeast Gateway was proposed, with the ability to offload LNG tankers offshore, and pipe the gas ashore. It was completed in January 2008.

With peak deliveries of up to 800 MMcf/d of gas, Northeast Gateway can deliver about 500 MMcf/d of gas into the New England market during normal operations, or approximately 20 percent of the New England market’s current annual gas consumption.
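That “approximately 20 percent” figure can be checked against the 800 Bcf/year consumption quoted earlier (a rough sketch; 2008 demand would be somewhat higher than the 2005 figure used here):

```python
# Northeast Gateway's sustained deliveries as a share of annual demand.
normal_mmcf_d = 500                   # normal operations, MMcf/d
annual_demand_bcf = 800               # 2005 consumption, Bcf/year

share = normal_mmcf_d * 365 / (annual_demand_bcf * 1000)  # 1 Bcf = 1000 MMcf
print(f"Gateway share of annual demand: {share:.0%}")
```

Against the 2005 baseline this comes to about 23%, consistent with the quote’s “approximately 20 percent” once demand growth is allowed for.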

The facility cost between $350 and $400 million, about half that of an onshore facility, and was installed in around seven months. The downside to the operation, however, comes from the concept around which the facility was designed. Instead of off-loading the LNG as liquid and revaporizing it onshore, the Gateway uses special tankers (ppt) that regasify the fluid on board and deliver the revaporized gas to the pipeline. At present the company has only three such tankers, with more scheduled for delivery by 2010. With this system, however, the overall storage capacity of the network is not greatly increased. This may mean that the worries the ISO saw back in 2004, which arose in part because pipeline delivery volumes were already committed to their full capacity, may not be fully remedied by this additional supply. However, a number of power stations have now been converted, so that 8,600 MW of plant can use dual fuel (pdf), i.e. natural gas or oil; the criticality of being able to deliver energy is thus no longer quite as severe, and price becomes more of an issue. And there are always issues with ships losing power. That is, of course, if there is still LNG available: there are already stories of shortages.

“Globally, gas prices have shot up and it’s not available. For example, an 8,000 mw power plant in Japan is lying idle for want of fuel and they’re desperately looking for gas from anywhere. So it’s going to be a problem to source gas for our power plants too,” said a central power ministry (India) official, who did not want to be named.

I started looking into the consequences of the 2004 scare as a result of reading “Cape Wind”, a well-written story that is easy to read and digest, and that tells the sad tale of a company foolish enough to want to put a wind farm in the waters where the Kennedys sail. At the time the book was written (late 2006) the final decision on the fate of the farm had not been made, but end runs through Congress to effectively shut it down had been derailed. At present, the public hearings that the Minerals Management Service has held have brought significant outcry from both sides. The comment period has been extended until April 21.

The proposed Cape Wind Energy Project would be comprised of 130 wind turbine generators that could generate a maximum electric output of 468 megawatts and an average output of approximately 180 megawatts. The project is proposed to be located on federal submerged lands in Nantucket Sound off the coast of Massachusetts.
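The two output figures in that description imply a capacity factor, which is worth computing explicitly (the per-turbine size and annual energy below follow directly from the quoted numbers):

```python
# Implied capacity factor of the proposed Cape Wind project.
max_mw, avg_mw, turbines = 468, 180, 130

capacity_factor = avg_mw / max_mw      # average over nameplate output
per_turbine_mw = max_mw / turbines     # ~3.6 MW nameplate each
annual_gwh = avg_mw * 8760 / 1000      # average MW x hours per year

print(f"capacity factor: {capacity_factor:.0%}")
print(f"turbine size:    {per_turbine_mw:.1f} MW")
print(f"annual output:   {annual_gwh:.0f} GWh")
```

A capacity factor of about 38% is typical of a good offshore wind site.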

I await the book sequel; it is a comment on how controversial the topic has become that there may well be one.

Green Jobs
Saturday, 22 Mar, 2008 – 9:00

What Matters

Like many of you, I want to “make a difference.” I have felt this way as long as I can remember. After my first child was born, it became almost an obsession to make a better future for the generations that follow. When I see children enduring hardship, I internalize that by imagining my own children in that situation (this is why I avoid the news, as well as any discussions of “Die-Off”). Sometimes I wish I didn’t feel this way as it is often depressing, but this is the way my brain is wired. I strongly feel that we are making choices today that are setting up future generations for just the kind of hardship that troubles me. This, above all else, is what motivates me. And while I may fail to make a difference, I am compelled to try.

A big concern for me is that quality of life for a large segment of the world’s population – never good to begin with – is poised for further deterioration as fossil fuel supplies deplete. Quality of life to me starts with the basics: People have enough food and clean water, they have shelter, they live and work in safe conditions, and they have adequate access to affordable energy. At various stages of my life I have had involvement in projects in all of these areas, but most of my career has been focused on the energy portion – both in providing adequate supplies, and in urging conservation efforts to stretch our supplies.

The affordable energy piece is becoming more challenging, and we need more people working on this issue. Conservation must continue to be central to the solution, but we will still need a variety of energy options. As I transition into my new “green” job, I intend to step up my efforts on the sustainable energy front. There are a number of ways I can do this. First, my new job directly impacts on this. The technology we are engaged in – described briefly in the final section – promises significant environmental and sustainability benefits. But that isn’t the sole contribution I can make. I can also help bring promising sustainable technologies together with highly-motivated and talented people to enhance the odds of success. Up to this point I have done this by calling attention to technologies that I felt were promising, as well as by providing technical advice for some projects on an ad hoc basis.

With this essay, I am attempting to marry talent/passion with need by publicizing vacancies for some specific “green jobs.” I have had a series of conversations over the past year or so with Choren, a renewable diesel company that is now looking to scale up. Google contacted me last week to inform me of some of their vacancies in their new renewable energy efforts. Vinod Khosla has informed me several times that many of the companies he is involved with are looking for talent. And my new company is recruiting as well. I don’t think these jobs will be competing for exactly the same talent pool, because the job locations are geographically diverse. So, if you are looking for a green future and decent job stability (a recent story from Yahoo identified jobs in the energy and environmental sectors as “recession proof”) – here are some opportunities of which I am currently aware.

Choren

I have had a series of discussions over the past year or so with some of the Choren staff, including the president of Choren USA, Dr. David Henson. During the course of these discussions, I formed the opinion that Choren is ideally positioned for long term success in the renewable energy sphere. I think they are focusing on the right technology (biomass-to-liquids) for sustainable liquid fuel production, and they are on the leading edge of that technology. Dr. Henson will be hosting me at Choren’s new BTL plant in Germany in a month or so, and I hope to report on the visit.

Their opportunities are described from their website as follows:

For the expansion to “world”-scale 600 MWth “Sigma” production facilities and the exploration of additional applications of CHOREN’s technologies we are now seeking highly motivated engineering specialists in the areas of Mechanical Engineering, Process/Chemical Engineering and Energy Technology, preferably with long or short-term experience in any of the fields of gasification, Fischer Tropsch Fuel Synthesis and/or in the Petrochemical Industry.

Choren is looking to fill the following positions in Houston:

Project Manager CHOREN USA, Job Description

Senior Process Engineer CHOREN USA, Job Description

Process Engineer CHOREN USA, Job Description

You can learn more information about the job opportunities at Choren by visiting their Employment Opportunities USA page.

Google

I have admired Google for a long time. They seem genuinely motivated by a desire to help humanity. You may also be aware that they have topped CNN Money’s list of 100 Best Companies to Work For for the second year in a row.

Recently, they announced their intent to help power a clean energy revolution. I was aware of, and supportive of, their efforts, and in a different time and place I might have jumped at the opportunity to work for them. They contacted me recently about just that, and I replied that while the timing is not right for me, I would help them publicize their vacancies.

Here is a short description of their vision, and what they are looking for:

Our thinking is that business as usual will not deliver low-cost, clean energy fast enough to avoid potentially catastrophic climate change. We need a clean energy revolution that will deliver breakthrough technologies priced lower than carbon-intensive alternatives such as coal. Google is launching an R&D group to develop electricity from renewable energy sources at a cost less than coal.

We are looking for extraordinarily creative, motivated and talented engineers with significant experience in developing complex engineering designs to join our newly-created renewable energy group. This group is tasked with developing the most cost-effective and scalable forms of renewable energy generation, and these people will play a key role in developing new technologies and systems.

…if you know other outstanding engineers who may be interested, I encourage you to pass along this information as we are hiring for multiple positions. If you prefer that I reach out to them directly, I am more than happy to do so.

Their specific job opportunities at the moment, mostly at their Mountain View, California site:

Renewable Energy Engineer
Head of Renewable Energy Engineering
Director, Green Business Strategy & Operations
Director of Other
Investments Manager, Renewable Energy

They are also asking for people with the following experience:

If you have relevant expertise in other areas beyond these specific positions, please send an email with your resume to energy@google.com . Areas of interest include, but are not limited to:

• regulatory issues
• land acquisition and management
• construction
• energy project development
• mechanical and electrical engineering
• thermodynamics and control systems
• physics and chemistry
• materials science

Khosla Ventures

Vinod Khosla has built quite a renewable energy portfolio. See this PowerPoint presentation for his complete (or at least what’s public) renewable portfolio. Opportunities range from corn ethanol (which I don’t recommend) to cellulosic ethanol (some promising opportunities there) to advanced biofuels, electrical power, and even water desalinization. There are far too many companies to give details on all of the job vacancies, so I will just pick out one of the most interesting (to me), LS9. They describe themselves as the Renewable Petroleum Company™, and have this description on their website:

LS9 DesignerBiofuels™ products are customized to closely resemble petroleum fuels, engineered to be clean, renewable, domestically produced, and cost competitive with crude oil.

LS9 is the market leader for hydrocarbon biofuels and is rapidly commercializing and scaling up DesignerBiofuels™ products to meet market demands, including construction of a pilot facility leading to commercial availability. While initially focusing on fuels, LS9 will also develop sustainable industrial chemicals for specialty applications.

They are looking for the following for their South San Francisco location:

Current openings at LS9 are listed below. Please submit your resume stating qualifications and relevant experience to hr@ls9.com and include the job title in the subject line. We look forward to hearing from you.

Bioprocess/Engineering

Director, Bioprocess Development
Scientist, Fermentation
Scientist, Fermentation
Associate Scientist, Fermentation
Research Associate/Senior Research Associate, Fermentation
Downstream Recovery Scientist

Chemistry/Biochemistry

Biochemist / Bio-organic Chemist Scientist
Research Associate/Senior Research Associate, Biochemistry

Instrumentation

Automation Laboratory Specialist

Metabolic Engineering

Scientist, Metabolic Engineering
Associate Scientist, Microbiology
Senior Research Associate, Microbiology

Corporate Development

Corporate Planning Analyst

What LS9 is attempting is Holy Grail stuff, but it should be technically feasible. It won’t be easy, however, and it’s going to take some very talented people.

Don’t forget that this is only one of the Khosla Ventures’ companies. There are numerous job opportunities there if you dig a little.

Accsys Technologies

As I have mentioned previously, I left the oil industry on March 1, 2008 to become the Engineering Director (note the TOD plug in my profile) for Accsys Technologies. While we are not producing energy as was the case with the previous companies I described, we are saving energy, sequestering carbon emissions, and attacking the problem of rainforest destruction. Here is a brief summary of what appealed to me about the company and my desire to make a difference:

Growing concerns about the destruction of tropical rainforests, a declining world stock of high quality timber and increasingly restrictive government regulations regarding the use of wood treated using toxic chemicals have created an exceptional market opportunity for the Company. Accsys believes that its technology will transform the use of wood in existing applications where durability and dimensional stability are valued, both halting the decline in the use of wood in outdoor applications and substituting plastics and metals.

Wood acetylation is a process which increases the amount of ‘acetyl’ molecules in wood, thereby changing its physical properties. The process protects wood from rot by making it “inedible” to most micro-organisms and insects, without – unlike conventional treatments – making it toxic.

I think you can see why that might appeal to me – this technology enables a sustainable replacement for tropical hardwoods, and can replace plastics and metals – which are energy-intensive to produce – in some applications. What that means is that we have one of the best – if not the best – carbon sequestration technologies in existence. Carbon dioxide is pulled from the atmosphere as the (fast-growing) trees develop, and is then tied up for a much longer period of time because the durability of the wood has been greatly increased. Throw in the fact that you can use the wood in some applications currently dominated by metals (window frames, for instance), and you have sequestered even more CO2 and avoided the CO2 emissions from producing the metals.

We are filling a wide variety of positions at our plant in Arnhem, in the Netherlands (where I am presently working) and in Dallas (where I will be based). If you are a citizen of an EU country, I believe you are eligible to work in the Netherlands. You can see the current listing of jobs at our Titan Wood site (Titan Wood is a subsidiary of Accsys).

We are also filling jobs in our new Dallas office that are global in nature. For Dallas we are looking for a Global Process Improvement Manager (reports to me), a Global Procurement Manager (reports to the CEO), and a Panel Products Manager (reports to the Panel Products Director). These positions require travel (got to break a few eggs to make a cake) to places like the Netherlands, New Zealand, Chile, and China (where we are building a large facility in Nanjing). Required qualifications include an engineering or chemistry degree and 7-10 years of relevant experience, with an MBA preferred. Further, I want my Global Process Improvement Manager to share my passion for making the world a better place.

For now, you may send a cover letter and your resume or CV to JOBSUSA “at” accoya “dot” info (edited to slow the spambots) for positions in the U.S., or JOBSEurope “at” accoya “dot” info for positions in Europe. (Accoya™ is what we call the wood we are producing). You may want to indicate that you are responding to this essay, and then the resume may be circulated to me.

Conclusion

While I am not going to get in the habit of using my writing as a platform for promoting my new company, it is directly topical to what I like to write about. I plan to do one post in the future about the technology, particularly on the potential for carbon sequestration. However, most of my posts will be as they have been in the past: Broad coverage of energy, sustainability, and environmental responsibility. I do plan to focus more on “problem solving”, and this post was one aspect of that. It is an attempt to bring together talent and passion with a critical need, and it also will hopefully provide needed job stability in a fragile economy.

I am really interested in writing more about promising technologies, especially those that haven’t received much attention, but I first have to figure out a way to manage this. I tend to get about 19 bad or unworkable ideas e-mailed to me for every 1 that shows promise. I can’t afford the time at present to work my way through that sort of volume (and some of the proposals I see are very extensive), so I will continue to focus for now on those that are already on the radar.

The Problem of Growth
Friday, 21 Mar, 2008 – 10:00

Stuart Staniford proposed a “way forward” for humanity in his article Powering Civilization to 2050. This article proposes an alternative vision: instead of trying to create continual technological stop-gaps to meet the demands of growth, we must address the problem of growth head on. Infinite growth is impossible in a finite world—a great deal of economic growth may be possible without growth in resource consumption, but ultimately the notion of perpetual growth is predicated on a perpetual increase in resource consumption. This growth in resource consumption causes problems: it brings civilization into direct conflict with our environmental support system. Growth is also one way of improving the standard of living for humanity, by creating more economic product, more material consumption per human. Growth, however, produces very unevenly distributed benefits, and there is little convincing evidence that the poorest, most abused 10% of humanity is actually better off today than the poorest, most abused 10% of past eras. Furthermore, if you accept my statement above that infinite growth is impossible in a finite world, then employing growth today to “solve” our immediate problems incurs the significant moral hazard of pushing the problem—perhaps the greatly exacerbated problem—of addressing growth itself onto future generations.

With that in mind, my intent here is to propose one possible means for humanity to directly address the problem of growth itself. I am attempting to take what I see as an inherently pragmatic approach—one that does not rely on the universal cooperation of humanity, nor on the assumption of yet-to-be-developed technologies. My approach to the problem of growth is to stop trying to address its symptoms—overpopulation, pollution, global warming, peak oil—and attempt instead to identify and address the underlying source of the problem.

That source is the hierarchical structure of human civilization. Hierarchy demands growth. Growth is a result of dependency. The solution to the problem of growth, then, is the elimination of dependency. This essay will elaborate on each of those points, and then propose a means to effectively eliminate dependency by creating minimally self-sufficient but interconnected networks that I call Rhizome. It is my hope that this discussion, while not directly concerning crude oil reserves, will be highly relevant within the context of Peak Oil and Peak Energy. Infinite growth requires, eventually, infinite energy. Assume that we develop a perfect fusion generator, or that we cover the entire surface of the Earth with 100% efficient solar panels. None of this actually solves the problem of growth—it just shifts the burden of dealing with that problem onto our grandchildren, or perhaps onto people 100 generations from now. It’s easy to take the self-centered perspective that such burden-shifting is acceptable, but I find it fundamentally morally unacceptable. This essay will begin and end with that understanding of morality, and attempt to find a way forward for humanity that balances the quality-of-life demands of both present and future generations. This essay isn’t about how to find more oil, how to recover more oil, or how to use energy more efficiently so that we can keep on growing. It is an opinion piece, not a data-driven scientific paper. It is about living well, now and in the future, individually and collectively, without growth.

I. Hierarchy Must Grow, and is Therefore Unsustainable

Why must hierarchy continually grow and intensify? Within the context of hierarchy in human civilization, there seem to be three separate categories of forces that drive growth. I will address them roughly in the order that they arose in the development of human civilization:

Human Psychology Drives Growth

Humans fear uncertainty, and this uncertainty drives growth. Human population growth is partially a result of the desire to ensure enough children survive to care for aging parents. Fear also drives humans to accept trade-offs in return for security.

One of the seeds of hierarchy is the desire to join a redistribution network to help people through bad times—crop failures, drought, etc. Chaco Canyon, in New Mexico, is a prime anthropological example of this effect. Most anthropologists agree that the Chaco Canyon dwellings served as a hub for a food redistribution system among peripheral settlements. These peripheral settlements—mostly maize and bean growing villages—would cede surplus food to Chaco. Drought periodically ravaged either the region North or South of Chaco, but rarely both simultaneously. The central site would collect and store surplus, and, when necessary, distribute this to peripheral settlements experiencing crop failures as a result of drought. The result of this system was that the populations in peripheral settlements were able to grow beyond what their small, runoff-irrigated fields would reliably sustain. The periodic droughts no longer checked population due to membership in the redistributive system. The peripheral settlements paid a steep price for this security—the majority of the surplus wasn’t redistributed, but rather supported an aristocratic priest class in Chaco Canyon—but human fear and desire for security made this trade-off possible.

Still today, our fear of uncertainty and desire for stability and security create an imperative for growth. This is equally true of Indian peasants having seven children to ensure their retirement care as it is of rich Western European nations offering incentives for couples to have children in order to maintain their Ponzi-scheme retirement systems. Fear also extends to feelings of family or racial identity, as people all over the world fear being out-bred by rival or neighboring families, tribes, or ethnic groups. This phenomenon is as present in the tribal societies of Africa, where rival ethnic groups understand the need to compete on the level of population, as it is in America, where there is an undercurrent of fear among white Americans that population growth rates are higher among Hispanic Americans.

The Structure of Human Society Selects for Growth

The psychological impetus toward growth results in what I consider the greatest growth-creating mechanism in human history: the peer-polity system. This phenomenon is scale-free and remains as true today as it did when hunter-gatherer tribes first transitioned to agricultural "big-man" groups. Anthropologically, big-man groups are often considered the first step toward hierarchal organization. When one farmer was able to grow more than his neighbors, he would have surplus to distribute, and these gifts created social obligations. Farmers would compete to grow the greatest surplus, because this surplus equated to social standing, wives, and power. Human leisure time, quite abundant in most ethnological accountings of remnant hunter-gatherer societies, was lost in favor of laboring to produce greater surplus. The result of larger surpluses was that there was more food to support a greater population, and the labors of this greater population would, in turn, produce more surplus. The fact that surplus production equates to power, across all scales, is the single greatest driver of growth in hierarchy.

In a peer-polity system, where many separate groups interact, it was not possible to opt out of the competition to create more surplus. Any group that did not create surplus—and therefore grow—would be out-competed by groups that did. Surplus equated to population, ability to occupy and use land, and military might. Larger, stronger groups would seize the land, population, and resources of groups that failed in the unending competition for surplus. Within the peer-polity system, there is a form of natural selection in favor of those groups that produce surplus and grow most effectively. This process selects for growth—more specifically, it selects for the institutionalization of growth. The result is the growth imperative.

The Development of Modern Economics & Finance Requires Growth

This civilizational selection for growth manifests in many ways, but most recently it resulted in the rise of the modern financial system. As political entities became more conscious of this growth imperative, and of their competition with other entities, they began to consciously build institutions to enhance their ability to grow. The earliest, and least intentional, example is that of economic specialization and centralization. Since before the articulation of these principles by Adam Smith, it was understood that specialization was more efficient—when measured in terms of growth—than artisanal craftsmanship, and that centralized production that leveraged economy of place better facilitated growth than did distributed production. It was not enough merely to specialize "a little," because the yardstick was not growth per se, but growth in comparison to the growth of competitors. It was necessary to specialize and centralize ever more than competing polities in order to survive. As with previous systems of growth, the agricultural and industrial revolutions were self-reinforcing as nations competed in terms of the size of the infantry armies they could field, the amount of steel for battleships and cannon they could produce, etc. It wasn't possible to reverse course—while it may have been possible for the land area of England, for example, to support its population via either centralized or decentralized agriculture, only centralized agriculture freed a large enough portion of the population to manufacture export goods and military materiel, and to serve in the armed forces.

Similarly, the expansion of credit accelerated the rate of growth—it was no longer necessary to save first and buy later once first home loans, then car loans, then consumer credit cards became ever more prevalent, all accelerating at ever-faster rates thanks to the wizardry of complex credit derivatives. This was again a self-supporting cycle: while it is theoretically possible to revert from a buy-now-pay-later system to a save-then-buy system, the transition would require a significant period of vastly reduced spending—something that would crush today's highly leveraged economies. Not only is it necessary to maintain our current credit structure, but it is necessary to continually expand our ability to consume now and pay later—just as in the peer-polity conflicts between stone-age tribes, credit providers race to provide more consumption for less buck in an effort to compete for market share and to create shareholder return. Corporate entities, while existing at least as early as Renaissance Venice, are yet another example of structural bias toward growth: corporate finance is based on attracting investment by promising greater return for shareholder risk than competing corporations, resulting in a structural drive toward the singular goal of growth. And modern systems of quarterly reporting and 24-hour news cycles only exacerbate the already short-term risk horizons of such enterprises.

Why This is Important

This has been a whirlwind tour of the structural bias in hierarchy toward growth, but it has also, by necessity, been a superficial analysis. Books, entire libraries, could be filled with the analysis of this topic. But despite its scope, it is remarkable that such a simple concept underlies the necessity of growth: within hierarchy, surplus production equates to power, requiring competing entities across all scales to produce ever more surplus—to grow—in order to compete, survive, and prosper. This has, quite literally, Earth-shaking ramifications.

We live on a finite planet, and it seems likely that we are nearing the limits of the Earth's ability to support ongoing growth. Even if this limit is still decades or centuries away, there is serious moral hazard in the continuation of growth on a finite planet, as it serves merely to push the problem onto our children or grandchildren. Growth cannot continue infinitely on a finite planet. This must seem obvious to many people, but I emphasize the point because we tend to overlook or ignore its significance: the basis of our civilization is fundamentally unsustainable. Our civilization seems to have a knack for pushing the envelope, for finding stop-gap measures to push growth beyond a sustainable level. This is also problematic because the further we are able to inflate this bubble beyond a level that is sustainable indefinitely, the farther we must ultimately fall to return to a sustainable world. This is Civilization's sunk cost: there is serious doubt that our planet can sustain 6+ billion people over the long term, but by drawing a line in the sand and declaring that any solution resulting in the death of millions or billions to return to a sustainable level is fundamentally impermissible, we merely increase the number that must ultimately die off. Furthermore, while it is theoretically possible to reduce population, as well as other measures of impact on our planet, in a gradual and non-dramatic way (i.e. no die-off), the window of opportunity to choose that route is closing. We don't know how fast—but that uncertainty makes this a far more difficult risk management problem (and challenge to political will) than knowing that we have precisely 10, 100, or 1000 years.

This is our ultimate challenge: solve the problem of growth or face the consequences. Growth isn't a problem that can be solved through a new technology—all that does is postpone the inevitable reckoning with the limits of a finite world. Fusion, biofuels, super-efficient solar panels, genetic engineering, nano-tech—these cannot, by definition, solve the problem. Growth is not merely a population problem, and no perfect birth control scheme can fix it, because peer polities will only succeed in reducing population (without being eliminated by those that outbreed them) if they can continue to compete by growing their overall power to consume, produce, and control. All these "solutions" can do is delay and exacerbate the Problem of Growth. Growth isn't a possible problem—it's a guaranteed crisis; we just don't know the exact time-frame.

Is there a solution to the Problem of Growth? Can global governance lead to an agreement to abate or otherwise manage growth effectively? It's theoretically possible, but I see it as about as likely as solving war by getting everyone to agree not to fight. Plus, as the constitutional validity and effective power of the Nation-State declines, even if Nation-States manage to all agree to abate growth, they will fail because they are engaged in a very real peer-polity competition with non-state groups that will only use this competitive weakness as a means to establish a more dominant position—and continue growth. Others would argue that collapse is a solution (a topic I have explored in the past), but I now define that more as a resolution. Collapse does nothing to address the causes of Growth, and only results in a set-back for the growth-system. Exhaustion of energy reserves or environmental capacity could hobble the ability of civilization to grow for long periods of time—perhaps even on a geological time scale—but we have no way of knowing for sure that a post-crash civilization will not be just as ragingly growth-oriented as today's civilization, replete with the same or greater negative effects on the environment and the human spirit. Similarly, collapse that leads to extinction is a resolution, not a solution, when viewed from a human perspective.

A solution, at least as I define it, must allow humans to control the negative effects of growth on our environment and our ability to fulfill our ontogeny. The remaining essays in this series will attempt to identify the root cause of the problem of growth, and to propose concrete and implementable solutions that satisfy that definition.

II. Hierarchy is the Result of Dependency

The first section in this essay identified the reason why hierarchal human structures must grow: surplus production equals power, and entities across all scales must compete for this power—must grow—or they will be pushed aside by those who do. But why can’t human settlements simply exist as stable, sustainable entities? Why can’t a single family or a community simply decide to opt out of this system? The answer: because they are dependent on others to meet their basic needs, and must participate in the broader, hierarchal system in order to fulfill these needs. Dependency, then, is the lifeblood of hierarchy and growth.

Dependency Requires Participation on the Market’s Terms

Take, for example, a modern American suburbanite. Her list of dependencies is virtually unending: food, fuel for heat, fuel for transport, electricity, clothing, medical care, just to name a few. She has no meaningful level of self-sufficiency—without participation in hierarchy she would not survive. This relationship is hierarchal because she is subservient to the broader economy—she may have negotiating power with regard to what job she performs at what compensation for what firm, but she does not have negotiating power on the fundamental issue of participating in the market economy on its terms. She must participate to gain access to her fundamental needs—she is dependent (consider also Robert Anton Wilson's notion of money in civilization as "bio-survival tickets").

Compare this to the fundamentally similar situation of a family in Lahore, Pakistan, or a farmer in rural Colombia. While their superficial existence and set of material possessions may be strikingly different, they share this common dependency. The Colombian farmer is dependent on a seed company and on revenue from his harvest to fuel his tractor, heat his home, and buy the 90% of his family's diet that he does not grow. The family in Lahore is dependent on the sales from their clothing store to purchase food—they cannot grow it themselves as they live in an apartment in a dense urban environment. They are dependent on participation in hierarchy—they cannot participate on their own terms and select for a stable and leisurely life. The market, as a result of competition between entities at all levels, functions to minimize input costs—if corn can be grown more cheaply in America and shipped to Colombia than it can be grown in Colombia, by a sufficient margin, then that will eventually happen. This requires the Colombian farmer to compete to make his corn as cheap as possible—i.e. to work as long and as hard as necessary to maximize his harvest. While on his own terms he might wish to work only 20 hours per week, he may have to work 50, 60, or more hours at hard labor to make enough money off competitively priced corn to be able to meet the basic needs of his family in return. He is in competition with his neighbors and competing entities around the world to minimize the input cost of his own efforts—a poor proposition, and one that is forced upon him because he participates on the market's terms, all a result of his dependency on the market to meet his basic needs. The situation of the family of shopkeepers in Pakistan or the suburban knowledge-worker in America is fundamentally the same, even if it may vary on the surface.

The Blurring of Needs and Wants

Why not just drop out? It isn't that tough to survive as a hermit—gathering acorns, growing potatoes on a small plot of forest, or finding some other means of removing oneself from this dependency on the market. To begin with, "dropping out" and becoming self-sufficient is not quite as easy as it sounds, and just as importantly, it would become nearly impossible if any significant portion of the population chose that route. But more fundamentally, humans don't want to drop out of participation in the market because they desire the enhanced consumption that is available—or at least exists in some far-off promised land called "America" (a fantasy even in the mind of most "Americans")—only through such participation. It may be possible to eat worms and acorns and sleep in the bushes, but this would be even more unacceptable than schlepping to work 40+ hours a week. Most people cannot envision, let alone implement, a system that maintains an acceptable "standard of living" without participation in the system, and all but the very lucky or brave few can't figure out how to participate in that system without being dependent on it.

There is certainly a blurring of "needs" and "wants" in this dependency. Humans don't "need" very much to remain alive, but a certain amount of discretionary consumption tends to increase the effectiveness of the human machine. From the perspective of the market, this is desirable, but it is also an input cost that must be minimized. This is the fundamental problem of participating in the market, the economy, the "system" on its terms: the individual becomes nothing more than an input cost to be minimized in the competition between entities at a higher organizational level. John Robb recently explored this exact issue, but from the perspective of the local community—the implications are quite similar.

In an era of globalization, increased communications connectivity, and (despite the rising costs of energy) an ever increasing global trade network, this marginalization is accelerating at breakneck speed. Is your job something that can be done online from India? How about in person by an illegal immigrant? Because there are people with doctorates willing to work for ¼ of what you make if you're in a knowledge field, and people with a high tolerance for mind-numbing, back-breaking labor willing to work hard for $5/hour or less right next door (or for $2/day overseas). If this doesn't apply to you, you're one of the lucky few (and, if I might add, you should be working to get yourself into just such a position). Maybe they don't know how to outsource your function yet, but trust me, someone is working on it. Participation in the market on its terms means that the market is trying to find a way to make your function cheaper.

This dependency on participation in the hierarchal system fuels the growth of hierarchy. Even if there is a severe depression or collapse, hierarchy will survive the demand destruction because it is necessary to produce and redistribute necessities to people who don’t or can’t produce them themselves. It may be smaller or less complex, but as long as people depend on participation in an outside system—whether that is a local strong man or an international commodities exchange—to gain access to basic necessities, the organization of that system will be hierarchal. And, as a hierarchy, that system will compete with other hierarchies to gain surplus, to grow, and to minimize the cost of human input.

Dependency on a Security Provider

One of the most significant areas in which people are dependent on hierarchal systems is the provision of security. This seems to be especially true in times of volatility and change. While it may be possible to set up a fairly self-sufficient farm or commune and provide for one's basic needs, this sufficiency must still be defended. If everyone doesn't have access to the necessities that you produce for yourself, then there is potential for conflict. This could range from people willing to use violence to gain access to your food or water supply, to governments or local strong-men expecting your participation in their tax scheme or ideological struggle. Ultimately, dependence on hierarchy is dependence on the blanket of security it provides, no matter how coercive or disagreeable it may be, and even if this security takes the form of "participation" in exchange for protection from the security provider itself.

Why this is Important

Virtually everyone is dependent on participation in hierarchal systems of one type or another to meet their basic needs. This dependency forces participation, and drives the perpetual growth—and therefore the ultimate unsustainability—of hierarchy. If growth is the problem, then it is necessary to identify the root cause of that problem so that we may treat the problem itself, and not merely a set of symptoms. In our analysis, we have seen in Part 1 that hierarchies must grow, and now in this installment that human dependency is what sustains these hierarchies. Dependency, then, is the root cause of the problem of growth.

III. Building an Alternative to Hierarchy: Rhizome

So far in this essay, I have argued that competition between hierarchal entities selects for those entities that most efficiently grow and intensify, resulting in a requirement for perpetual growth, and that ongoing human dependency on participation in this system is the lifeblood of this process. At the most basic level, then, an alternative to hierarchy and a solution to the problem of growth must address this issue of dependency. My proposed alternative—what I call “rhizome”—begins at exactly this point.

Achieving Minimal Self-Sufficiency

The first principle of rhizome is that individual nodes—whether family units or communities of varying sizes—must be minimally self-sufficient. "Minimally self-sufficient" means the ability to consistently and reliably provide for anything so important that you would be willing to subject yourself to the terms of the hierarchal system in order to get it: food, shelter, heat, medical care, entertainment, etc. It doesn't mean zero trade, asceticism, or "isolationism," but rather the ability to engage in trade and interaction with the broader system when, and only when, it is advantageous to do so. The corollary here is that a minimally self-sufficient system should also produce some surplus that can be exchanged—but only to the extent that is found to be advantageous. A minimally self-sufficient family may produce enough of its own food to get by if need be, its own heat and shelter, and enough of some surplus—let's say olive oil—to exchange for additional, quality-of-life-enhancing consumables as it finds advantageous. This principle of minimal self-sufficiency empowers the individual family or community, while allowing the continuation of trade, value-added exchange, and full interaction with the outside world.

It should be immediately apparent that “dependency” is the result of one’s definition of “need.” Total self-sufficiency in the eyes of a Zimbabwean peasant, even outright luxury, may fall far short of what the average American perceives as “needing” to survive. As a result, an “objectively” self-sufficient American may sell himself into hierarchy to acquire what is perceived as a “need.” To this end, what I have called “elegant simplicity” is a critical component of the creation of “minimal self-sufficiency.” This is the notion that through conscious design we can meet and exceed our “objective” needs (I define these as largely experiential, not material, and set by our genetic ontogeny, not the global consumer-marketing system) at a level of material consumption that can realistically be provided for on a self-sufficient basis. I’ve written about this topic on several previous occasions (1 2 3 4 5).

Leveraging “Small-Worlds” Networks

How should rhizome nodes interact? Most modern information processing is handled by large, hierarchal systems that, while capable of digesting and processing huge amounts of information, incur great inefficiencies in the process. The basic theoretical model for rhizome communication is the fair or festival. This model can be repeated locally and frequently—in the form of dinner parties, barbecues, and reading groups—and can also effect the establishment and continuation of critical weak, dynamic connections in the form of seasonal fairs, holiday festivals, etc. This is known as the "small-worlds" theory of networks. It tells us that, while many very close connections may be powerful, the key to flat-topography (i.e. non-hierarchal) communications is a broad and diverse network of distant but weak connections. For example, if you know all of your neighbors well, but no one else, you will be relatively isolated in the context of information awareness. However, if you also have weak contact with a student in India, a farmer across the country, and your cousin in London, you will have access to the very different sets of information immediately available to those people. These weak connections greatly expand information awareness and leverage a much more powerful information processing network—while none of your neighbors may have experienced a specific event or solved a particular problem before, there is a much greater chance that someone in your diverse and distant "weak network" has.

In high-tech terms, the blogosphere is exactly such a network. While many blogs may focus primarily on cat pictures, there is tremendous potential to use this network as a distributed and non-hierarchal problem solving, information collection, and processing system. In a low-tech, or vastly lower energy world, the periodic fair or festival performs the same function.
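For readers who want to see the small-worlds effect concretely, here is a minimal simulation sketch. It is purely illustrative (the node counts and tie counts are arbitrary choices of mine, not anything from network science canon): it builds a ring of nodes that know only their near neighbors, then adds a handful of random long-range "weak ties" and measures how far information must travel on average.

```python
import random

def ring_lattice(n, k):
    """Ring of n nodes; each connects to its k nearest neighbors on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length, via breadth-first search from every node."""
    total, count = 0, 0
    for src in adj:
        dist = {src: 0}
        frontier = [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

random.seed(42)
n = 200
local = ring_lattice(n, 3)        # strong local ties only: you know your neighbors
rewired = ring_lattice(n, 3)
for _ in range(20):               # add a few random long-range "weak ties"
    a, b = random.sample(range(n), 2)
    rewired[a].add(b)
    rewired[b].add(a)

print(avg_path_length(local))     # long chains: information travels slowly
print(avg_path_length(rewired))   # a handful of weak ties shrinks the network
```

The point of the exercise is that the second number is a fraction of the first: a community needs surprisingly few distant acquaintances before "someone in the network has seen this problem before" becomes the rule rather than the exception.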

Building Rhizome Institutions

The final aspect of the theory of rhizome is the need to create rhizome-creating and rhizome-strengthening institutions. One of these is the ability of rhizome to defend itself. Developments in fourth-generation warfare suggest that, now more than ever, it is realistic for a small group or network to effectively challenge the military forces of hierarchy. However, it is not my intent here to delve into a plan for rhizome military defense—I have explored that topic elsewhere, and strongly recommend John Robb's blog and book "Brave New War" for more on this topic.
One institution that I do wish to explore here is the notion of anthropological self-awareness. It is important that every participant node in rhizome has an understanding of the theoretical foundation of rhizome, and of the workings of anthropological systems in general. Without this knowledge, it is very likely that participants will fail to realize the pitfalls of dependency, resulting in a quick slide back to hierarchy. I like to analogize anthropological self-awareness to the characters in the movie "Scream," who were aware of the cliché rules that govern horror movies while actually being in a horror movie. When individual participants understand the rationale behind concepts like minimal self-sufficiency and "small-worlds" network theory, they are far more likely to succeed in consistently turning theory into practice.

Additionally, it is important to recognize the cultural programming that hierarchal systems provide, and to consciously reject and replace parts of it with myth, taboo, and morality that support rhizome and discourage hierarchy. Rules are inherently hierarchal—they must be enforced by a superior power, and are not appropriate for governing rhizome. However, normative standards—social norms, taboos, and values—are effective means of coordinating rhizome without resorting to hierarchy. For example, within the context of anthropological self-awareness, it would be considered "wrong" or "taboo" to have slaves, to be a lord of the manor, or to "own" more property than you can reasonably put to sustainable use. This wouldn't be encoded in a set of laws and enforced by a ruling police power, but would rather exist as the normative standard, compliance with which is the prerequisite for full participation in the network.

Finally, institutions should devolve power rather than accrete hierarchy. One example of this is the Jubilee system—rather than allowing debt, or property beyond what an individual can use, to accumulate and pass on to following generations (a system that inevitably leads to class divisions and a de facto aristocracy), some ancient cultures would periodically absolve all debt and start fresh, or redistribute land in a one-family-one-farm manner. These specific examples may not apply well to varying circumstances, but the general principle applies: cultural institutions should reinforce decentralization, independence, and rhizome, rather than centralization, dependency, and hierarchy.

Is This Setting the Bar Too High for All?

I’ll be the first to admit that this is a tall order. While the current system—massive, interconnected, and nested hierarchies and exchange systems—is anything but simple, its success is not dependent on every participant comprehending how the system works. While rhizome doesn’t require completely omniscient knowledge by all participants, the danger of hierarchy lurks in excessive specialization in the knowledge and rationale supporting rhizome—dependency on a select few to comprehend and operate the system is just that: dependency. Is it realistic to expect people to, en masse, understand, adopt, and consistently implement these principles? Yes.

I have no delusions that this is some perfect system that can be spread by airdropped pamphlet and then, one night, a switch is flipped and "rhizome" is the order of the day. Rather, I see this as the conceptual framework for the gradual, incremental, and distributed integration of these ideas into the customized plans of individuals and communities preparing for the future. I have suggested in the past that rhizome should operate on what Antonio Negri has called the "diagonal"—that is, in parallel but out of phase with the existing, hierarchal system. There may also be lessons to be incorporated from Hakim Bey's notions of the Temporary Autonomous Zone and the Permanent Autonomous Zone—that flying under the radar of hierarchy may be a necessary expedient. Ultimately, this will likely never be a system that is fully adopted by society as a whole—I tend to envision it as analogous, in some ways, to the network of monasteries that retained classical knowledge in Western Europe through the dark centuries after the fall of the Roman Empire. In a low-energy future, it may be enough to have a small rhizome network operating in parallel to, but separated from, the remnants of modern civilization. Whether we experience a fast crash, a slow collapse, the rise of a neo-feudal/neo-fascist system, or something else, an extant rhizome network may act as a check on the ability of that system to exploit and marginalize the individual. If rhizome is too successful, too threatening to that system, it may be imperiled; but if it is a "competitor" in the sense that it sets a floor for how much hierarchal systems can abuse humanity, if it provides a viable alternative model, that may be enough to check hierarchy and achieve sustainability and human fulfillment. And, if this is all no more than wishful thinking, it may provide a refuge while Rome burns.

IV. Implementing Rhizome at the Personal Level

Rhizome begins at the personal level, with a conscious attempt to understand anthropological processes, to build minimal self-sufficiency, and to engage in “small-worlds” networks. This installment will outline my ideas for implementing this theory at the personal level in an incremental and practicable way. This is by no means intended to be an exhaustive list of ideas, but rather a starting point for discussion:

Water

In the 21st Century, I think it will become clear that water is our most critical resource. We'll move past our reliance on oil and fossil fuels—more by necessity than by choice, resorting to dramatically lower consumption of localized energy—but we can't move beyond our need for water. There is no substitute, so efficiency of use and efficacy of collection are our only options. In some parts of the world, water is not a pressing concern today. However, due to the fundamental and non-substitutable need for water, creating a consistent and resilient water supply should be a top priority everywhere. Climate change, or even just periodic extreme drought such as has recently hit the Atlanta area, may suddenly endanger water supplies that today may be considered a "sure thing." How does the individual do this? I think that four elements are crucial: efficient use, resilient collection systems, purification, and sufficient storage.

Efficient use is the best way to maximize any available water supply, and the means to achieve this are varied: composting (no-flush) toilets, low-flow shower heads, mulching in the garden, etc. Greywater systems (also spelled "graywater"; various spellings seem popular, so search on both) that reuse domestic wastewater in the garden are another critical way to improve efficiency.

Resilient collection systems are also critical. Rainwater harvesting is the best way to meet individual minimal self-sufficiency—dependence on a shared aquifer, on a municipal supply system, or on a riparian source makes your water supply dependent on the actions of others. Rainwater falling on your property is not (at least arguably not) dependent on others, and it can provide enough water to meet minimal needs of a house and garden in even the most parched regions with sufficient planning and storage. There are many excellent resources on rainwater harvesting, but I think Brad Lancaster's series is the best—buy it, read it, and implement his ideas.

While dirty water may be fine for gardens, water purification may be necessary for drinking. Even if an existing water supply doesn’t require purification, the knowledge and ability to purify enough water for personal use with a solar still or via some other method enhances resiliency in the face of unforeseen events.

Storage is also critical. Rain, unfortunately, does not fall continuously—it comes in very erratic and unpredictable doses. Conventional wisdom would have said that long-term storage wasn't necessary in the Atlanta area because rain falls so regularly all year round that storage of only a few months' supply would suffice. Recent events proved this wrong. Other areas, such as Arizona, depend on short, annual monsoon seasons for the vast majority of their rain. Here, storage of at least one year's water supply is a threshold for self-sufficiency, and more is desirable. Significant droughts and erratic rainfall mean the more storage the better—if you don't have enough storage to deal with a drought that halves rainfall for two straight years, then you are forced back to dependency at exactly the worst time, when everyone else is also facing scarcity. Where to store water? The options here are also varied—cisterns are an obvious choice for drinking water, as are ponds where they are a realistic option, but storage in the ground via swales and mulch is a key part of ensuring the water supply to a garden.
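To make the collection-and-storage question concrete, a back-of-envelope calculation helps. The sketch below uses the standard approximation that one inch of rain on one square foot of catchment yields about 0.623 gallons; the roof size, rainfall, collection efficiency, and household usage figures are all hypothetical assumptions I chose for illustration, not recommendations.

```python
# Rough sizing of rainwater harvesting and storage.
# All input figures are illustrative assumptions, not recommendations.

GALLONS_PER_SQFT_PER_INCH = 0.623   # ~0.623 gal per sq ft per inch of rain

def annual_harvest(roof_sqft, rainfall_in, efficiency=0.85):
    """Gallons collectable per year from a catchment surface."""
    return roof_sqft * rainfall_in * GALLONS_PER_SQFT_PER_INCH * efficiency

def months_of_storage(tank_gal, daily_use_gal):
    """How long a full tank lasts with no new rain (30.4 days/month)."""
    return tank_gal / (daily_use_gal * 30.4)

# Example: a 1,500 sq ft roof in a 12 in/year (arid) climate,
# and a frugal household using 50 gal/day.
harvest = annual_harvest(1500, 12)   # gallons collectable per year
demand = 50 * 365                    # gallons used per year
print(round(harvest), round(demand))
print(round(months_of_storage(10000, 50), 1))
```

Run with these numbers, the harvest falls well short of the demand, which is exactly the point: in a dry climate, either the catchment grows, the usage shrinks, or self-sufficiency fails, and the tank must be sized to bridge the long gaps between rains.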

Food

If you have enough water and land, it should be possible to grow enough food to provide for minimal self-sufficiency. While many people consider this both unrealistic and extreme, I think it is neither. Even staunchly “establishment” thinkers such as the former chief of Global Strategy for Morgan Stanley advise exactly this path in light of the uncertainty facing humanity. There are several excellent approaches to creating individual food self-sufficiency: Permaculture (see Bill Mollison’s “Permaculture: A Designer’s Manual”), Masanobu Fukuoka’s “Natural Way of Farming” (see book of the same name), Hart’s “Food Forests,” and John Jeavons’ “Biointensive Method” (see “How to Grow More Vegetables”). Some combination and modification of these ideas will work in your circumstances. It is possible to grow enough calories to meet an individual’s requirements in only a few thousand square feet of raised beds—a possibility on even smaller suburban lots, and I have written about the ability to provide a culinarily satisfying diet on as little as 1/3 acre per person.

An additional consideration here is the need to make food supplies resilient in the face of unknown events. I have written about exactly this topic in "Creating Resiliency in Horticulture", which advises hedging the failure of one type of food production with others that are unlikely to fail simultaneously—e.g. balance vegetable gardens with tree-crop production, mix animal production with the availability of reserve rangeland, or include a reserve of land for gathering wild foods. In Crete, after World War II, while massive starvation was ravaging Greece, the locals reverted to harvesting nutritious greens from surrounding forests to survive. The right mix to achieve food resiliency will vary everywhere—the key is to consciously consider and address the issue for your situation.

Shelter, Heating, & Cooling

Shelter should be designed to reduce or eliminate outside energy inputs for heating and cooling. This is possible even in the most extreme climates. Shelter should also be designed to eliminate reliance on building or maintenance materials that can’t be provided in a local and sustainable fashion. I realize that this is a challenge—but our architectural choices speak just as loudly about our real lifestyle as our food choices. Often, studying the architectural choices of pre-industrial people living in your region, or in a climatically similar region, provides great insight into locally appropriate architectural approaches. Passive solar heating and cooling is possible, with the right design, in virtually any climate—something that I have written about elsewhere.

Defense

I’m not going to advocate that individuals set up their own private, defensible bunker stocked with long rifles, claymore mines, and cases of ammunition. If that’s your thing, great. I do think that owning one or more guns may be a good idea for several reasons—defense being only one (hunting, good store of value, etc.). Let’s face facts: if you get to the point that you need to use, or threaten to use, a lethal weapon to defend yourself, you A) are already in serious trouble, and B) have probably made some avoidable mistakes along the way. The single best form of defense that is available to the individual is to ensure that your community is largely self-sufficient, and is composed of individuals who are largely self-sufficient. The entirety of part five of this series will address exactly that topic. Hopefully, America will never get to the point where lethal force must be used to protect your garden, but let’s face it, large parts of the world are already there. In either case, the single best defense is a community composed of connected but individually self-reliant individuals—this is rhizome. If your neighbors don’t need to raid your garden or “borrow” your possessions, then any outside threat to the community is a galvanizing force.

For now, aside from building a resilient community, there are a few things that individuals can do to defend their resiliency. First, don’t stand out. Hakim Bey’s notion of the permanent autonomous zone depends largely on staying “off the map.” How this manifests in individual circumstances will vary wildly. Second, ensure that your base of self-sufficiency is broad and minimally portable. At the risk of seeming like some wild-eyed “Mad Max” doom-monger, brigands can much more easily cart off wealth in the form of sheep or bags of cracked corn than they can in the form of almond trees, bee hives, or a well-stocked pond. Just think through how you achieve your self-sufficiency, and how vulnerable the entire system is to a single shock, a single thief, etc. You don’t have to believe that there will ever be roaming bands of brigands to consider this strategy—it applies equally well to floods, fire, drought, pestilence, climate change, hyperinflation, etc. My article “Creating Resiliency in Horticulture” also addresses this point.

Medicine, Entertainment, & Education

You don’t need to know how to remove your own appendix or perform open heart surgery. You don’t need to become a Tony-award caliber actor to perform for your neighbors. You don’t need to get a doctorate in every conceivable field for the education of your children. But if you understand basic first aid, if you can hold a conversation or tell a story, if you have a small but broad library of non-fiction and reference books, you’re a step ahead. Can you cook a good meal and entertain your friends? Look, human quality of life depends on more than just the ability to meet basic caloric and temperature requirements. The idea of rhizome is not to create a bunch of people scraping by with the bare necessities. Having enough food is great—you could probably buy enough beans right now to last you the next 10 years, but I don’t want to live that way. Most Americans depend on our economy to provide us a notion of quality of life—eating out, watching movies, buying cheap consumables. Minimal self-sufficiency means that we need the ability to provide these quality of life elements on our own. This probably sounds ridiculous to people in the third world who already do this—or to the lucky few in the “West” who have regular family meals, who enjoy quality home cooking, who can carry on enlightening and entertaining conversations for hours, who can just relax and enjoy the simplicity of sitting in the garden. It may sound silly to some, but for others this will be the single, most challenging dependency to eliminate. Again—dependency is the key. I’m not saying that you can never watch E! or go out to Applebee’s. What I am saying is that if you are so dependent on this method of achieving “quality of life” that you will enter the hierarchal system on its terms to access it, you have not achieved minimal self-sufficiency.

Production for Exchange

Finally, beyond minimal self-sufficiency, the individual node should have the capability to produce some surplus for exchange, because this allows access to additional quality-of-life-creating products and services beyond what a single node can realistically provide entirely for itself. This is why minimal self-sufficiency doesn’t require isolationism. It is neither possible nor desirable for an individual or family node to provide absolutely everything desired for an optimal quality of life. While minimal self-sufficiency is essential, it is not essential to produce independently every food product, every tool, every type of entertainment, every service that you will want. Once minimal self-sufficiency is achieved, the ability to exchange a surplus product on a discretionary basis allows the individual node to access the myriad of wants—but not needs—that improve quality of life. This surplus product may be a food item—maybe you have 30 chickens and exchange the extra dozen or two eggs that you don’t consume on a daily basis. Maybe you make wine, olive oil, baked bread, or canned vegetables. Maybe you provide a service—medicine, childcare & education, massage, who knows? The possibilities are endless, but the concept is important.

Practical Considerations in Implementing Rhizome at the Personal Level

Rhizome isn’t an all or nothing proposition—it is possible, and probably both necessary and desirable, to take incremental, consistent steps toward rhizome. Learn how to do more with less. Work to consciously integrate the principles of rhizome into every aspect of your daily life—think about your choices in consumption, then make medium and long-term plans to take bigger steps towards the full realization of rhizome.

And, perhaps most of all, rhizome does not demand, or even endorse, a “bunker mentality.” The single greatest step that an individual can take toward rhizome is to become an active participant in the creation of rhizome in the immediate, local community.

V. Implementing Rhizome at the Community Level

This final essay in this five-part series, The Problem of Growth, looks at implementing rhizome at a community level. Rhizome does not reject community structures in favor of a “bunker mentality,” but rather requires community structures that embrace and facilitate the principles of rhizome at both the personal and community level. Ultimately, a rhizome community is composed of rhizome individual or family nodes—participants who neither depend on the community for their basic survival nor expect to benefit from the community without contributing. Rather, both the individual and the community choose to participate with each other as equals in a non-zero-sum fashion.

The results-based focus of the community is essentially the same as the individual’s, because the community consists of individuals who recognize the ability of the community to help them build resiliency and self-sufficiency in the provision of their basic needs, as well as the ability to access a broader network beyond the community.

Water

The first thing that communities can do is to get out of the way of individuals’ attempts to create water self-sufficiency: remove zoning and ordinance hurdles that prevent people from practicing rainwater collection and storage, or that mandate people keep their front lawns watered. Communities can also address their storm water policies—many communities simply direct storm water into the ocean (see Los Angeles, for example), rather than effectively storing it in percolation ponds, or otherwise retaining it for community use. Communities can also facilitate the collection and sharing of water-collection and efficiency best practices, as well as help people to refine ideas from outside the community in a locally-appropriate manner. The possibilities are endless—as with virtually everything else here, the key is that the community recognize the issue and make a conscious effort to address it.

Food

Again, communities should start by getting out of the way of individuals’ attempts to become food self-sufficient. This means eliminating zoning or ordinances that require lawns instead of vegetable gardens, that prevent the owning of small livestock such as chickens in suburban developments, and even (!) that mandate the planting of non-fruit-bearing trees (on the theory that fruit trees are messy if you forget to harvest them). But communities can also have a very proactive role in facilitating food self-sufficiency. Community gardens are a great place to start, especially where people live in high-density housing that makes individual gardening impracticable. This has been done to great effect in urban areas in Venezuela, for instance. Communities can also foster knowledge and facilitate the sharing of best practices via lecture series, master gardener courses, local gardening extensions, community college courses, or community seed banks for locally appropriate species. Finally, communities should consider encouraging farmers markets to promote local surplus produce, to promote at least regional food self-sufficiency, and to kindle a public appreciation for the quality and value of fresh, seasonal, locally grown foods.

Shelter, Heating, & Cooling

I see the actual implementation of self-sufficient shelters as primarily an individual concern, though communities should certainly consider making communal structures, schools, etc. that conform to these standards. Most significantly, however, communities can work to get government out of the way of people who wish to do so individually. Get rid of zoning requirements that forbid solar installations, graywater, rainwater catchment, or small livestock, or that mandate set-backs and minimum numbers of parking spaces. Pass laws or ordinances that eliminate Home Owners’ Association rules prohibiting vegetable gardens, that mandate lawns, that prevent solar installations, etc. Many Colorado Home Owners’ Associations (HOAs) used to ban the installation of solar panels, but Colorado recently passed a statute that prevents HOAs from banning solar—seems like a good idea to me. The Colorado law certainly isn’t perfect, but it is an example of a very real step that a few people can take, working with their local or state government, to make their community more self-sufficient. If your HOA prevents you from installing solar hot water (or other solar), why not try to get the HOA to change its rules—there may be many other neighbors who want the same thing, and the more self-sufficient your immediate neighbors, the stronger your community, even if that community is “suburbia.” If your HOA won’t change, follow Colorado’s example.

Defense

As with individual defense, I don’t advocate that a community take a bunker mentality and make preparations for a Hizb’Allah-style defense of South Lebanon. I think that could work, and I’ve written about it here, but I think it is the second-worst outcome and something to be avoided if possible. In modern America, it seems obvious to me that it is fully possible for a rhizome community to operate within the umbrella of any current state government, as well as the federal government. However, there are other nations—take Colombia for example—where this is probably not possible. It seems like a very real possibility that the permissive environment America currently enjoys could look much more like Colombia at some point in the future. For that reason, this is an issue that must be taken up on a case-by-case basis by local communities. While I certainly wouldn’t advocate an armed militia patrolling the perimeter of the self-sufficiency conscious town of Willits, California (though some American communities effectively do this already), this kind of “extreme” action may well be a basic requirement for a small village in Colombia that is attempting to institute localized self-sufficiency and rhizome structure.

Medicine, Entertainment, & Education

Communities have a myriad of ways to provide for their own entertainment, without resorting to some canned cable-TV product. Communities can also address the specialized-knowledge problems—education and medicine, as well as gardening and the theory of rhizome—by ensuring that these topics are covered in local school curricula at all levels (public and private), and by making these kinds of learning resources available via a community college, the local library, a lecture series, etc.

Exchange, Information Processing, and Interaction Beyond the Local Community

The possibilities here are numerous, and I’ll just name a few possibilities for consideration: community currency, a community paper or blog, community development micro-loans, sponsoring seasonal fairs or festivals, etc. This is an area ripe for innovation and the sharing of best practices…for additional ideas, see “Going Local” by Michael Shuman.

Practical Considerations in Implementing Rhizome at the Community Level

Just as with implementing rhizome at the individual level, rhizome is not an all-or-nothing proposition for communities. Any step that makes it easier for individuals to move toward rhizome is beneficial. Every community’s situation is different, and the number of ways to combine just the few suggestions provided here is nearly limitless. Customize, come up with new solutions, adapt or reject these ideas as you see fit, and share what works (best practices) and what doesn’t with the world in an open-source manner—but more than anything else, think about how to bring your community closer to rhizome, and then act.

Addressing Free-Riders

Finally, every community must address the problem of free riders. Some people will want to benefit from the community without contributing anything at all. In most cases, normative pressures will suffice, and this is especially true of rhizome, where there isn’t a grand redistributive scheme that allows some people to leech indefinitely off the collected surplus. Still, the problem will arise, and there will always be a need and a place for charity, within rhizome and elsewhere. The most important factor in determining who is worthy of charity and who is a free-rider is the conscious articulation of the requirements for membership: the community gains strength by helping up its least self-sufficient members, but it should do so by helping them to fish, rather than repeatedly just giving them fish to eat. Rhizome communities need not be heartless—in fact, they shouldn’t be heartless, not just on moral grounds, but on selfish grounds of building a more resilient community—but they should exert normative pressures to demand participation roughly commensurate with capability.

VI. Conclusion

I hope that this five-part series addressing the Problem of Growth has been useful. One of the cornerstones of my personal philosophy is that growth is the greatest challenge facing humanity, and that shifting from a hierarchal to a rhizome form of social organization is our best chance to “solve” that problem. I also think that rhizome is valuable because it is a scale-free solution: I think that it can help to solve our international and national problems, but even if that fails it can certainly improve our individual situations. Ultimately, removing ourselves, one at a time, from being part of the cause of humanity’s problems cannot be a bad thing. As Gandhi said, “Be the change that you wish to see in this world.” That seems particularly applicable to a scale-free solution!
I think that this discussion is particularly relevant within the context of Peak Oil and Peak Energy.

Infinite growth requires, eventually, infinite energy. Assume that we develop a perfect fusion generator, or that we cover the entire surface of the Earth with 100% efficient solar collectors. None of this actually solves the problem of growth—it just shifts the burden of dealing with that problem onto our grandchildren, or perhaps even 100 generations from now. It’s easy to take the self-centered perspective that such burden-shifting is acceptable, but I find it fundamentally morally unacceptable. This (rather long) essay begins with that moral assumption—if you don’t share it, then you will likely have found a preferable solution, or perhaps denied that growth even represents a problem to begin with. That’s fine by me—I am trying to present one possible solution without claiming that it is the only possible solution. I hope you have found it useful.

The original five parts of this essay can be found here.

How To Make Money on Energy Efficiency
Friday, 21 Mar, 2008 – 7:29 | No Comment

How can the power companies that now make money selling electricity make money instead by helping Americans conserve energy?
It’s a question that pervades the impending transformation to a low-carbon energy economy. Power plants, already massively …

Andris Piebalgs on Bio Fuels
Sunday, 16 Mar, 2008 – 22:15 | No Comment

This week European Energy Commissioner, Andris Piebalgs, moves the debate onto the key issue of bio-fuels. The comment I left on his blog pursued the theme of ERoEI and energy efficiency. If you feel strongly about bio-fuels then PLEASE call by Andris Piebalgs’ blog and leave him a polite, forceful, well documented message.

Andris Piebalgs drives a Saab 9-5 that runs on bio-ethanol. By my estimation, the energy efficiency of this vehicle is a meagre 5%. Andris no doubt believes he is doing the right thing and I believe he cares a great deal about European energy. And yet he is driving one of the least energy efficient vehicles ever produced – and he is a physicist. How on Earth have these totally bizarre circumstances come about?

[break]

So how have I determined the energy efficiency of a bio-fuel Saab to be 5%? The calculation is as follows:

I have assumed the ERoEI (energy return on energy invested) of temperate-latitude bio-ethanol is 1.2 (sources here and here). Hence the energy efficiency of fuel production is:

(ERoEI − 1) / ERoEI = 0.2 / 1.2 ≈ 16.7%

Assuming the internal combustion engine efficiency is 30% (combined urban cycle) yields an overall efficiency of 0.3 × 0.167 ≈ 5%.
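The same arithmetic can be written as a small Python sketch, using the article's own figures (ERoEI of 1.2 for temperate bio-ethanol and 30% engine efficiency):

```python
def fuel_production_efficiency(eroei):
    """Fraction of the fuel's energy that is a net gain: (ERoEI - 1) / ERoEI."""
    return (eroei - 1) / eroei

def well_to_wheels_efficiency(eroei, engine_efficiency):
    """Fuel-production efficiency multiplied by engine efficiency."""
    return fuel_production_efficiency(eroei) * engine_efficiency

print(round(fuel_production_efficiency(1.2), 3))       # → 0.167
print(round(well_to_wheels_efficiency(1.2, 0.30), 3))  # → 0.05
```

Varying the ERoEI input shows how sensitive the conclusion is: at an ERoEI of 2, fuel-production efficiency rises to 50%, and the overall figure to 15%.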

“And how have these bizarre circumstances come about?” – the answer to that I believe lies in an obsession with CO2 emissions that has lost sight of energy efficiency.

First of all, when biofuels replace fossil fuels, greenhouse emissions are almost always lower. Biofuels are produced from plants that absorb the CO2 they generate when they are burnt. This has to take into account the fertiliser used to produce the crops, the energy needed to convert them into liquid fuels and so on. On this basis, biofuels produced in Europe from rape seed, wheat and sugar beet, typically reduce emissions by 20-50% compared to the oil they replace. Biofuels from sugar cane, waste vegetable oil and second generation biofuels can save 75% or more. Under our proposal, all biofuels used for the EU target will have to save, at least, 35%.

I have to say that in this statement the claims made about CO2 conservation seem accurate – proving that the principles involved are understood by the EU Commission. It is just that the energy cost / energy efficiency has not been taken into account.

Variations in ERoEI with CO2 conserved assuming the energy input to bio fuel production is from fossil fuel.

Andris goes on to say:

And this is why biofuels are so important. Today, there are only three ways to reduce greenhouse emissions: the shift from polluting modes to more energy efficient ones (i.e. rail, short sea shipping, collective transport); the promotion of less consuming cars, by establishing CO2/km targets; and biofuels.

I’m sorry this is just not true. The middle of the three options is of course the most sensible – to concentrate upon energy efficient vehicles. But what about:

1. Electric cars running on renewable or nuclear electricity. This is the future of vehicular transportation – so why is the European Commission not sinking billions into this?

2. Pneumatic cars, which I know very little about but which are reported to be a viable option.

3. Reducing the speed limits across Europe, which will save fuel (the number one priority!), reduce pollution, and save lives.

Andris, I would like to emphasise how much we appreciate the opportunity to present these arguments on your blog. In your first blog entry you said you were here to listen. I sincerely hope that is the case and that following the period of listening and analysis that there is a period of action.

Prof Peter Newman Diamonds of Hope (Part A)
Friday, 14 Mar, 2008 – 21:42 | No Comment

Peter Newman is a hero of the sustainable transport movement in Australia. In this video he talks about the arrival of record oil prices and peak oil.

Peter Newman has spent thirty years of his career trying to prepare Australian cities for peak oil and he is one of the key people who helped push Perth towards its already very successful new rail infrastructure.


Click to watch the video

Food to 2050
Monday, 10 Mar, 2008 – 7:40 | No Comment



Average United States yields per unit area for various crops, 1900-2007. Yields are expressed as a multiplier of the 1900-1935 average. Source: National Agricultural Statistics Service.

[break]

This post continues an exercise I began a month or so ago of trying to figure out how civilization could be moved to a mostly sustainable footing by 2050, while still being recognizable as civilization, and in particular allowing some continued level of economic growth between now and then, especially in the developing countries. Let me remind you of the parameters of the exercise:

  • Population: The global population is able to grow and go through its demographic transition with death rates continuing to go down. No die-offs.
  • Economy: The world economy is able to grow on average over the period – modestly in developed countries, faster in developing countries.
  • Carbon emissions: The global energy infrastructure will be mainly replaced with non-carbon-emitting energy sources by the end of the period, and residual emissions will be rapidly diminishing.
  • Fossil fuels: I assume that peak oil is here about now but that declines will be governed by the Hubbert model (and thus will be gradual). I assume natural gas and coal are globally plentiful enough that climate policy is required to prevent their full use.
  • Technology: I do not assume any massive breakthroughs – no technological miracles that solve problems in ways completely unknown or untested today. However, where technological sectors have long established rates of progress in key metrics, I extrapolate the metric to continue improving at the historic rate (eg the economics of solar power, or the yields/acre of agriculture are assumed to keep improving on the historical trajectory).
  • Impact on wild ecosystems: Developed countries are assumed to maintain the protections they currently have in place (for national parks, wildernesses etc). Developing countries are assumed to exploit their unused land up to the point of best current practices for developed countries. Whatever impact on ecosystems arises from climate change due to past carbon emissions and the tail of emissions to 2050 is viewed as unavoidable.
  • Conservatism: Other than the above, I use the overarching principle of trying to assume as little change in the way the world works as possible – I assume it remains a more-or-less free market world, in which national governments regulate their own countries to temper the worst excesses of the free market and periodically enter into treaties on the more pressing global problems. I assume it remains full of highly imperfect humans mostly struggling to improve their own circumstances. I assume people are willing to come together and take collective action for the common good, but only when the need for that action has become so overwhelming and immediate as to be irrefutable.

In Powering Civilization to 2050 I argued that it is potentially feasible to power civilization with a mix of solar, wind, and nuclear energy, with the transition well on the way to completion by 2050. (Luis de Sousa made a broadly similar argument in Olduvai Revisited 2008). This would require a period of belt tightening and conservation in the next couple of decades, but once the transition had overcome the critical threshold (as solar energy in particular became cheap), I suggested energy in general would get cheap again. I adopted the UN medium population projection, which has population at about 9 billion by 2050, with growth slowing sharply. Making plausible assumptions for economic growth between now and 2050 if energy was available, we got to a world GDP of about $350 trillion in 2050 (in 2006 purchasing power parity dollars), versus about $70 trillion in 2007.
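As a quick sanity check on those figures, a fivefold increase in world GDP over 43 years corresponds to an average growth rate of roughly 3.8% per year:

```python
# Implied average annual growth rate: $70T in 2007 to $350T in 2050.
years = 2050 - 2007
growth = (350 / 70) ** (1 / years) - 1
print(f"{growth:.1%}")  # → 3.8%
```

That is a fast but historically plausible long-run global rate, consistent with the scenario's assumption of modest growth in developed countries and faster growth in developing ones.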

If the average global citizen was significantly wealthier in 2050, they would undoubtedly want to drive more. The switch to primarily electrical energy sources for civilization would preclude doing this with all liquid fuels. In Four Billion Cars in 2050? I argued that, given that the average citizen will be living in a dense third world city by 2050, we can assume rates of ownership typical of the most car-free corners of western Europe at the moment (Holland), which gives rise to a few billion cars in 2050. I further argued that it seems feasible that this many plug-in hybrids could be built – there appears to be enough lithium for the batteries – and run on less than 10 mbd of liquid fuels.

In this piece I want to look at another area that many people think is likely to be a critical bottleneck to civilization continuing – the area of food, agriculture, and soil. I am of course not an expert in these areas, but happily there is a lot of excellent scholarship and scenario building that I can lean on. My task is reduced to reporting of the existing science, with some modest adjustments to reflect where my assumptions differ from those of published scenarios (most especially the assumption of a near-term peak in oil supply, and a full-speed effort to convert society to carbon-free energy sources.)

Let’s begin with two very helpful UN Food and Agriculture Organization reports: World agriculture: towards 2015/2030, and the sequel World Agriculture: Towards 2030/2050. What these reports do is basically look at projections for population and economic growth and then estimate how much food people would want in the future, and what quantity of agricultural commodities would be required to fulfill that demand. The first report focusses a lot more on the supply-side factors of how this could be done, while the second report extends the analysis out further in time but confines itself much more to demand side considerations.

The input assumptions about population and world GDP are slightly different than mine, but close enough that I am just going to adopt their food scenario wholesale, rather than trying to construct my own from first principles. The differences would be small – much smaller than the other uncertainties in the problem. Let me first summarize their scenario, and then we will start to explore the potential bottlenecks that might prevent achievement of this much food production. (However, I strongly encourage readers that care about where their food is going to be coming from in the future to take the time and read the FAO reports themselves.)

Let’s start with a look at what the FAO scenario has for average nutrition. This next graph shows both history and projections to 2050 for daily dietary energy (in Kilocalories/day/person) in various regions of the world, as well as the global average.



Per capita food availability 1970-2050 for various regions, together with world average. Values for 2000 and before are data (left of the vertical red line), 2010 onwards are projections (right of vertical red line). Source: Table 2.1 of UN Food and Agriculture Organization, World Agriculture: Towards 2030/2050.

As you can see, the history is that most regions of the world have been getting more and more food. The exceptions are some of the formerly communist countries, which suffered a partial collapse of their societies as they attempted to transition to a different economic system. The FAO projects that as the developing countries continue to grow faster than the developed world, they will be able to afford more food, and thus they will continue to approach, but not completely achieve, developed world levels of (over)feeding.

I could quibble with a few things here – I might guess that wealthier developing countries will get closer to current developed country averages by 2050, and I wonder about the sharp trend break between the past and the projections in the developed world. Still, these are minor issues – I think this has to be in the right ballpark for any scenario that assumes continued improvement of economic conditions in the developing world, and no major societal collapses (which is what we are trying to figure out how to avoid).

If we take the FAO’s scenario breakout of food groups (which they give by weight on a per-capita basis) and multiply by population, we get the following for total food demand:



Total food requirement 1970-2050 by major food types. Values for 2000 and before are data (left of the vertical red line), 2010 onwards are projections (right of vertical red line). Source: Table 2.7 of UN Food and Agriculture Organization, World Agriculture: Towards 2030/2050 and UN Medium Population Scenario for population figures. Note that I did not include “Other food”, which is only given in calorific terms in the table, and constitutes less than 10% of calories. Fruits and green vegetables would be included under that category.

As you can see, by 2050 the world would need to be producing about 50% more food than it is today (by weight – somewhat more in terms of energy in crops, since the meat component grows by more than 50%). That is a slower rate of growth than the roughly doubling of planetary food production over the last 40 years. However, it’s still an awful lot of extra food to produce – the required absolute increase in food production is similar in size to what has been achieved in the last forty years.
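As a sanity check on the shape of that arithmetic, here is a minimal sketch of the per-capita-times-population calculation. The per-capita figures below are illustrative placeholders, not the FAO’s actual Table 2.7 values:

```python
# Total demand (million tonnes/yr) from per-capita supply (kg/person/yr) and population.
# The per-capita numbers here are hypothetical, for illustration only.
per_capita_kg = {"cereals": 160, "meat": 45, "milk": 90}  # kg/person/year (made up)
population_billions = 9.1                                  # roughly the UN medium scenario for 2050

def total_demand_mt(per_capita, pop_billions):
    """Convert per-capita kg/year into total million tonnes/year."""
    # kg/person x 1e9 persons = 1e9 kg = 1 million tonnes,
    # so the multiplier is simply the population in billions.
    return {food: kg * pop_billions for food, kg in per_capita.items()}

print(total_demand_mt(per_capita_kg, population_billions))
```

With real per-capita weights from Table 2.7 and the UN population series, this is exactly the multiplication that produced the chart above.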

Let’s now consider a variety of potential bottlenecks to achieving this kind of increase in food production. One major area of concern (water) I will reserve for its own future piece, but I address the other big potential constraints that I am aware of.

Land Use and Crop Yields

The doubling of global food production since the 1960s has not come about because of expanding cropland. The world has about 14.8 billion hectares of land area, and the uses of it over the last few decades are as follows:



Major classes of global land use 1961-2003. Source: FAO.

As you can see, the areas of cropland and pasture have increased slightly, at the expense of forests and other land, but the shifts are small percentage-wise. Instead, increased food production for the planet’s extra billions of humans has largely come from big increases in agricultural yields.

I’m going to start with some yield data for the US, where we have long time series on yields for a number of crops. After that, we’ll discuss the global situation. I have taken National Agricultural Statistics Service data on average US yields and reexpressed them on a common basis as a multiplier of the 1900-1935 average (or, for those crops where the series doesn’t start till after 1900, of the average from whenever it does start until 1935).
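The re-expression is just a division by the base-period average; a minimal sketch, using made-up yield numbers rather than the actual NASS series:

```python
# Re-express a yield series as a multiple of its 1900-1935 average, so that
# crops measured in different units (bu/acre, lb/acre, ...) share one axis.
def normalize(years, yields, base_end=1935):
    """Divide every value by the average of the values up to base_end."""
    base = [y for yr, y in zip(years, yields) if yr <= base_end]
    base_avg = sum(base) / len(base)
    return [y / base_avg for y in yields]

# Hypothetical corn yields in bushels/acre -- illustrative, not NASS data
years = [1900, 1920, 1935, 1970, 2007]
bushels = [26, 28, 27, 80, 150]
print(normalize(years, bushels))
```

The flat-then-takeoff pattern in the chart survives this rescaling, which is the point: it lets very different crops be compared on one graph.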



Average United States yields per unit area for selected crops, 1900-2007. Yields are expressed as a multiplier of the 1900-1935 average. Source: National Agricultural Statistics Service.

All the series show a roughly similar pattern. They were all fairly flat (with noise) until sometime in the late 1930s or 1940s. Then they all took off and began growing roughly linearly (again with noise). Modern yields are anywhere from 2.3 to 6.5 times greater than yields in the early twentieth century. Although some series have had periods of lagging for a decade or two (eg peanuts after 1983, dried beans – garbanzos and the like – after 1990), on the whole most of the series look like they are still increasing – there is no obvious pattern of yields flattening off yet. I encourage you to stare at this remarkable data for a long time. It’s really worth thinking about the implications of it. Here are a few conclusions I draw.

Firstly, mechanization (and fossil-fuel powered machinery) is not the main cause of modern yields. Steam tractors were in widespread use in the late 1800s and early 1900s:



Steam Tractor in action in Ontario, 1916. Source: Ontario Govt Photo Archive.

The first gasoline powered tractor to be mass produced was introduced by Ford in 1917. Yet the yield take-off doesn’t begin until 1940, and is almost certainly due to the agricultural innovations that comprise the green revolution. As The Future of Crop Yields and Cropped Area explains it:

The Green Revolution strategy emerged from a surprising confluence of different lines of agricultural research (Evans, 1998) – the development of cheap nitrogenous fertilizers, of dwarf varieties of major cereals, and of effective weed control. Nitrogenous fertilizers increase crop production substantially, but make plants top-heavy, causing them to fall over. The development of dwarf varieties solves this problem, but at the cost of making plants highly susceptible to weeds, which grow higher than the dwarf plants, depriving them of light. The development of effective herbicides removed this problem. Further Green Revolution development focused on crop breeding to increase the harvest index – the ratio of the mass of grain to total above-ground biomass.

Secondly, anyone who wants to suggest that the world can be fed other than through industrial agriculture has some explaining to do about this data. Every crop shows yields prior to the green revolution that were flat and a small fraction of modern yields. If we returned to yields like that, either a lot of us would be starving, or we’d be terracing and irrigating most of the currently forested hillsides on the planet for food. While shopping for locally grown produce at your nearest organic farmer’s market, stop and give a moment of thanks for the massive productivity of the industrial enterprise that brings you, or at least your fellow citizens, almost all of your calorie input.

Which raises a third important point. Food = Area Cropped x Average Yield. If average yields had not increased like this, humanity’s impact on natural ecosystems would be much greater. It’s true that industrial agriculture has a lot of impacts (nitrogen runoff and the like). However, the alternative would probably have been worse, since it would have required us to intensively exploit enormous areas of fragile, and currently less intensively exploited, land.

Fourthly, the period of greatest global warming, since 1950, coincides with the explosion of yields. I do not suggest that global warming caused increased yields. But at any rate, it would be hard to argue that industrial agriculture yields cannot grow rapidly in the face of the kind of warming we have seen to date: they just did.

Well, is the global situation the same, or is this US data unrepresentative? I don’t have access to as much data, but roughly, yes, it’s the same:



Average global cereal yields, 1961-2000. T. Dyson: World Food Trends: A Neo-Malthusian Prospect?, compiled from FAO data.

As you can see, global cereal yields are on the same roughly linear upward trajectory since 1961. Cereals are by far the most important food crop since not only do people eat a lot of them directly, but they also account for much of the input to the meat and dairy food groups that people eat, and thus are the base for the bulk of human calorie intake.

So obviously the critical question is whether yields can continue to increase in this manner. If we can just project out the linear increase, then clearly a linearly increasing amount of food from a roughly constant amount of land is feasible, and humanity will be able to feed itself without having too much further impact on other ecosystems. On the other hand, if yields fail to increase, then we will be faced with unpleasant tradeoffs like trying to farm fairly unsuitable regions (think tropical rainforests, or the hilly parts of the western US), or not having enough food. So are we near some kind of theoretical yield limit?

Some people seem to think so. Lester Brown, who has been issuing alarming prognostications about food for several decades now, writes in Chapter 4 of his book Outgrowing the Earth:

Although the investment level in agricultural research, public and private, has not changed materially in recent years, the backlog of unused agricultural technology to raise land productivity is shrinking. In every farming community where yields have been rising rapidly, there comes a time when the rise slows and eventually levels off. For wheat growers in the United States and rice growers in Japan, for example, most of the available yield-raising technologies are already in use. Farmers in these countries are looking over the shoulders of agricultural researchers in their quest for new technologies to raise yields further. Unfortunately, they are not finding much.

From 1950 to 1990 the world’s grain farmers raised the productivity of their land by an unprecedented 2.1 percent a year, slightly faster than the 1.9 percent annual growth of world population during the same period. But from 1990 to 2000 this dropped to 1.2 percent per year, scarcely half as fast.

The argument in the second paragraph doesn’t hold water for me. Population has been increasing pretty much linearly in recent decades, and agricultural yields have also been increasing pretty much linearly – I don’t see any break from that pattern in the 1990-2000 decade. Of course, a linear rise will look like a dropping exponential growth rate, and Brown is careful to point out only the slowing in the yield growth rate. What he doesn’t tell you is that world population growth had also dropped, to only 1.4% per year, during 1990-2000. In general, food prices until very recently were in a multi-decade secular decline, indicating that food production was not under serious supply-side constraint until the last few years:
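The point about linear growth masquerading as a “slowing” growth rate is easy to demonstrate numerically; a small sketch using an invented, perfectly linear yield series:

```python
# A series growing by a constant absolute amount every year shows a falling
# percentage growth rate even though nothing has slowed in absolute terms.
def pct_growth(series):
    """Year-on-year percentage growth rates of a series."""
    return [(b - a) / a * 100 for a, b in zip(series, series[1:])]

# Hypothetical yields rising linearly by 0.05 t/ha per year, 1950-1990
yields = [1.0 + 0.05 * t for t in range(41)]
rates = pct_growth(yields)
print(round(rates[0], 2), round(rates[-1], 2))  # ~5.0% early, ~1.69% late
```

So a halving of the yield *growth rate*, as Brown reports, is exactly what an unbroken linear trend predicts.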



Ratio of crude food/feed producer price index to all US consumer prices, Jan 1969-Dec 2007. Source: St Louis Fed.

And the argument in the first Brown paragraph I quoted doesn’t seem to match how the agricultural scientists themselves are feeling. For example, Science reported last week:

A decade ago, sequencing the maize genome was just too daunting. With 2.5 billion DNA bases, it rivaled the human genome in size and contained many repetitive regions that confounded the assembly of a final sequence. But last week, not one but three corn genomes, in various stages of completion, were introduced to the maize genetics community. In addition, researchers announced the availability of specially bred strains that will greatly speed tracking down genes involved in traits such as flowering time and disease resistance. These resources are ushering in a new era in maize genetics and should lead to tougher breeds, better yields, and biofuel alternatives. “We’re sitting on very exciting times,” says Geoff Graham, a plant breeder at Pioneer Hi-Bred International Inc.

The geneticists are well on the way to having complete genome sequences for thousands of corn varietals from all over the world. If I were a corn geneticist, I’d be pretty excited too.

A more grounded assessment of the issue seems to be the FAO’s discussion in World agriculture: towards 2015/2030:

The slower growth in production projected for the next 30 years means that yields will not need to grow as rapidly as in the past. Growth in wheat yields is projected to slow to 1.1 percent a year in the next 30 years, while rice yields are expected to rise by only 0.9 percent per year.

Nevertheless, increased yields will be required – so is the projected increase feasible? One way of judging is to look at the difference in performance between groups of countries. Some developing countries have attained very high crop yields. In 1997-99, for example, the top performing 10 percent had average wheat yields more than six times higher than those of the worst performing 10 percent and twice as high as the average in the largest producers, China, India and Turkey. For rice the gaps were roughly similar.

National yield differences like these are due to two main sets of causes:

Some of the differences are due to differing conditions of soil, climate and slope. In Mexico, for example, much of the country is arid or semi-arid and less than a fifth of the land cultivated to maize is suitable for improved hybrid varieties. As a result, the country’s maize yield of 2.4 tonnes per ha is not much more than a quarter of the United States average. Yield gaps of this kind, caused by agro-ecological differences, cannot be narrowed.

Other parts of the yield gap, however, are the result of differences in crop management practices, such as the amount of fertilizer used. These gaps can be narrowed, if it is economic for farmers to do so.

To find out what progress in yields is feasible, it is necessary to distinguish between the gaps that can be narrowed and those that cannot. A detailed FAO/IIASA study based on agro-ecological zones has taken stock of the amount of land in each country that is suitable, in varying degrees, for different crops. Using these data it is possible to work out a national maximum obtainable yield for each crop.

This maximum assumes that high levels of inputs and the best suited crop varieties are used for each area, and that each crop is grown on a range of land quality that reflects the national mix. It is a realistic figure because it is based on technologies already known and does not assume any major breakthroughs in plant breeding. If anything, it is likely to under-estimate maximum obtainable yields, because in practice crops will tend to be grown on the land best suited for them.

The maximum obtainable yield can then be compared with actual national average yield to give some idea of the yield gap that can be bridged. The study showed that even a technologically progressive country such as France is not yet close to reaching its maximum obtainable yield. France could obtain an average wheat yield of 8.7 tonnes per ha, rising to 11.6 tonnes per ha on her best wheat land, yet her actual average yield today is only 7.2 tonnes per ha.

For example:


Gap between actual national yields and estimated yield with best currently known varietals and inputs. Source: FAO report, World agriculture: towards 2015/2030

And so,

Similar yield gaps exist for most countries studied in this way. Only a few countries are actually achieving their maximum obtainable yield. When real prices rise, there is every reason to believe that farmers will work to bridge yield gaps. In the past, farmers with good access to technologies, inputs and markets have responded very quickly to higher prices. Argentina, for example, increased her wheat production by no less than 68 percent in just one year (1996), following price rises, although this was done mainly by extending the area under wheat. Where land is scarcer, farmers respond by switching to higher-yielding varieties and increasing their use of other inputs to achieve higher yields.

It seems clear that, even if no more new technologies become available, there is still scope for increasing crop yields in line with requirements. Indeed, if just 11 of the countries that produce wheat, accounting for less than two-fifths of world production, were to bridge only half the gap between their maximum obtainable and their actual yields, then the world’s wheat output would increase by almost a quarter.
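To get a feel for the magnitudes involved, here is the half-the-gap arithmetic applied to the France wheat figures quoted above (this is my own illustrative calculation, not the FAO’s eleven-country estimate):

```python
# Bridging half the gap between actual and maximum obtainable yield,
# using the French wheat numbers from the FAO passage above (t/ha).
actual, obtainable = 7.2, 8.7
bridged = actual + (obtainable - actual) / 2   # yield after closing half the gap
gain = bridged / actual - 1
print(round(gain * 100, 1))  # ~10% more output from the same area
```

A roughly 10% gain for an already technologically advanced producer, from known varieties and inputs alone, makes the FAO’s quarter-of-world-output figure for less advanced producers look plausible.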

Another way to try to get at the issue is to look at how current yields compare to the theoretical potential of photosynthesis. This is generally expressed as net primary productivity (NPP) – the amount of carbon that plants can fix, exclusive of that used to power their own respiration. The net primary productivity is the photosynthetic product that is available to be eaten by people and other animals, rot into the soil, etc. Here is a map of the fraction of net primary productivity appropriated by humans published by Haberl et al last year in the Proceedings of the National Academy of Sciences, which I take to be a decent representative of the state-of-the-art in this kind of calculation:


Global distribution of fraction of potential net primary productivity appropriated by humans. Source: Haberl et al: Quantifying and mapping the human appropriation of net primary production in earth’s terrestrial ecosystems

You might look at the red – 60%-80% appropriation of NPP in many of the world’s key crop growing areas, and think there wasn’t enough head room for another 50%+ increase in yield in those areas. However, it’s important to understand exactly how the accounting in these calculations is done. Let’s consider a piece of the US midwest that used to be tall-grass prairie and is now under corn. What Haberl et al would do is first use a vegetation model (specifically, this one) to establish that it would be a prairie there absent human intervention, and figure out how much carbon the prairie would have fixed as NPP. That quantity they call NPP0 (for that particular area – they compute NPP0 for every cell in a global grid). So this is an estimate of the theoretical carbon fixation in the absence of any human influence. In particular, this is with the rainfall that falls naturally – carbon fixation in actual use could potentially exceed this if the crop was irrigated.

Then they would run the model again, but constrained to have cornfields rather than prairies. The carbon fixed by the model in that scenario would be NPPact. Thus a model estimate of the actual carbon fixation in the actual human use of the area.

Next, they would figure out NPPh which would be basically the carbon in the harvested corn based on national agricultural statistics (and in agricultural residues if those were harvested and statistically tracked also, but not likely in the case of corn). So NPPh is the part that we humans really use (either by eating or feeding to our animals).

Given the actual NPPact and the NPPh, they would then compute the difference, NPPt – basically the carbon in the corn stover, which gets returned to the ground, eaten by mice, or whatever happens to it.

So then the human appropriation of net primary productivity (HANPP) is defined as 1 – NPPt/NPP0. That is to say, if you look at the carbon that the prairie would have fixed, and then the carbon in the corn-stover, the difference is what is considered to be human appropriated. And that’s the thing in the map that’s 60-100% in the midwest (and other heavily utilized major cropland areas). However, this is not the same as the theoretical yield. In particular, a lot of the appropriated carbon comes about due to the difference between NPP0 and NPPact – the corn field doesn’t fix as much carbon as the prairie, probably mainly because it starts the season out as bare soil and has to grow an annual crop from seed, instead of being a set of perennial grasses that can sprout from last year’s roots and cover the available area in chlorophyll much faster.

Let’s look at their Table 2 to make this clearer. This table shows the global breakdown of HANPP by food class. If we look at the “Cropping” category, we can see the different figures.


Summary of human appropriation of net primary productivity. NPP0 is modeled carbon fixation in wild condition. NPPact is carbon fixation in actual human usage. NPPh is carbon harvested or unfixed by harvest. NPPt is residual carbon flowing into ecosystem. Source: Haberl et al: Quantifying and mapping the human appropriation of net primary production in earth’s terrestrial ecosystems

As you can see, the average m2 of cropland (worldwide) would fix 0.6kg of carbon if it wasn’t actually a field, but instead was covered in whatever the climatic climax vegetation is in that location. As a square meter of a field instead, it fixed 0.4kg of carbon, and of that humans got, on average, 0.3kg as food and straw etc, leaving 0.1kg to go to the ground. So the HANPP is considered to be 5/6 (1 – 0.1/0.6). (The authors insist on three significant figures (83.5%), but I’m skeptical that the calculations are really that accurate.) However, hopefully it should be clear by now that that doesn’t mean there’s a theoretical limit of only increasing yield by a further 1/5. Instead, there are multiple targets for the agronomists and geneticists to go after. The gap between the 0.4kg of NPPact and the 0.6kg NPP0 could be addressed with plants that had a longer growing season, covered the ground earlier, etc. To the extent some cropland is water-limited, irrigation could potentially increase the total NPP feasible. To the extent the 0.3kg of NPPh is showing up as straw rather than food, that could potentially be increased further.
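The accounting can be condensed to a few lines; a sketch of the HANPP definition as described above, plugged with the global cropland averages from Table 2:

```python
# HANPP as defined by Haberl et al: 1 - NPP_t / NPP_0, where NPP_t is the
# residual carbon left flowing into the ecosystem and NPP_0 the modeled
# carbon fixation of the undisturbed ("potential") vegetation.
def hanpp(npp0, npp_act, npp_h):
    npp_t = npp_act - npp_h   # residual: fixed in actual use, minus harvest
    return 1 - npp_t / npp0

# Global cropland averages from their Table 2, in kgC per m^2 per year
print(round(hanpp(0.6, 0.4, 0.3), 3))  # ~0.833, the "5/6" in the text
```

Note that the harvested fraction NPPh enters only indirectly, which is why a high HANPP number does not by itself cap the harvestable yield.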

A few decades down the road, one imagines heat-loving genetic mutant corn plants that pop up in the spring from perennial roots, promptly cover the ground with leaves that flatten themselves to the soil, and then start spitting out corn kernels, which can be harvested several times a year. It might not look much like a corn plant, but made into Doritos, people would probably still eat it (well, Americans would, anyway).

In short, another factor two of global cropland yields seems not to be ruled out on theoretical grounds. However, much more than that would appear to require the geneticists to come up with better photosynthesis (black plants basically – on which there has been no progress, as far as I understand).

Finally, it’s worth mentioning that the FAO thinks there is considerable potential to use more land for agriculture:

There is still potential agricultural land that is as yet unused. At present some 1.5 billion ha of land is used for arable and permanent crops, around 11 percent of the world’s surface area. A new assessment by FAO and the International Institute for Applied Systems Analysis (IIASA) of soils, terrains and climates, compared with the needs of major crops, suggests that a further 2.8 billion ha are to some degree suitable for rainfed production. This is almost twice as much as is currently farmed.

Here’s the breakdown for where the alleged potential cropland is:


Regional breakdown of land considered available for cropping, compared to land in present use for that purpose. Source: FAO report World agriculture: towards 2015/2030

However, “much of the land reserve may have characteristics that make agriculture difficult, such as low soil fertility, high soil toxicity, high incidence of human and animal diseases, poor infrastructure, and hilly or otherwise difficult terrain.” Caveat emptor!

If you look carefully at this figure – with the available land mainly in South America and Sub-saharan Africa, and the HANPP map above, you’ll realize that much of what the FAO is talking about is cutting down the remaining tropical rainforests and using them for agriculture. I don’t think that’s a very good idea for a host of different reasons – better that we eat mutant corn, I think. The great bulk of the best land is almost certainly in production already.

Soil Loss

It appears to me that, until recently, there has been a good deal of scientific confusion about the seriousness of soil erosion: estimates of the rate of erosion vary by more than an order of magnitude, and the overall data situation makes global oil reserves look like a model of precision. As such, I don’t think it’s possible to make a clear evaluation of how near-term the threat is globally. My best impression is that it’s regionally quite severe, especially on fragile and marginal lands (dry, steep, or thin-soiled), but is probably not a near-term (next few decades) threat in the core agricultural regions from which most food comes (which tend to be flatter places with deep soils that don’t erode quickly). It is certainly a major concern on the century timescale. However, there are many cultural practices that can help while still allowing good yields and, if I’m reading the literature correctly, erosion appears to be controllable, even within the context of fairly industrial styles of agriculture. Let me quickly sketch some of the debate.

The last global evaluation appears to have been GLASOD done by Oldeman et al and published in 1990. They produced a map which looks like this:


Global map of soil degradation. Source: GLASOD map, as shown in FAO report World agriculture: towards 2015/2030

This looks really bad – everywhere humans are, the soil is degraded, and much of the world’s core crop land is in the “severely degraded” category. However, that has not yet had much noticeable effect on global yields, which have continued to increase by leaps and bounds since then. Moreover, this map was produced by what amounts to a survey of soil scientists, who used their subjective judgement. The instructions for filling out the questionnaire describe how to set up the map cells, and then say:

The next step involves evaluation of the degree, relative extent, recent past rate and causative factors for each type of human-induced soil degradation, as it may occur in the delineated physiographic unit. This evaluation process should be carried out in close cooperation with national and/or international experts with local knowledge of the region. The evaluation process results in a list of human-induced soil degradation types per physiographic unit, ranking them in order of importance.

So this doesn’t sound like a precise, quantitative sort of estimate. And more quantitative estimates are dogged with problems. A central issue is that most soil eroded from place A (let’s say a steep field on the side of a valley) isn’t necessarily lost to cultivation. Instead, it may end up in place B (let’s say the flood plain of the river in the bottom of the valley) where it may still be of use in cultivation.

The US is the best measured place, in that we at least have a national agency charged with regular quantitative assessments of soil erosion (a legacy of the dustbowl years). The last assessment was the 2003 National Resources Inventory.


NRCS maps of US soil erosion in 2003. Source: US National Resources Conservation Service 2003 National Resources Inventory

These estimates are made by applying models (the Universal Soil Loss Equation and the Wind Erosion Equation) to topographical and climate data. The model inputs are things like the rainfall data, the slope of the field, the erodibility of the particular soil, etc. The overall amounts of erosion are decreasing, and the amount is not imminently scary. The current national average of 4.7 tons/acre/year corresponds to a little more than 1 kg/m2/yr, which in turn is about 1mm/year, or an inch in twenty five years. That’s not good, but doesn’t sound like a likely disaster before 2050, particularly given that the rate of erosion is dropping quite rapidly.
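For readers who want to check the unit conversion, here it is spelled out. The soil bulk density is my assumption (real densities vary roughly 1100-1600 kg/m3), which is why the depth figure should be read as “about a millimeter”:

```python
# Convert the NRI erosion figure from short tons/acre/year to kg/m^2/year
# and to mm of soil depth lost per year (assuming ~1200 kg/m^3 bulk density).
SHORT_TON_KG = 907.185   # US short ton in kg
ACRE_M2 = 4046.86        # acre in square meters

def erosion_rates(tons_per_acre, bulk_density=1200.0):
    """Return (kg per m^2 per year, mm of depth per year)."""
    kg_per_m2 = tons_per_acre * SHORT_TON_KG / ACRE_M2
    mm_per_yr = kg_per_m2 / bulk_density * 1000
    return kg_per_m2, mm_per_yr

kg, mm = erosion_rates(4.7)
print(round(kg, 2), round(mm, 2))  # ~1.05 kg/m^2/yr, ~0.88 mm/yr
```

At roughly a millimeter a year, 25mm (an inch) takes a few decades, as stated in the text.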

However, these estimates in one way overstate the problem, because the USLE and WEE are designed to assess how much soil is removed from its original location, but not where that soil goes. Most of it is unlikely to make it all the way out to the ocean; instead it ends up somewhere else, where it may be put to use. An extraordinary paper by Trimble in 1999 assessed the details of where soil went in a single valley in Wisconsin by doing detailed samples and cross sections of the alluvial plains. His estimates of the trends and disposition of soil are as follows:


Disposition of soil erosion in Coon Creek watershed, Wisconsin. Source: S. Trimble Decreased Rates of Alluvial Sediment Storage in the Coon Creek Basin, Wisconsin, 1975-93

Clearly, the soil erosion is decreasing, but also, most of it hasn’t gone that far, and, therefore, could potentially be put back at some point in the future if that becomes economically desirable.

Still, in the long term, it seems that eroding an inch every few decades from upland areas is certainly not sustainable, though it’s not an imminent crisis either. In an important meta-analysis last year, D. Montgomery compiled erosion rates for a wide variety of situations and plotted the following cumulative distribution function for the probability of different erosion rates:


Cumulative distribution function of soil erosion and formation rates from numerous studies around the world. Hollow circles represent rates of soil formation, solid line is geological erosion rates, triangles are soil erosion rates under native vegetation, while diamonds are soil erosion rates under various conservation tillage methods (terracing or no-till agriculture). Solid circles represent plough-based agriculture. Source: D. Montgomery, Soil erosion and agricultural sustainability

The key things to note are these:

  • Rates of soil production and erosion under native vegetation are roughly similar, suggesting soil depths are naturally in equilibrium.
  • Rates of “agricultural” erosion are a couple of orders of magnitude higher, suggesting that ploughing is not a long-term proposition.
  • Rates of “Conservation” erosion are roughly comparable to natural erosion rates under native vegetation. This covers more sustainable management regimes such as terracing and no-till agriculture.

This suggests that the long-term sustainability of industrial agriculture requires the use of no-till farming systems in which ploughing is not done, crop residues are left on the field, and weeds are managed another way (primarily via herbicides today).

Fertilizer

The three major fertilizer nutrients applied in industrial agriculture are Nitrogen (N), Phosphorus (P), and Potassium (K). None appear to be a critical constraint on agriculture to the 2050 timeframe, though there are significant issues with nitrogen in the short term.

Nitrogen fertilizer is manufactured via the Haber-Bosch process, in which nitrogen gas (which forms almost 80% of the atmosphere) is reacted with hydrogen over an iron catalyst at high temperatures and pressures to form ammonia (NH3), which is subsequently reacted with other compounds to form urea, ammonium sulphate, and other compounds used as fertilizer. Presently, almost all the hydrogen input to this process is produced by steam reforming of natural gas, and this is the cause of the short-term problem, since natural gas supplies are problematic and likely to worsen, with both Europe and North America probably at or past peak natural gas. Fertilizer manufacture is exiting these regions and moving to the Middle East, Trinidad, and other places with more natural gas.

However, in the long term, there’s no reason nitrogen fertilizer has to be made from natural gas. In my scenario, in which energy production is dominated by renewable/nuclear electricity by 2050, the natural source of hydrogen for Haber-Bosch is electrolysis of water. Producing nitrogen fertilizer is unproblematic as long as society has ample energy.

The reserves and reserve-base for phosphorus are enormous. According to the USGS, 2006 global production of phosphate rock was 145 million tons, while reserves were 18 billion tons, and the reserve base was 50 billion tons. For the 2050 timeframe, I consider reserve base to be the more appropriate number for the same reasons discussed under lithium. The reserve base for phosphate rock is 350 times larger than 2006 production, so there is no evidence of a problem at present.

Some bloggers are concerned that the Hubbert linearization suggests peak phosphorus has already passed. However, Hubbert linearization is not very reliable if there is no independent evidence to suggest peak is at hand, due to the problem of dual peak structures giving rise to misleading linear regions (eg see the UK oil linearization). In this case, with enormous reserves and stable phosphorus prices (they haven’t varied outside the range of $27-$28/ton from 2002-2006), it seems very unlikely that phosphorus is in trouble. JD has made a similar point (snark warning).

Potassium comes from the mining of potash. The USGS estimates the global reserve base to be 550 times larger than current usage. So potassium is unlikely to limit civilization any time soon.
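The static-lifetime arithmetic behind both of these statements is just a ratio of the USGS numbers quoted above:

```python
# Reserve-base-to-production ratio: years of supply at the current production
# rate, ignoring demand growth (a crude but standard first-order measure).
def reserve_ratio(reserve_base, annual_production):
    return reserve_base / annual_production

# Phosphate rock, USGS 2006 figures quoted above (metric tons)
print(round(reserve_ratio(50e9, 145e6)))  # ~345 years at the 2006 rate
```

Even if demand grew substantially by 2050, ratios in the hundreds of years (345 for phosphate, 550 for potash) leave a wide margin.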

Fuel use in Farming and Food Transport

I don’t have global statistics, but at least in the US, agriculture is a minor user of oil: it used only 2.2% of US oil in 2000. This contrasts with cars and light trucks, which used 40%, heavy trucks, which used 12.7%, air travel at 6.7%, etc. Since agriculture is such a critical industry, we can ensure it is prioritized for oil usage.

Furthermore, all shipping trade uses only 2.5% of US oil. Most of that is shipping things other than food, but the bulk of food transportation is in there too. Amongst critics of globalism, the image of strawberries being flown from Chile is a popular thing to pick on. However, things like strawberries form a minuscule fraction of our diet. A more representative image of global food trade would be a grain ship like this one:


Grain ship docked in Australia.

Shipping is extremely energy efficient – two orders of magnitude better per ton-mile than air freight. Thus, long-haul shipping of food will remain cost-effective long after oil has peaked. Ships can also be run on nuclear power, as the US Navy has been demonstrating for decades.
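The "two orders of magnitude" comparison can be illustrated with typical published energy intensities for freight; the specific numbers below are my own assumption, not from the article:

```python
# Approximate freight energy intensities, in MJ per tonne-km
# (assumed typical values; actual figures vary by ship and aircraft).
ship_mj_per_tkm = 0.2   # large ocean freighter
air_mj_per_tkm = 20.0   # long-haul air freight

ratio = air_mj_per_tkm / ship_mj_per_tkm
print(ratio)  # 100.0: the two orders of magnitude cited in the text
```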

In Conclusion

There seems to be reason for cautious optimism that if other global problems can be solved, food production will not be a critical constraint on civilization to 2050. If industrial agricultural yields maintain their historical trajectory, there will be enough food without needing much more land. In case yields fail to continue increasing, more land is potentially available globally, though likely of poor quality. Soil erosion is an important problem, but not a critical emergency, and can seemingly be solved permanently with no-till farming methods. Fertilizer does not appear to be seriously constrained in the long-term, though nitrogen fertilizer needs to be transitioned away from reliance on natural gas. Agriculture only needs a tiny fraction of global liquid fuel use to operate, and this can be maintained for a long time, since food production is a critical infrastructure.

However, if we were to keep growing the conversion of food into biofuels, all bets would be off.

Other sources

In addition to the sources linked directly above, I consulted the following references

Cassandra’s curse: how “The Limits to Growth” was demonized
Sunday, 9 Mar, 2008 – 10:22 | No Comment

Cassandra’s story is very old: she was cursed always to tell the truth and never to be believed. But it is also a very modern story and, perhaps, the quintessential Cassandras of our age are the group of scientists who prepared and published in 1972 the book titled “The Limits to Growth”. With its scenarios of civilization collapse, the book shocked the world perhaps more than Cassandra had shocked her fellow Trojans when she predicted the fall of their city to the Achaeans. Just as Cassandra was not believed, so it was for “The Limits to Growth”, which today is still widely seen as a thoroughly flawed study, wrong all along. That opinion is based only on lies and distortions but, apparently, Cassandra’s curse is still alive and well in our times.



Above: image from an Athenian red-figure vase of the 5th century BC: Cassandra falls victim to the usual destiny of those who tell inconvenient truths.

[break]

The first book of the “Limits to Growth” series was published in 1972 by a group of researchers at the Massachusetts Institute of Technology: Dennis Meadows, Donella Meadows, Jorgen Randers and William Behrens III. The book reported the results of a study commissioned by a group of intellectuals who had formed the “Club of Rome” a few years before. It examined the evolution of the whole world’s economy by means of a mathematical model based on “system dynamics”, a method developed earlier by Jay W. Forrester. Using computers, a novelty for the time, the LTG world model could keep track of a large number of variables and of their interactions as the system changed with time. The authors developed a number of scenarios for the world’s future under various assumptions. They found that, unless specific measures were taken, the world’s economy tended to collapse at some time in the 21st century. The collapse was caused by a combination of resource depletion, overpopulation, and growing pollution (this last element we would today see as related to global warming).

In 1972, the LTG study arrived in a world that had known more than two decades of unabated growth after the end of the Second World War. It was a time of optimism and faith in technological progress that, perhaps, had never been so strong in the history of humankind. With nuclear power on the rise, no hint that mineral resources were scarce, and population growing fast, it seemed that the limits to growth, if such a thing existed, were so far away in the future that there was no reason to worry. In any case, even if those limits were closer than generally believed, didn’t we have technology to save us? With a car in every garage and the Moon just conquered in 1969, the world seemed all set for a shiny future. Against that general feeling, the results of LTG were a shock.

There is a legend lingering around the LTG report that says it was laughed off as obvious quackery immediately after publication. It is not true. The study was debated and criticized, as is normal for a new theory or idea. But it raised enormous interest, and millions of copies were sold. Evidently, despite the general optimism of the time, the study had given visibility to a feeling that wasn’t often expressed but that was on everybody’s minds. Can we really grow forever? And if we can’t, for how long can growth last? The LTG study provided an answer to these questions; not a pleasant one, but an answer nevertheless.

The LTG study had everything needed to become a major advance in science. It came from a prestigious institution, MIT; it was sponsored by a group of brilliant and influential intellectuals, the Club of Rome; it used the most modern and advanced computation techniques; and, finally, the events that took place a few years after publication, the great oil crisis of the 1970s, seemed to confirm the vision of the authors. Yet the study failed to generate a robust current of academic research and, a couple of decades after publication, the general opinion about it had completely changed. Far from being considered the scientific revolution of the century, by the 1990s LTG had become everyone’s laughing stock: little more than the rumination of a group of eccentric (and probably slightly feebleminded) professors who had really thought that the end of the world was near. In short, Chicken Little with a computer.

The reversal of fortunes of LTG was gradual and involved a debate that lasted for decades. At first, critics reacted with little more than a series of statements of disbelief, which carried little weight. There were a few early papers carrying more in-depth criticism, notably by William Nordhaus (1973) and by a group of researchers of the University of Sussex who went under the name of the “Sussex Group” (Cole 1973). Both studies raised a number of interesting points but failed to demonstrate that the LTG study was flawed in its basic assumptions.

These early papers by Nordhaus and by the Sussex group already showed an acrimonious streak that became common on the critics’ side of the debate: political criticism, personal attacks and insults against the LTG authors, and a generally rude attitude. For instance, the editor of the journal that had published Nordhaus’ 1973 paper refused to publish Forrester’s response to it. With time, the debate veered more and more toward the political side. In 1997, the Italian economist Giorgio Nebbia noted that the reaction against the LTG study had come from at least four different fronts. One was those who saw the book as a threat to the growth of their businesses and industries. A second was professional economists, who saw LTG as a threat to their dominance in advising on economic matters. The Catholic world provided further ammunition for the critics, piqued at the suggestion that overpopulation was one of the major causes of the problems. Finally, the political left in the Western world saw the LTG study as a scam by the ruling class, designed to trick workers into believing that the proletarian paradise was not a practical goal. And Nebbia’s list is clearly incomplete: it leaves out religious fundamentalists, the political right, believers in infinite growth, politicians looking for easy solutions to all problems, and many others.

Together, these groups formed a formidable coalition that guaranteed a strong reaction against LTG. This reaction eventually succeeded in demolishing the study in the eyes of the majority of the public and of specialists alike. The demolition was greatly helped by a factor that had initially bolstered the credibility of the study: the world oil crisis of the 1970s.

The crisis had peaked in 1979 but, in the years that followed, oil started flowing abundantly from the North Sea and from Saudi Arabia. With oil prices plummeting, it seemed to many that the crisis had been nothing but a scam: the failed attempt of a group of fanatical sheikhs to dominate the world using oil as a weapon. Oil, it seemed, was, and had always been, plentiful, and was destined to remain so forever. With the collapse of the Soviet Union and the “New Economy” appearing, all worries seemed to be over. History had ended, and all we needed to do was relax and enjoy the fruits that our high technology would provide for us.

At this point, a perverse effect started to act on people’s minds. In the late 1980s, all that was remembered of the LTG book, published almost two decades before, was that it had predicted some kind of catastrophe at some moment in the future. If the world oil crisis had been that catastrophe, as it had seemed to many, then the fact that the crisis was over amounted to a refutation of the prediction. This reasoning had a major effect on people’s perception of the LTG study.

The change in attitudes was gradual and spanned a number of years; however, we can locate a specific date and author for the actual turning point, the switch that changed LTG from a respectable, if debatable, study into everybody’s laughing stock. It happened in 1989, when Ronald Bailey, science editor of Forbes magazine, published a sneering attack (Bailey 1989) against Jay Forrester, the father of system dynamics. The attack was also directed against the LTG book, which Bailey said was “as wrong-headed as it is possible to be”. To prove his point, Bailey revived an observation that had already been made in 1972 by a group of economists in the New York Times (Passel 1972). Bailey said that:

“Limits to Growth” predicted that at 1972 rates of growth the world would run out of gold by 1981, mercury by 1985, tin by 1987, zinc by 1990, petroleum by 1992, copper, lead and natural gas by 1993.

In 1993 Bailey reiterated his accusations in the book titled “Ecoscam.” This time, he could state that none of the predictions of the 1972 LTG study had turned out to be correct.

Of course, Bailey’s accusations are just plain wrong. What he had done was extract a fragment of the LTG text and criticize it out of context. In table 4 of the second chapter of the book, he had found a column of data (column 2) giving the duration, expressed in years, of some mineral resources. He presented these data as the only “predictions” the study had made and based his criticism on that, totally ignoring the rest of the book.

Reducing a book of more than a hundred pages to a few numbers is not the only fault of Bailey’s criticism. The fact is that none of the numbers he selected was a prediction, and nowhere in the book was it stated that these numbers were to be read as such. Table 4 was there only to illustrate the effect of hypothetical continued exponential growth on the exploitation of mineral resources. Even without reading the whole book, the text of chapter 2 clearly stated that continued exponential growth was not to be expected. The rest of the book, then, showed various scenarios of economic collapse, none of which took place before the first decades of the 21st century.

It would have taken little effort to debunk Bailey’s claims. But it seemed that, despite the millions of copies sold, all the LTG books had ended up in the garbage bin. Or perhaps browsing one’s shelves was considered too much effort at a moment when, with the new economy starting to run, there were better things to do. Whatever the case, Bailey’s criticism caught on, and it started behaving with all the characteristics of what we today call “urban legends”. We all know how persistent urban legends can be, no matter how silly they are. At the time of Bailey’s article and book, the internet as we know it didn’t yet exist, but word of mouth and the press were sufficient to spread and multiply the legend of the “wrong predictions” of the LTG study.

Just to give an example, let’s see how Bailey’s text reached even the serious scientific literature. In 1992, William Nordhaus published a paper titled “Lethal Models”, meant as an answer to the second edition of LTG, also published in 1992. Despite the title, a little aggressive to say the least, it was a serious study. In it, Nordhaus criticized the 1992 LTG study, but also corrected some of the most glaring mistakes of his first study on the subject (Nordhaus 1973). However, the paper was accompanied by a series of texts by various authors grouped under the title “Comments and Discussion”. A better name for that section would have been “feeding frenzy”, as the criticism from this distinguished group of academic economists clearly went out of control. Among these texts we find one by Robert Stavins, an economist from Harvard University, where we can read that:

If we check today to see how the Limits I predictions have turned out, we learn that (according to their estimates) gold, silver, mercury, zinc, and lead should be thoroughly exhausted, with natural gas running out within the next eight years. Of course, this has not happened.

That, obviously, is taken straight from Bailey. Apparently, the excitement of a “Limits-bashing” session had led Stavins to forget that it is the duty of a serious scientist to check the reliability of the sources he or she cites. Unfortunately, with this paper the legend of the “wrong predictions” of LTG was enshrined even in a serious academic journal.

With the 1990s, and in particular with the development of the internet, the dam gave way and a true flood of criticism swamped LTG and its authors. One after another, scientists, journalists, and whoever else felt entitled to discuss the subject started repeating the same line over and over: the LTG study had predicted a catastrophe that didn’t take place, and therefore the whole idea was wrong.

After a while, the concept of the “wrong predictions” became so widespread that it was no longer necessary to state in detail what those wrong predictions were. At some point, it became politically incorrect even to suggest that LTG might have been, after all, not as wrong as some people thought. The criticism could also turn aggressive: I can cite at least one internet page stating that the authors of the LTG book should be killed, cut to pieces, and their organs sent to organ banks. Hopefully, that was meant as a joke (perhaps). Today, we can use Google to find Bailey’s legend repeated on the internet literally thousands of times in various forms, with minimal variations. In hundreds of cases, it is exactly the same, cut and pasted as is; in others, it is only slightly modified.

At this point, we may ask ourselves whether this wave of slander arose by itself, as the result of the normal mechanism of human legends, or whether it was somehow masterminded by someone, the result of what we nowadays call “viral marketing”. Can we think of a conspiracy organized against the LTG group, or against their sponsors, the Club of Rome?

The question is not unreasonable, since the LTG authors were accused in all seriousness, by ostensibly respectable researchers, of being themselves the acting branch of an evil conspiracy organized by the oil multinationals in order to enslave most of humankind and create “a kind of fanatical dictatorship” (Golub and Townsend, 1977). Could it be that the LTG group were victims, rather than perpetrators, of a conspiracy?

On this point, we can draw an analogy with the case of Rachel Carson, well known for her 1962 book “Silent Spring”, in which she criticized the overuse of DDT and other pesticides. Carson’s book, too, was strongly criticized and demonized. Kimm Groshong has reviewed the story, and in her 2002 study she tells us that:

The minutes from a meeting of the Manufacturing Chemists’ Association, Inc. on May 8, 1962, demonstrate this curious stance. Discussing the matter of what was printed in Carson’s serialization in the New Yorker, the official notes read: “The Association has the matter under serious consideration, and a meeting of the Public Relations Committee has been scheduled on August 10 to discuss measures which should be taken to bring the matter back to proper perspective in the eyes of the public.”

Whether we can call that a “conspiracy” is open to discussion, but clearly there was an organized effort on the part of the chemical industry against Rachel Carson’s ideas. By analogy, we could imagine that, in some smoke-filled room, representatives of the world’s industry had gathered to decide what measures to take against the LTG study in order to “bring the matter back to proper perspective in the eyes of the public”.

We can’t rule out that something like that took place, but it seems unlikely. Surely, think tanks and political groups financed studies that were likely to arrive at conclusions differing from those of LTG. But the demolition of the LTG ideas seems to have been mainly a spontaneous process, probably helped, but not directly caused, by economic interests. The 1989 article by Ronald Bailey was no more than a catalyst for something that, most likely, would have taken place anyway. It was the result of the tendency of our minds to believe what we want to believe and to disbelieve what we don’t want to believe.

Now, in the early years of the 21st century, the general attitude towards LTG seems to be changing again. The war, after all, is won by those who win the last battle, and the LTG ideas are becoming popular again. One of the first cases of reappraisal was that of Matthew Simmons (2000), an expert on crude oil resources. The “peak oil movement” seems to have been instrumental in bringing the LTG study back to attention. Indeed, oil depletion can be seen as a subset of the world model used in the study (Bardi 2008).

Climate studies have also brought the limits of resources back to attention; in this case understood as the limited capability of the atmosphere to absorb the products of human activities. In this field, the LTG study can be seen as having taken the right approach from the beginning, modeling for the first time the interaction of the environment with the human industrial and agricultural system.

But it is not at all obvious that a view of the world that takes into account the finite amount of resources is going to become prevalent, or even just respectable. Consider that, in the 1980s and 1990s, a decade of lull in oil prices was enough to convince everyone that all worries about resource depletion were akin to the substance that male bovines produce from their rear end. Now, imagine that for some reason the world’s average temperatures were to stabilize, or even go slightly down, for some years. Or imagine that oil prices were to stabilize or fall for some years. That wouldn’t change anything about the concepts of global warming and peak oil, which both deal with long-term changes. But it would be sufficient to unleash a smear wave similar to the one that engulfed LTG, and it could easily do the same damage to the efforts against global warming and oil depletion.

Prophets of doom, nowadays, are not stoned to death; at least, not usually. Demolishing ideas we don’t like is done in a rather subtler manner. The success of the smear campaign against the LTG ideas shows the power of propaganda and urban legends in shaping the public perception of the world, exploiting our innate tendency to reject bad news. Because of that tendency, the world chose to ignore the warning of impending collapse that came from the LTG study. In so doing, we have lost more than 30 years. Now there are signs that we may be starting to heed the warning, but it may be too late, and we may still be doing too little. Cassandra’s curse may still be upon us.

References

Bailey, Ronald 1989, “Dr. Doom” Forbes, Oct 16, p. 45

Bardi, U. 2008, “Peak oil and the Limits to Growth: two parallel stories”, The Oil Drum. http://europe.theoildrum.com/node/3550

Cole H.S.D., Freeman C., Jahoda M., Pavitt K.L.R., 1973, “Models of Doom” Universe Books, New York

Golub R., Townsend J., 1977, “Malthus, Multinationals and the Club of Rome” vol 7, p 201-222

Groshong, K. 2002, “The Noisy Response to Silent Spring: Placing Rachel Carson’s Work in Context”, Pomona College, Science, Technology, and Society Department Senior Thesis http://www.sts.pomona.edu/ThesisSTS.pdf

Nebbia, G. 1997, Futuribili, New Series, Gorizia (Italy) 4(3) 149-82

Nordhaus W., 1973, “World Dynamics: Measurement Without Data”, The Economic Journal n. 332.

Nordhaus W. D., 1992, “Lethal Models” Brookings Papers on Economic Activity 2, 1

Passel, P., Roberts, M., Ross L., 1972, New York Times, April 2

Simmons, M., 2000, “Revisiting The Limits to Growth: Could The Club of Rome Have Been Correct, After All?” http://www.simmonsco-intl.com/files/172.pdf

Andris Piebalgs’ Blog
Sunday, 2 Mar, 2008 – 21:01 | No Comment


Andris Piebalgs is the European Energy Commissioner with responsibility for shaping European Union (EU) energy policy. These policies may then be adopted by the European Parliament and will effectively shape Europe’s energy future.

Mr Piebalgs has an informative web site where he has newly installed a blog inviting comments on EU energy policy.

I would like to invite all my fellow bloggers and all citizens to contribute your ideas.

Andris, I would like to thank you for providing us bloggers with this wonderful opportunity to relay our ideas and opinions directly into the heart of the European Parliament. But beware, not all ideas and opinions are born equal.

There’s more under the fold…..

[break]

I have left a lengthy comment trying to emphasise the importance of energy efficiency:

Andris Piebalgs said:

“I would like 2008 to be the European year of Energy Efficiency. I’m proposing to table measures to increase energy efficiency in our buildings, in our energy devices, in the way we consume energy. What are your ideas?”

To which I replied:

I agree wholeheartedly with this, but need to draw attention to one glaring omission. The most important energy efficiency measure to consider is the efficiency of our energy gathering / energy production systems. This must lie at the very heart of EU energy policy, IMHO. Once this idea is taken on board, we will be on the road to our salvation.

The policy page says this:

Our sustainable future largely depends on increased use of renewable energies. The European Commission has proposed and the European Council has endorsed an overall binding 20% renewable energy target and a binding minimum target of 10% for transport biofuels for the EU by 2020. That means that in 2020 one fifth of the energy and one tenth of all transport fuels consumed in the EU will have to come from renewable energy sources.

My immediate reaction to this is one of unreserved endorsement, combined with disbelief at the biofuels targets. Until a way is found to grow temperate-latitude biofuels with an EROEI over 7 that do not threaten our food supplies, my opinion is that further development of biofuels should be abandoned. Internal combustion engines are at best 40% efficient. Thus, taking bioethanol with an EROEI of 1.5 and burning it in this way is tantamount to simply burning piles of food for no beneficial reason.
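The arithmetic behind this objection can be made explicit with the figures quoted above (a sketch, not a full life-cycle analysis):

```python
# With an EROEI of 1.5, producing 1.5 J of ethanol consumes 1.0 J of
# inputs, so only a third of the fuel's gross energy is a net gain.
eroei = 1.5
net_fraction = (eroei - 1) / eroei    # ~0.33 of gross energy is "new"

# Burn the fuel in an engine that is at best 40% efficient:
engine_eff = 0.40

# Useful work delivered per joule invested upstream:
work_per_invested_joule = eroei * engine_eff  # 1.5 J of fuel * 0.40 = 0.6 J
print(net_fraction, work_per_invested_joule)
```

In other words, a joule invested in low-EROEI ethanol returns only about 0.6 J of work at the wheels, which is the sense in which producing and burning such fuel gains us nothing.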

This also caught my eye:

Technology will play a central role in achieving the targets of the new Energy Policy for Europe. For this reason the Commission will annually invest approximately €1 billion between 2007 and 2013 in energy technology research and innovation. Technology must help to lower the costs of renewable energy, increase the efficient use of energy and ensure that European industry is at the global forefront. The Commission will therefore prepare the first European Strategic Energy Technology Plan in 2007.

That is some €7 billion in total. Let us hope the money is spent wisely. I would be inclined to replace “lower the costs of renewable energy” with “improve and prioritise the efficiency of renewable energies”, and then we will be on the right track.

And so, if you were given an opportunity to advise the EU Energy Commissioner, what would you say? Post comments for discussion here, or visit Andris Piebalgs’ blog to tell him directly what you think. Remember, this will be a rolling debate over many months that may hopefully culminate in the building of a trans-European HVDC grid and the electrification of all our transportation.