International Relations

Licensing Information

This text was adapted by OpenCourseWare under an Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) licence.

Chapter 1: The Making of the Modern World

  • The rise of the sovereign state
  • The Westphalian system
  • An inter-national system
  • The Europeans and the rest of the world
  • Conclusion

Chapter 2: Diplomacy

  • What is diplomacy?
  • Regulating nuclear weapons
  • To the brink and back
  • The Non-Proliferation Treaty
  • The US and Iran
  • The Iran hostage crisis
  • Nuclear Iran
  • Conclusion

Chapter 3: One World, Many Actors

  • Levels of analysis
  • How the level of analysis determines our findings
  • Levels of analysis and the changing ambitions of a discipline
  • IR as arena or process?
  • Beyond the state
  • IR and you

Chapter 4: International Relations Theory

  • Traditional theories
  • The middle ground
  • Critical theories
  • Theory in practice: examining the United Nations
  • Conclusion

Chapter 5: International Law

  • What law is international law?
  • The contents of international law
  • From ‘no world government’ to global governance
  • The functioning of international law
  • Conclusion

Chapter 6: International Organisations

  • International governmental organisations
  • International non-governmental organisations and hybrid international organisations
  • How international organisations shape our world
  • Conclusion

Chapter 7: Global Civil Society

  • Conditions for transnational activism
  • Global civil society as a response to transnational exclusion
  • Values promotion and creating change
  • Contested legitimacy
  • The case of the moratorium on the death penalty
  • Conclusion

Chapter 8: Global Political Economy

  • Liberal approaches
  • Individual actors
  • The state and the multinational corporation
  • Towards global economic governance?
  • Conclusion

Chapter 9: Religion and Culture

  • Elements of religion
  • Elements of culture
  • Religion and culture: difference and similarity
  • Can we all live together?
  • Conclusion

Chapter 10: Global Poverty and Wealth

  • Defining poverty
  • Measuring and reducing poverty
  • Globalisation and the wealth–poverty dynamic
  • Globalisation and neoliberalism
  • Conclusion

Chapter 11: Protecting People

  • Key positions
  • Emerging norms of human protection
  • Problems and challenges
  • Conclusion

Chapter 12: Connectivity, Communications and Technology

  • The internet
  • Digital commerce
  • Digital communications
  • Reach
  • Affordability
  • Reliance
  • Control
  • Conclusion

Chapter 13: Voices of The People

  • Change in a globalising world
  • ‘Colour’ and ‘umbrella’ revolutions
  • The Occupy movement
  • The Arab Spring
  • Conclusion

Chapter 14: Transnational Terrorism

  • What is transnational terrorism?
  • Motivation and goals
  • Activities
  • Organisation and resources
  • Countering transnational terrorism
  • Conclusion

Chapter 15: The Environment

  • The relationship between international relations and environmental problems
  • Common pool resource theory
  • The global environment as a global commons
  • Global rights and domestic environmental politics and policy
  • Do we need a global environmental organisation?
  • Conclusion

Chapter 16: Feeding the World

  • The bottom-up approach
  • Sudden food shortages and the disenfranchised citizen
  • Chronic hunger and the civic participant
  • Adulterated milk and the protective parent
  • Childhood obesity and the bad mother
  • Low wages and the deserving worker
  • Land dispossession and the traditional peasant
  • Conclusion

Chapter 17: Managing Global Security Beyond ‘Pax Americana’

  • From isolation to global superpower
  • On global watch
  • A world full of troubles
  • A world full of free riders?
  • Finding an alternative world order
  • Conclusion

Chapter 18: Crossings and Candles

  • The four-minute mile
  • Servant of empire
  • World-making
  • Industrial IR
  • Talk, text, technology
  • Conclusion

 

Chapter 18: Crossings and Candles

‘It is better to light a candle than curse the darkness’
W. L. Watkinson (1838–1925)

An old lesson teaches that endings are more difficult to write than beginnings. This may be so, but I have found it difficult even to begin writing about the world International Relations (IR) makes without reflecting on a near-forty-year career in both the theory and practice of IR. This is because my intellectual engagement in IR is indivisible from who I am. To make the same point in a slightly more elevated mode, although trained in the tradition that a scholar’s gaze is objective, my academic pilgrimage has been one of continuous crossings between the personal, the political and the professional. My early professional life was conducted during a particularly nasty period of apartheid in South Africa. Not only was the white minority government cracking down on all forms of political dissent, it was also wedded to a fierce anti-communism. In these circumstances it was difficult to exercise academic objectivity when it came to thinking about the world. Those years taught me a valuable lesson in life and learning: to believe that there is a totally objective or value-free view in IR is to call up the old Russian saying that ‘he lied like an eyewitness!’ We all come to understand the world through our own experiences. Because of this, even the most objective person has predetermined understandings about the world.

A standard dictionary definition of international relations runs that the term ‘is used to identify all interactions between state-based actors across state boundaries’ (Evans and Newnham 1998, 274). This is certainly suggestive of the scholarly field of IR, but it is unhelpful in explaining the international relationships that fall between the cracks of the discipline’s many boundaries, or the personal anxiety and fear that surround these issues. After all, at the height of the Cold War there was real fear that the entire planet would be destroyed by nuclear warfare. In these circumstances, it was difficult not to be anxious about the future or fearful for one’s family. So we ought to require, perhaps, that a definition does something more than simply demarcate boundaries. A more reflective gaze points to what it is that we – whether prospective student or emeritus professor – actually do when we ‘do’ academic IR and why it matters to us.

The four-minute mile

To understand why it matters to me, I will begin with a story of a crossing – a very recent one – between my colonial boyhood and my late-middle-aged self. This particular one took place not in South Africa, the country in which I was born and of which I am a citizen, but in England.

To explain why the crossing between past and present matters to my own understanding of IR, some personal background is required. Growing up in colonial South Africa, I lived in a home littered with the culture of England – a country that my South African-born mother never visited until she was fifty. In addition, the boarding school that I attended was loosely modelled on the English public school tradition. So, we were encouraged to participate in the forms of organised sport that were England’s ‘gift’ to the world. Understandably, then, my earliest thinking about what made the international was set by the cultural authority of England and the political sweep of the British Empire. Given this, the story of Roger Bannister’s sub-four-minute mile had a particular appeal for my young self. To explain: the measured mile became an important test in competitive athletics in the early 1950s. It was long believed that no person could run a mile in under four minutes. But, in the aftermath of the Second World War, when physical training and nutrition techniques improved along with the instruments for timing, the four-minute mile came closer and closer to being conquered. Indeed, breaking the barrier became a milestone competitive goal for both individual athletes and the countries they represented.

My initial fascination with the four-minute mile was ignited by an edition of the Eagle Sports Album, which had been sent to the school library from London. In its pages, much was made of the importance of Bannister’s feat for Britain and Britons like my family, who were located in distant parts of the world. The drama of the event whetted a life-long interest in athletics. Finally, while on a trip to Oxford in October 2015, I visited the field on the Iffley Road where Bannister ran the famous measured mile. Like many a pilgrimage, the visit was exciting, elating and enlightening. As I stood on the ‘Roger Bannister running track’ – as the field is now called – I looked for the church flagpole that Bannister had spotted seconds before his famous run. When a young man carrying spiked running shoes walked by, I remembered, if only for a fleeting moment, the thrill of competitive running. But more important than the rush was the slow realisation that what had happened on that famous day offered lessons in how I had first come to know and understand the world of IR.

Until the visit, it never occurred to me that what had taken place on the day of the event was a quintessential moment of modernity – the conquering of space by time. In IR, of course, the control of territory through the instruments and techniques of administration is the very essence of the discipline. So, the idea of the international has no meaning unless territory is under sovereign control. As a result, bringing ungoverned places into the idea of the international is the very first order of business in international relations. The notion of sovereignty, which is the enabling force of IR, follows from this demarcation of space, opening the way for the exercise of control along a boundary line between ‘the international’ and ‘the domestic’. Technology, in the form of maps and their making, helped to make such boundaries ‘permanent’ in the minds of rulers – especially colonial ones (see Branch 2014).

Strictly speaking, without boundaries there can be no IR. But the boundary drawn by the instruments of modernity is not the tightly patrolled frontier with its technology of control – passports, visas, immigration documents and the like. It is a liminal space where inclusion and exclusion are negotiated continuously. So, there were – as there remain – forms of interaction between groups who have resisted incorporation into the command and control that orthodox IR insists is the gift of statehood. This betwixt-and-between space has been a site of great tragedy, as the migrant crisis in Europe that began in 2015 shows. In many places, outside of the authoritative gaze of modern media, frontiers were killing fields. European colonisation, which drew the furthest corners of the planet into a single political whole under the banner of civilisation and Christianity, was extremely violent. If killing was one dimension of this, another was the disruption to the ways of living of millions upon millions. This violent disruption in the lives of people continued into the 1960s as the idea of the international spread across the world.

One example was a 1965 agreement under which Britain detached an archipelago in the Indian Ocean – the islands known collectively as the Chagos Islands – from colonial Mauritius, later leasing the largest of them, Diego Garcia, to the United States for use as a military base. The residents of these islands were forcibly removed. In the past fifty years, the islanders and their descendants have made numerous unsuccessful legal attempts to overturn this decision. Generally speaking, tragedies like these – which occur at the margins of the world – have been ignored in IR, although anthropologists, historians and international lawyers have explored them.

The second issue that occurred to me was the power of who gets to pronounce on these matters. In literature – and increasingly in social science – this is the issue of ‘voice’: who gets to speak, how they get to speak and why this happens. At the policy end of IR, alas, voice is seldom considered a priority, notwithstanding the path-breaking insights that feminists have brought to the discipline. They have exposed the multiple ways in which women experience the international differently to men, and how women are silenced in the story of the international despite the significant roles they have played, and continue to play, in its creation.

Two signs on Iffley Road declare Bannister’s triumph. The first, which is mounted on a stone gatepost, is informational. It reads, ‘Here at the Iffley Road track the first sub-four-minute mile was run on 6th May 1954 by ROGER BANNISTER. Oxford University.’ The second is positioned above a wooden fence facing Iffley Road. Under the crest of Oxford University, it reads, ‘Here, on 6 May 1954, Roger Bannister set a new World Mile record of 3 minutes 59.4 seconds. The first Mile ever run under 4 minutes.’ If the first sign informs, the second proclaims Bannister’s achievement as truth. Here, in the historical conquest of space by time, there is no room for ambiguity.

Let us be clear about several things. Of course, the Iffley Road field was the site of the first ‘timed’, ‘authenticated’ or ‘measured’ mile run under four minutes. But – and this is why critical questioning is important in IR, as it is in all forms of knowledge – it seems unlikely that no one else, anywhere, across human history had ever run this distance in under four minutes. Indeed, medical science today suggests that humans with particular kinds of physiological traits are able to run faster over distance than those without them. For the purpose of understanding my appreciation of IR, this signage – its declaration and its claim to authority – is rooted in a white, Western, male-dominated world. This is the world into which I was born and raised. Outside of this world, nothing is deemed worthy of recognition. It confirms that the late-imperial gaze of the early 1950s, when Bannister ran his famous mile, had little understanding of, or interest in, the non-West.

It seems obvious that prejudices like these need to be challenged, but this is difficult because mainstream IR has elevated its denial of the non-Western world to an art form. For many, the business of IR remains mortgaged to the commonsense understandings of race, class and gender that marked the early decades of the twentieth century when IR emerged as a formal academic discipline. As a result, in many corners of the world, IR is called a ‘mutant’ discipline (Vale 2016a). This is because IR seems to have no conceptual capacity – no grammar or vocabulary, as social theorists might say – to explain the everyday lives of people who live beyond or beneath sovereign borders. And, because it has no adequate category to include them, IR fails to understand them.

Servant of empire

There is an obvious link between the claims of the signs on Iffley Road and the way the voice of authority is used to preserve and sustain social orders. In the Iffley Road case, the claims to authority and the making of history aimed to position British authority in a quickly changing world. After the Second World War, the United Kingdom scrambled to reassert its global position in the face of the rising post-war profile of the United States. Roger Bannister’s achievement, backed by the authority of one of the world’s great universities, Oxford, offered one way to do so. At the time, the four-minute mile was linked to another attempt to reposition the United Kingdom internationally – the summiting of the world’s highest mountain, Everest, by a British-led expedition, which had taken place almost exactly a year before the events on the Iffley Road track.

The dilemma that the British faced in the world was best captured by the former US Secretary of State Dean Acheson, who famously pointed out that ‘Great Britain … [has] … lost an empire and has not yet found a role’ (1962). Although the United Kingdom is no longer an imperial power, its hold on the imagination of the world – and on how the world is organised and studied through IR – continues via its culture and language. It appears, however, in some quite perverse ways. This outcome was foretold in the late 1960s by Richard Turnbull, the governor of the colony of Aden (now part of Yemen). Turnbull informed a future British cabinet minister, Denis Healey, ‘that when the British Empire finally sank beneath the waves of history it would leave behind only two monuments: one was the game of Football, the other was the expression, “Fuck Off”’ (Healey 1989, 283). Though a vulgar phrase like this is seldom heard in IR, British cultural imperialism lingers in the discipline, which explains why English is its tongue. In no small part this is because the language of global culture is increasingly English – a fact readily attributed to the global reach not of the United Kingdom but of the United States. This suggests another relationship between IR and modernity. The third instrument of modernity, after time and space, is language. Like the other two, the English language has set the borderlines for inclusion and exclusion, both in the world and in its study through IR.

The place of language and culture in fostering international relationships is explained by the idea of soft power (Nye 1990). This concept helpfully drew the issue of culture towards the centre of IR but was silent on the dimension of language. This is because, as we have already noted, English has been proclaimed a ‘global language’ and is therefore assumed to be neutral in its view of the ways of the world. But no language is neutral. Two further points suggest the limitations of one language holding a monopoly in IR – and, indeed, in the other social sciences. The first draws upon the thinking of the Austrian philosopher Ludwig Wittgenstein – who pointed to the conceptual limitations of language – and is caught in his famous phrase, ‘the limits of my language mean the limits of my world’. So, however commanding language is as a tool for accessing the social world, its vocabulary sets limits on our understanding. Second, if English remains the language of IR, the discipline will not only be the domain of a global elite but will continue its long history of serving and servicing insiders. Those who have no knowledge of English are excluded from IR, or can access the discipline only by developing a professional competence in the language. This is plainly discriminatory. There is also the challenge that the English language cannot grasp concepts that lie outside its vocabulary. For instance, the Sanskrit word ‘dharma’ is translated as ‘religion’, but dharma in the Hindu cosmology includes a range of practices and conceptions of rights, duties, law and so on, which are not divinely ordained, as in Christianity. Other important terms in the vocabulary of IR – such as ‘state’, ‘civilisation’ and ‘order’ – are likewise sometimes lost in translation.

World-making

One of the great disciplinary shibboleths is that IR is to be celebrated because it is a neutral instrument of restoration – IR does not so much ‘make’ the world as ‘restore’ it (Kissinger 1957). According to this logic, the discipline provides helpful tools – and, sometimes, a hopeful heart – so that a world devastated by war can be restored by the discipline’s science. But here too there is a need for a contrarian view. Largely absent from this optimism are the interlinked questions: who has the right to remake the world, and whose interests will be served by any remaking? These questions would not have troubled those responsible for making – or remaking – the international community on three previous occasions: at the end of the South African War (1899–1902), at the end of the First World War (1914–1918) and at the end of the Second World War (1939–1945). Certainly, each of these moments presented as a time of despair interlaced with feelings of hope for what might come; each was marked by a particular configuration of politics, both local and global; and each was held captive by the vocabulary of the moment. Let’s consider each event in turn.

The South African War (also known as the Second Boer War) was fought between the United Kingdom and the Afrikaners, peoples of European descent settled on African soil. The war was possible because the Westphalian state – and the diplomatic routines developing around it – had migrated from its European heartland to Africa. It was the culmination of many contestations over the positioning of an alien social form, the modern state, on a new continent. As recent work has shown, the making of the world after the South African War was concerned with reorganising the British Empire, which was then the dominant form of international organisation. The idea of shifting understandings of what constituted sovereign identity away from an imperial setting towards a species of ‘inter-nation’ exchange, primarily between Britain and its four settler-ruled vassals – Australia, Canada, New Zealand and South Africa – had gained salience in the years following the First World War. If the three other dominions showed that the local and the international could be seamlessly realigned, South Africa – with its diverse peoples – was a harbinger of the messy world to come. Hence, for the theoreticians of empire, the reorganisation of the colonies in southern Africa into the single state of South Africa provided a model for the dismembering of empire. Thus, the chosen path was the idea of an ‘organic union’, a system that gestured towards the importance of sovereignty within the semblance of an imperial brotherhood – in modern terms, a particular strain of multilateralism.

The later incorporation of white-ruled India into this organisation would end in the British Commonwealth. Out of this, in the 1930s, grew the idea of a white-dominated ‘World Commonwealth’, sometimes called a ‘World State’ (Curtis 1938). The thought crime – there is no other phrase for it – in this world-making was that all these imaginings of the international excluded other racial groups except in the sense of ‘trusteeship’. After the First World War, this status was awarded to states that could be ‘trusted’ to control foreign spaces in the interests of those who were deemed to be lower down the Darwinian ladder (Curtis 1918, 13). The legacy of this move remains the great unexplored story of IR as an academic discipline, because the discipline continues to suffer from the arrogance of defining the international through the optic provided by wealth, race and gender.

In the lore of IR, the restoration of the world after the First World War is sacred ground. The discipline’s celebrated tale is of how the international, codified as science, would build a better world. The discipline’s institutionalisation began with the founding of an academic chair, named after Woodrow Wilson, America’s twenty-eighth president, at what is now Aberystwyth University in Wales. As Ken Booth (1991, 527–8) has pointed out, ‘when David Davies founded the Department for International Politics at Aberystwyth in 1919, he became the midwife for the subject everywhere.’ The genuflection to the United States suggests that the establishment of the discipline was in recognition of America’s importance in ending the ‘war to end all wars’. Not only did Wilson help to deliver victory, he also offered the League of Nations as an instrument for securing a future of international peace. But this was not to be. In the 1930s, the League failed to prevent another war, and the idealism around which early IR had been founded was left in tatters. The failure of this resolve, both institutionally and theoretically, is well documented in the chronicles of IR.

The construction of a new world was sought mainly through the idea of embedded liberalism, which could marry free trade, strong government and multilateralism (Ruggie 1982). But an inconvenient truth remained: global apartheid was entrenching itself. Absent from the great councils of peace were the voices of those who were situated in the outer reaches of world-making and excluded by IR’s founding bargain. The truth was that sovereignty, and the passport it offered to statehood, was only available to those privileged by birth and by skin colour. The scientific task of understanding those who were excluded fell not to IR but to other academic disciplines, especially Applied Anthropology (on this, see Lamont 2014).

IR folklore holds that the international system is indebted to the triumph of American idealism. An end to American isolationism in the 1940s beckoned the world’s most powerful country towards a reincarnation of its ‘manifest destiny’ – rooted in the nineteenth-century belief that settlers were foreordained to spread across North America. It was a belief shot through with understandings of white superiority, as this 1847 quote from the Maryland Democrat William F. Giles suggests:

We must march from ocean to ocean. … We must march from Texas straight to the Pacific Ocean, and be bounded only by its roaring wave. … It is the destiny of the white race, it is the destiny of the Anglo-Saxon race. (Zinn 1980, 153)

The call now was towards making ‘the international’ as it had made the national – with technology, violence and self-belief. Hopes for this future were transmitted through the increased force of culture, especially American. The sense of ‘freedom’ that this sentiment conveyed was infectious, and it spread increasingly to all spaces – including colonised ones. In doing so, it fostered ‘a period of optimism’ throughout the world, as the Indian social theorist Ashis Nandy (2003, 1) put it. Interestingly, for all the celebration of the idea of freedom, the discourse suffered terrible amnesia: the story of the Haitian Revolution (1791–1804), the only successful slave revolution in modern history and a powerful example of black people making a state, conducting diplomacy and practising freedom, was excluded from the emerging narrative.

But American optimism and the future it promised arose in the very age when the conquest of nature by science promised to deliver much to the world. It is difficult today to overestimate how warmly ‘the endless frontier’ – as America’s chief scientist, Vannevar Bush (1945), called natural science – was received in the final years of the Second World War.

Demonstrably, the atom bomb, the quintessential product of science, had brought the war to an end – even though the surrender cry from Japan’s emperor foreshadowed different understandings of what science had delivered to the people of Japan and to the world. Speaking after the second bomb was dropped on Nagasaki, Emperor Hirohito surrendered with these words: ‘We have resolved to endure the unendurable and suffer what is insufferable.’

Conventional IR history has it that both politics and science – acting on their own and together – speeded the desire of peoples all over the world for liberation, thus ending formal colonialism. This is certainly nominally so, but the reach of this freedom was, once again, to be framed within the sovereign state. If freedom was one dimension of an American-inspired post-1945 world, it was complemented by a series of international bureaucracies that aimed to manage the new world in the making. These drew sovereign states – both newly independent and well established – towards the bureaucratic authority insisted upon by modernity, with its technical know-how and techniques of social control. The international community in the making was to be what anthropologists call an ‘administered community’ – both states and individuals would be controlled even as they celebrated their freedom.

So, the celebrated multilateral structures of post-1945 – the United Nations and the Bretton Woods family: the International Monetary Fund, the World Bank and the General Agreement on Tariffs and Trade – were controlling institutions, even if they were intermittently cloaked within a rights-based discourse. The archetype of this was the UN Security Council, where the power of veto was vested in five states – China, France, Russia, the United Kingdom and the United States. This ‘override power’, which aimed to control any threat to the interests of an already advantaged group, remains a symbol of an international structure that is fatally unequal and grossly unfair.

In academic IR, the reconstruction of the world after 1945 is the story of how the United States appropriated and adapted European ‘understandings’ of the international for the challenges it faced as ‘leader of the free world’. The evidence supports this explanation: at least 64 first-generation émigré scholars (mostly from Germany) taught political science and IR in the United States. More than half of them came from law, including figures such as Hans Kelsen, Hans Morgenthau, John Herz and Karl Deutsch, who would come to command IR. The ways of the world that they transmitted – culture, diplomacy, law – remained essentially white, Western and male. In disciplinary IR, the non-West was deliberately silenced by exorcising two of the most important issues – decolonisation and racism – from its theoretical concerns (Guilhot 2014). It was this legacy that led the late Stanley Hoffmann, who was born in Vienna, to declare that IR was ‘an American Social Science’ (1977).

The ghastly – but truly historical – advent of nuclear weapons raised a question that awakened ethical concerns within IR, the most important of which has already crossed our paths: could humankind destroy the planet? Yet the counterfactual question on this issue, the question that should have mattered but was never asked or answered, is: would the United States have atom-bombed a white Western country? At the centre of IR was – and remains – the ideology of white supremacy. This is undergirded by the understanding that only Europeans – and whites, to sharpen the point – live ‘within’ history: all others, as Ashis Nandy (2003, 83–109) has argued, live ‘outside’ of it.

If these three moments of reconstruction – the South African War, the Paris Peace Conference of 1919 which concluded the First World War, and the ending of the Second World War in 1945 – represented the remaking of the world, what about the ending of the Cold War? It is difficult not to conclude that the end of the Cold War brought continuity rather than the much-anticipated fundamental rethink of the nature and idea of the international. The moment was certainly marked by a new vocabulary, in which the word globalisation promised new horizons. However, globalisation quickly became a code for the celebration of neoliberal economics and a ‘thin’ form of democracy, a condition that Francis Fukuyama (1989) characterised as ‘the end of history’. In essence, Fukuyama argued that liberal democracy and capitalism had proved themselves superior to any other social system. This theory was seized upon by IR scholars who had, embarrassingly, failed to predict the end of the Cold War. For IR theorists, the bipolarity that had characterised the Cold War was a stable system for both superpowers, so they saw no reason for either power to seek to end it. What they did not envision was that an internal collapse of the Soviet economy, matched with the rising opposition of subjugated peoples in Eastern Europe, would break the Soviet system from within. This was just one of the reasons why, around the end of the Cold War, the critical turn in IR theory began and IR started to look beyond the state towards the individual.

However, not long after this embarrassment there was a return to triumphalism. A US president, George H. W. Bush, declared that the ‘West had won’ the Cold War – but even this was not enough. What lay ahead was a new challenge that one disciple of realist thought called a ‘clash of civilizations’ (Huntington 1993). Let me insert a personal story here. Just after the Berlin Wall came down in 1989 – the event that symbolised the beginning of the end of the Cold War – I was invited to participate in a high-level panel organised by one of the world’s big think tanks, the New York-based Council on Foreign Relations. My co-panellists included former members of successive American cabinets, a former director of the CIA and many academic luminaries from the IR community. Over the course of several meetings, it became clear to me that Islam was being constructed as a threat to America’s ‘global interests’ and that it would be targeted. This thinking created an intellectual swamp that gave rise to successive wars in Iraq and Afghanistan and, dangerously for IR, a tendency to focus disproportionately on such ‘threats’. What this does to how the world is made remains to be seen.

Industrial IR

No academic development has had a greater impact on IR’s recent history than the rise of think tanks. This is a big claim, to be sure, so let me illustrate it with a story from my own country. In the post-apartheid years, the emergence of a think tank called the Institute for Security Studies (ISS) helped shift the mood of the immediate post-apartheid years from the high idealism of the Nelson Mandela presidency towards a security-centred society. This, in a country where some ten million children – over 54 per cent – live in poverty. Elsewhere, as others have shown (see Ahmad 2014), think tanks have played, and continue to play, a critical role in making the case in the United States for war against Islam, and in pushing the UK’s Blair government to enthusiastically support the invasion of Iraq in 2003 (on this, see Abelson 2014).

Rather than viewing think-tankers as neutral and disinterested parties in the making of IR, we must take them seriously. As the German-born critical thinker Hannah Arendt (1970, 6) put it in her book On Violence:

There are … few things that are more frightening than the steadily increasing prestige of scientifically minded brain trusters in the councils of government during the last decades. The trouble is not that they are cold-blooded enough to ‘think the unthinkable,’ but that they do not think.

In the economic-speak of our times, think-tankers are ‘norm-entrepreneurs’ – protagonists for one or another position on policy and its outcomes who, while claiming to provide objective analysis, are in fact complicit in pursuing particular political, economic and social agendas.

Invariably, think-tankers are well schooled in the repertoire of IR; they have mastered its vocabulary and are familiar with its disciplinary traditions. Using this, think-tankers are encouraged to promote the current policy fashion by drawing uncritically on the prevailing meta-narrative. During the Cold War, for instance, think tanks in the West promoted the ‘threat’ posed by the Soviet Union (and its allies) in much of their work, which was also embedded within different shades of realist thinking.

Early in my own pilgrimage I worked for one such think tank: the South African Institute of International Affairs (SAIIA), which nowadays calls itself the country’s ‘premier research institute on international issues’. It was never branded as such when I worked there – perhaps because I was one of only two academic professionals on the staff. The other was John Barratt, my boss, a former South African diplomat. He had not studied IR, but had read modern history at Oxford after taking a first degree – also in history – in South Africa. The watchwords for our work were ‘facts’ and ‘objectivity’ – to seek ‘truth’ in the way that practitioners in the natural sciences do. In this view of scholarship, knowledge was neutral, and the role of the SAIIA was to present as many opinions as possible on international affairs so that the public could make up their own minds. This was in the ‘non-political’ spirit of London’s Chatham House, on which the SAIIA was modelled.

Sustaining this position in the South Africa of the 1970s was bizarre. The apartheid government had cracked down on internal dissent, with the result that censorship was pervasive, even in universities. There was, for example, no access to the vigorous debates on the liberation of South Africa that were taking place amongst exiled groups. More seriously, the country’s black community had absolutely no voice in the management and affairs of the SAIIA: they did serve the tea, however. In the 1970s I often thought that the good and the great who gathered in the SAIIA’s classical-styled headquarters were of the view that those on the other side of apartheid’s cruel divide had no imaginary – or, indeed, experience – of the international.

John Barratt was often as frustrated by this state of affairs as was I, and we made several efforts – mostly unsuccessful – to cross the divide. What the corporate sponsors of the SAIIA would have made of these efforts is unknown. What I do know is that on many occasions I faced the raised eyebrows of the white liberals – and the not so liberal – who gathered, say, to deliberate on whether South Africa’s outreach to independent black states was compatible with the policy of apartheid, or the unquestioning fealty of the white state towards the West in the face of sanctions (Vale 1989).

We need to pause here and return to Hannah Arendt’s concerns: who stands to benefit from the work of think tanks? In the main, their funding is linked to the business sector. The assumption is that the work of think tanks – publications, public commentary, conferencing – reflects the interests of their sponsors and the status quo. Certainly, the conservative inclination of the SAIIA when I worked there was a reflection of the interests of South African business in the 1970s, as successive waves of critical scholars, including myself, have been keen to point out. This personal experience confirms four things. First, access to the discipline – certainly in South Africa, but elsewhere too – was a closed shop. IR was an elitist pursuit. Second, the conversations were limited by particular vocabularies. Certainly, they were not critical in the sense of asking deep questions and, in the press of the everyday, reflecting on what we were doing. Third, a particular meta-narrative – the Cold War – framed all the analysis. But mostly, and fourth, think tanks are what sociologists have called ‘total institutions’ – institutions with tight regimens, tight supervision and rules that ‘routinise’ professional behaviour. These observations were confirmed when, a few years later, I spent some time as a research associate in a more cosmopolitan think tank community at the International Institute for Strategic Studies (IISS) in London.

As the Cold War ended, the meta-narrative of IR shifted. Today, the almost pre-packaged understanding of the ‘advantages’ of liberal reform – often simply a code for economic austerity – is stock-in-trade for contemporary think tanks. While neoliberal economics as an instrument of social engineering, both domestic and international, has increasingly hovered over the discipline, security and geopolitics remain the staple diet of the policy end of IR. In fact, threading these together is not new. The most famous example (yet one notoriously overlooked in IR circles) is the Nixon administration’s intervention in Chile, where a coup against the democratically elected government took place in September 1973, almost at the mid-point of the United States’ two-decades-long direct involvement in that country. Driven by Cold War anti-communism, the United States was determined to keep the Marxist-inclined government of Salvador Allende in check. The successful right-wing military coup was a precursor to a policy of social control, which gathered force from 1975 onwards and was based on neoliberal economic policies. In its more recent incarnation, under the utopian guise of globalisation, there is a sense that a ‘neo-liberal corporate takeover … has asserted America’s centrality in the world’ (Buell 2000, 310).

Three further points on think tanks need to be aired. First, as the discipline has become a popular academic subject, more and more IR graduates have entered the workplace, and think tanks are significant places of employment. Indeed, it is possible to talk about IR as an academic ‘industry’ grounded in think tanks. This is linked to the second of my points: that there exists a triangular relationship between think tanks, their sponsors and the press or social media. Finally, the interaction of people trained in the same grammar and vocabulary often produces groupthink and a closed insider terminology. It becomes impossible to see beyond closed and often self-selecting groups – called ‘experts’ – who are predestined, almost, to repeat the same ideas to each other. Can any of these practices be conducive to sound policy outcomes? This is where the ‘critical turn’ in IR, which began in the early 1980s and spread during that decade to several of the discipline’s sub-fields, is especially important for understanding the future of IR and the world it makes. The arrival of critical theories opened up a space in which to legitimately question the theory and practice of the discipline’s inner sanctum. It certainly enabled me to be self-reflexive about my own thinking and to ask searching questions about the theory and practice of security in southern Africa (Vale 2003).

As in every discipline, and in every facet of life and knowing, sources of certainty have to be questioned continuously, and critical perspectives have freed the space for doing so in IR. The constant challenge in our professional lives – especially in IR – is to negotiate the space between which questions are intellectually interesting and which will truly make the world a better place.

Talk, text, technology

Technology matters in the world that IR makes – it always has and it always will. This is because technology helps us understand and explain the world, and also helps to shape it. So, the same kinds of technology that have been used to develop the drones killing people in the Middle East and elsewhere have also enabled the delivery of more effective health care in remote parts of the world. Today, technology seems – irrevocably, perhaps – to have changed how scholars and students access information and how it is processed and published in an acceptable and professional way. This is because technology is changing faster than our understandings of the world that IR is making.

Technology also constantly changes the very ‘stuff’ of IR. For example, the complex and still unresolved relationship between IR and the idea of globalisation may well be the result of IR’s failure to understand the fact that new technologies have eroded the discipline’s central tenets – those of sovereignty, order, power and the very idea of ‘the international’.

Technology may well have finally shattered any hope of a detached, or objective, search for truth that the academic discipline of IR once hoped to tap from the practices of the natural sciences. Can IR scholars pretend to be objective on an issue when technology (media, internet) regularly reminds us that in some distant place, bodies are piling up?

Notwithstanding IR’s undertaking to provide understanding and rationality, technology seems to have widened conceptual cracks at the social, political and economic levels. As I write these words, there seems no end to the erosion of this order and the headaches that will follow. Consider three technology-generated issues that immediately knock against IR’s busy windows. First, as viruses like Zika, Ebola and HIV/AIDS spread, the invariable question is whether technology can halt this. Second, packaging its ideological message in bundles fashioned by technology, the Islamic State group continues to wreak havoc and draw in supporters globally. Finally, the global monetary system is flummoxed by bitcoin – technology’s reimagining of what money is, and can be, at the global level.

Is one tradition of storytelling in IR – that of the state, sovereignty and an international system – at an end? In earlier times, the making of the international was slow and ponderous, as letters and directives travelled between metropole and periphery. Today, this is an instantaneous process – the international is being made and remade by bits, bytes and blogs. The discipline is challenged to respond to this new way of knowing – which makes the book in which this chapter appears, with its presentation in various formats and its open access, an investment in IR’s future.

Conclusion

I draw my reflections on the ‘doing’ of IR to a close by returning to the epigraph at the head of this chapter from W. L. Watkinson, an English Methodist minister. It is also the motto of Amnesty International. If the idea of ‘crossings’ in the title comes from my confession, made at the beginning, that the personal, the professional and the political have been interwoven in my approach to IR over four decades, the other image in the title encapsulates a belief that IR – especially in its critical mode – is a kind of candle that casts light in often very dark places.

There is a paradox which stalks the discipline of IR: as it speaks of peace, the principle of sovereignty, which is at the centre of its world view, looks out upon messy – and often very violent – social relationships. These pages have suggested that there are no uncontaminated places in the making and remaking of these social relationships; there is thus no space where IR can escape the hot breath of compromise, concession or conciliation. However, the task, which lies beyond the pages of this book, is to recognise that despite all that we are taught, this is still a largely unexplored world. It remains a place of infinite possibilities and a site of great hope.

Chapter 17: Managing Global Security Beyond ‘Pax Americana’

We often hear that we live in a world where power and wealth are increasingly decentralised. The world is indeed changing, in some cases rapidly, as prior chapters in this book have documented. Despite this, there has been one constant since the end of the Second World War: the United States of America (US) has been the dominant military and economic power in the world and the manager of global security. The phrase ‘Pax Americana’ can therefore be used to describe the post-1945 era without major war, overseen by the stabilising force and military might of the United States. IR calls actors that stand noticeably above others in military and economic terms ‘hegemons’. While there have certainly been regional hegemons in the past, there had never been a global hegemon in known history – until now.

Today, the bulk of the citizens of earth would surely be able to identify the sitting American president by name, or at least recognise their face. This cannot be said of any other leader. Many debates in International Relations revolve around the question of whether such a situation is desirable or sustainable. In order to address these debates, it is important to assess how dominant the United States actually is and whether its dominance is likely to continue. As we ponder this, we must also understand that a debate is underway, not just internationally but also within American society, over whether the United States should continue to play a global role. This chapter explores such questions in a direct and sometimes provocative way: the eventual answers, whatever they may be, will determine the next era of international relations. We should therefore not shy away from pondering the implications of a world beyond Pax Americana.

From isolation to global superpower

The Second World War was the hinge point for establishing American dominance. Prior to that war, the United States had focused on continental expansion, making sure its neighbours recognised its regional dominance and pre-empting the influence of European powers in the Americas. George Washington, the first American president, warned in his farewell address that the US should steer clear of ‘permanent alliances’. Another president, John Quincy Adams, said that America should not go abroad searching ‘for monsters to destroy’ and that its glory was in liberty, not dominion. The United States did, nevertheless, dabble in imperialism during the late nineteenth century, toppling a decaying Spanish empire to help liberate Cuba and acquiring Puerto Rico, Guam and the Philippines in the process. But, having won its own freedom from British colonial control after declaring independence in 1776, America had little desire to become a colonial power itself. Even involvement in the First World War could not shake the US out of its preferred isolationist shell. The United States entered the war late, brought its forces home quickly afterwards and, after the US Congress rejected membership of the League of Nations, refused to help enforce the peace its president had helped design.

The Second World War was truly global in scope and revolutionary in its impact. The United States was drawn into the conflict, again late, by German submarine warfare in the Atlantic and a surprise Japanese attack on its military facilities at Pearl Harbor in December 1941. When the war began in 1939 there were several powers contesting global leadership, but the United States was not among them. The United Kingdom and France had sizeable empires. Adolf Hitler was determined to create a new German ‘Reich’ (or empire) that would last at least 1,000 years. Imperial Japan was seeking dominance in Asia and had already occupied parts of China and all of Korea. Finally, the Soviet Union had proved that a communist revolution was possible, and prospects were good that other nations would follow suit and communism would spread globally. By the war’s end Germany and Japan were devastated, defeated countries, occupied by foreign powers. Among the victors, the United Kingdom and France were spent powers: their empires were fragmenting and their economies near-destroyed. The Soviet Union had suffered the most significant losses of all, primarily in repelling the German invasion. Despite winning the war, the allied powers had paid a high price for victory. In contrast, by 1945 the United States had shaken off the effects of the Great Depression – the global economic collapse of the 1930s – and was relatively untouched by the war. It had demonstrated its power by mobilising and equipping a military of over 16 million personnel. As the war ended it had military forces stationed across the globe and was the world’s dominant economic power.

The United States took several lessons from the Second World War, the most important of which was that it had to be involved in managing global security in order to protect its own security. It was too big and too powerful for others not to challenge, even if it had no interest in challenging them. Because international relations as a system is anarchical, with no ruler, powerful states tend to make other states feel insecure by default. Even if powerful states do not behave threateningly, there is a fear that they may do so in the future. This leads to competition and the risk of future conflict as states seek to maximise their security by attempting to increase their relative power. In the past this was typically done by acquiring territory, as described in chapter one. But in a post-war era characterised by decolonisation and the presence of nuclear weapons, security calculations were in flux. To monitor the situation, the United States chose to be involved globally, designing the international frameworks for commerce and governance at conferences it convened at Bretton Woods and San Francisco, both in America, and joining the United Nations, which was headquartered in New York City. Essentially, the Americans created a new system of international relations, both economic and political, and placed themselves in the driving seat. Although the bulk of its forces were demobilised at the war’s end, the United States maintained the network of bases it had built during the war and retained a substantial military presence in both Europe and Asia. At home, it created, via the National Security Act of 1947, the governmental framework for coordinating the development and exercise of global power. In short, the United States was now permanently constituted to be a different type of actor.

Having helped destroy fascism in the Second World War, the United States set itself the task of first containing and then undermining the two remaining rival systems of global order – colonialism and communism. The test came quickly, with the Soviet Union’s push to dominate Eastern Europe and its acquisition of nuclear weapons in 1949. Many American politicians feared that the Soviet Union could dominate all of Europe and Asia – an area with the industrial resources and military potential to match or even surpass the United States. When China turned communist in 1949 and other nations looked set to follow, these fears seemed to have a basis in reality. A series of confrontations and crises that we now call the Cold War became the new normal in international relations. The conflict was a two-power struggle between the United States and the Soviet Union spanning more than forty years. IR calls this a bipolar system, as two principal actors were responsible for shaping global affairs. In the end, with the Soviet Union’s internal collapse between 1989 and 1991, only one superpower was left standing: the United States. The question was, would bipolarity give way to unipolarity (the dominance of one power) or multipolarity (many centres of power)?

On global watch

Today, the American population of 325 million is the third largest in the world. Still, that total is less than five per cent of the world’s population and small by comparison with the billion-plus populations of China and India. Yet the United States accounts on its own for over 40 per cent of global military expenditure, exceeding that of the next ten nations combined. The amount it currently spends on defence per year is similar (adjusted for inflation) to its military spending during the Cold War, when it faced a direct military competitor. Perhaps more significant is the legacy effect, as the United States has been investing tens of billions of dollars per year in defence technology since the Second World War. That investment has built a capacity that gives it a peerless military advantage in nearly every aspect of warfare. As we enter a period known as the ‘Revolution in Military Affairs’, when drones and other types of advanced – and even autonomous – weaponry become the new norm, the United States has a significant head start.

The United States Armed Forces is the only military with the ability to carry out truly global operations. It has a worldwide network of nearly 700 bases and other military-related facilities that supports its overseas deployment of more than 200,000 military personnel. Command and control for these forces is provided by several redundant and protected communications, intelligence and surveillance systems. Orbiting above the earth are dozens of US military satellites. Constantly circling the skies above several of the earth’s trouble spots is an air armada of American military drones. Finally, roaming the world’s oceans are ten US aircraft carrier groups – perhaps the most illustrative statistic as no other state has more than two. This military is substantially bigger than is needed to defend the American homeland. The United States is a geographically advantaged nation with oceans on two of its sides and non-hostile states (Canada and Mexico) on the other two. It is a nation that is hard to invade because of those oceans and even harder to intimidate because of its scale and wealth. Although reachable by missiles, the United States maintains a formidable nuclear deterrent force that has global reach.

The US military is scaled to maintain what it describes as global stability – in other words, to temper regional conflicts via deterrence and engagement. But no one elected the United States to the position of global security manager; when the Cold War ended, no force stood in its way. It had the global presence, the alliance and aid relationships, and the spare military resources to intervene anywhere, to prevent conflicts from escalating and to provide assistance when famine or natural disasters struck. Some viewed this as a moral obligation, believing American leadership to be an indispensable force for good in the world. For others, the United States was acting more narrowly, using the absence of a rival to embed its position as the world’s dominant power and gain a long-term advantage over any future rivals.

A world full of troubles

The United States has been constantly engaged in military operations of one type or another since the end of the Cold War. The seizure of Kuwait by Saddam Hussein’s Iraq in 1990 is an early example. The United States led an international coalition to liberate Kuwait soon afterwards in what was known as the Gulf War. Unlike the second US-Iraq war 12 years later, the Gulf War of 1991 was authorised by the United Nations Security Council. Another mission for American forces just after the Cold War ended was the humanitarian effort in Somalia. Warring factions there had disrupted the distribution of food, causing widespread hunger and the potential for a major famine. Under a United Nations mandate, a US-led coalition sought to bring relief and stability to Somalia. Fighting among the factions soon spiralled out of control, however, and the aid mission collapsed as the United States and other nations withdrew their troops from the chaos to prevent any more of their personnel being killed or wounded. Somalia had become the classic failed state – a land and a people without a functioning government. The United States, chastened by the Somali experience, has since been hesitant to help in other such cases. It turned away from intervening in the 1994 genocide in Rwanda, as did other members of the international community. However, it has gradually returned to involvement in Africa by training and supporting the regional coalitions acting as peacekeepers in African Union and/or United Nations operations, especially those directed against militant Islamist terrorist groups like Boko Haram. Significant effort has also gone into fighting international piracy off the Horn of Africa and into humanitarian projects such as combating pandemics like Ebola and HIV/AIDS.

Elsewhere, the nations freed by the collapse of the Soviet Union face continuing problems as Russia seeks to reclaim lost territory and to protect the interests of ethnic Russian populations caught on what Moscow sees as the wrong side of new borders. Russia annexed Crimea from Ukraine in 2014 and has also intervened in parts of Georgia and Moldova, and Ukraine endures a Russian-supported rebellion in its disaffected eastern regions. Although the United States now rotates combat units through Northern and Eastern European nations, and is constructing a ballistic missile defence system on NATO’s eastern frontier, West Europeans have been content to remain mostly worried observers, concerned about Russian behaviour but also about their trade with Russia. There seems to be no strong appetite in Europe to rise to the Russian challenge in any way other than through economic sanctions and punitive diplomacy.

Closer to home, in Latin America, there are constant problems with poverty, drugs and corruption. Haiti, the region’s poorest country, has had US troops as frequent visitors – for instance, to help the government survive a coup attempt and to provide relief after a devastating earthquake. Colombia required substantial assistance to suppress a persistent insurgency, fed in part by narcotics traffic. Less visibly, the United States helps Mexico cope with wars among rival drug gangs that have cost thousands of lives and threaten the stability of the Mexican government. Several Central American nations suffer similarly. Across the Mexican border and through the Caribbean flows a flood of migrants seeking to escape poverty and crime by heading north into the United States.

More than six decades after the 1953 truce that ended the Korean War – one of the first conflicts of the Cold War – the United States still keeps nearly 30,000 troops in South Korea to protect it from North Korea. American forces also keep Japan separated from its neighbours, several of whom have territorial disputes with Japan and outstanding grievances tied to Japan’s behaviour prior to and during the Second World War. The most significant of these neighbours is China, whose expansive designs in the South China Sea appear to threaten the interests of many Southeast Asian states as well as the right of free passage for shipping through one of the most travelled international shipping routes. The US Navy has stepped up its patrols in the region and other elements of the US military, primarily the Marine Corps, have begun rotating units to Australia in what some have called the ‘Pivot’ – a US military rebalance towards Asia.

This quick contextual sweep across the globe does not yet touch on the central concern the United States has when it looks out at the world. Since 9/11, when it was attacked by Al-Qaeda, its main military preoccupation has been fighting transnational terrorism. This includes the 2001 invasion of Afghanistan, where the leaders of Al-Qaeda were being harboured by the Taliban regime. It also includes drone strikes and other raids in Pakistan, where some of the terrorist leadership had fled. Most notably, perhaps, it also includes the invasion of Iraq in 2003 to depose Saddam Hussein, ostensibly to eliminate his efforts to develop and stockpile weapons of mass destruction. Both the Afghan and Iraqi operations succeeded quickly in removing the offending regimes, but led to ongoing and costly counter-insurgency campaigns that have destabilised neighbouring countries. The so-called ‘Global War on Terror’ has ensured that the gaze of the United States remains cast widely, especially on those regions where terrorism is prevalent, such as the Middle East and North Africa. This extends beyond traditional military means into areas of intelligence and cyber warfare.

A world full of free riders?

The United States does not always act alone. Often it is in a coalition of one kind or another. Some of the coalitions are authorised by United Nations Security Council mandates such as those in Somalia and Haiti. Others are under NATO auspices, as in Bosnia, Kosovo and Libya. Others are the product of the recruitment of ‘coalitions of the willing’, such as those formed for the invasion of Iraq in 2003 when the United Nations would not approve the war. Coalitions are important because they add political legitimacy at home and abroad to interventions with a high risk of substantial casualties and long-term costs. The American public typically sees the participation of other nations as an endorsement of its own leaders’ wisdom in deciding to intervene. That being said, as Afghanistan and Iraq demonstrated in their initial phases, the United States is perfectly willing to act on its own when it feels there are serious threats to its security. This is also the case when there are complications or delays in gaining international approval and assistance. Acting alone is often referred to as ‘unilateralism’. Strong states such as the United States can be prone to acting unilaterally because they do not always feel bound by shared rules or norms. However, this can have consequences and it is more common for states to at least appeal to multilateral principles and practices so they do not incur the wrath of the international community. The issue with the United States is that, arguably, it has the power to withstand any such criticism.

American politicians complain occasionally about the burdens the United States carries, but not often and not with conviction. NATO was created during the Cold War to contain the westward spread of Soviet power. The principle of NATO is collective security: if one member is attacked, all others are treaty-bound to respond to the aggression. In the Cold War context, this was meant to deter any communist attack on Western Europe so that communism would not spread any further. The alliance has expanded greatly since the end of the Cold War, however, even absorbing former Soviet republics. NATO endures in the post-communist era because collective security is a positive thing for states, especially newly independent states that fear a Russian resurgence. But few of the newer or older members of NATO meet the alliance’s goal of allocating 2 per cent of Gross Domestic Product (GDP) to defence. Instead, they are safe in the knowledge that the United States, which invests nearly twice that share, will be there to do the heavy lifting when a crisis arises.

This raises the larger issue, which is that it appears to some in America that other rich states find excuses to do little for global security or even their own defence. Japan and Germany, the world’s third and fourth biggest economies, seem to prefer to remain on what is now a mostly voluntary parole for their Second World War crimes. Japan spends about 1 per cent of its GDP on defence. Germany does participate in some United Nations and NATO-sponsored operations, but largely avoids a combat role. Both nations are shielded from nuclear threats by a US deterrence policy that promises them protection from challenges by other nuclear powers. The United Kingdom and France, the fifth and sixth largest global economies, do contribute to global security somewhat in proportion to their wealth. Both, however, have found it hard to prioritise military spending as they pursue domestic austerity policies in the wake of the 2008 financial crisis. South Korea has an economy just outside the world’s top ten. It is at least 25 times richer than North Korea on a per capita basis and has double the North’s population, yet it leaves the task of defending itself primarily to the United States. South Korea rarely participates in coalitions to help others, and when it does, as in the case of Afghanistan, it sends non-combat troops. The Scandinavian countries, particularly Denmark and Sweden, are exceptions, but Spain, Italy and a half-dozen other developed countries seem to prefer to opt out of most of the hard work in international coalitions. Looking beyond Western nations and those with historic ties to the United States, China and India are big in many dimensions but both are absorbed with their own security interests. China has the world’s second-largest economy and India the ninth. Both are greatly expanding their military power, but both limit their participation in international peacekeeping efforts and global security issues. China’s recent focus has been on asserting itself as Asia’s dominant power, causing unease among neighbours who had grown accustomed to a more inward-looking China.

Finding an alternative world order

As the Cold War was ending, US president George H. W. Bush and Soviet Communist Party General Secretary Mikhail Gorbachev declared that a new world order was emerging, one that would be based upon cooperation between the two superpowers. But with the collapse of the Soviet Union, only one superpower remained to provide order. Filled with both goodwill and vast hubris, the United States set itself the unsustainable task of maintaining global security. It is unsustainable because such a world order is neither in America’s interest nor in the interests of the world at large. Although it is possible to concoct long causal chains that tie American safety or prosperity to the fate of failing states in Africa or ethnic conflict in the Balkans, most global problems are distant and of marginal importance to the United States. If anything, American involvement in these distant problems can be said to threaten American interests. Interventions often produce enemies, with some of those affected assuming that it is not altruistic motives that drive the United States but a desire to steal their assets or slander their religion. And there are real costs in blood and resources. Americans (and of course non-Americans) die in these distant fights, and domestic needs such as education and healthcare are neglected as vast sums of money are diverted to military operations.

Those challenged by the United States, including Russia, China and many in the Middle East, deny the legitimacy of its actions and see the United States as a neo-imperial power meddling in the affairs of others. Even America’s allies worry about the wisdom of its interventions, most especially the invasion of Iraq in 2003. People the world over concern themselves with who is going to be the next president of the United States, even though they cannot vote in its elections, because of the potential impact a presidential choice has on US foreign policy and its readiness to intervene in their states. Some Americans hope that the United States will come to its strategic senses and abandon the quest to manage global security (Gholz, Press and Sapolsky 1997; Posen 2014). Others believe that the expansion of the welfare state, especially with the implementation of national health insurance and the ageing of the population, will curtail military spending in the United States and the temptation to be the world’s sole superpower (King 2013). The economy too is a potential restraining factor, as the American global policing wars of the post-Cold War era have been financed through extensive borrowing that will someday need to be repaid. The United States may be the world’s leading economy, but it has debts of approximately $20 trillion.

If not the United States in the lead, then who? The alternatives are not robust. The United Nations takes responsibility for significant peacekeeping, particularly in Africa. But it is limited in resources and also by the Security Council’s veto system, whereby any of the five permanent members can reject an action. This can lead to gridlock and indecision in even the most pressing of cases. There are also persistent problems related to member participation, troop training, discipline, equipment and sustainment for UN peacekeepers. And although peacekeepers have at times been forced to do some serious fighting to separate or suppress warring factions, they cannot conduct sustained combat operations without the military weight of a major power. The United Nations is also dependent on financial contributions from member states to keep it afloat – it does not have an independent income – and the United States is its largest donor. Regional organisations such as the African Union and the European Union are also active in peacekeeping, both in conjunction with the United Nations and on their own. Supplementing their work are relief organisations such as the International Red Cross, Doctors Without Borders and the International Rescue Committee. All of this is vital, but it is not enough once American financial and military weight is removed.

Serious change can only come about if the United States actually does less international intervening and those states (or organisations) closer to trouble spots are forced to act when their security is at risk. Other large, rich nations will have to fill the vacuum if the United States pulls back from managing global security. Test cases are the interventions in Libya (2011) and Syria (2013–), where American reluctance to act has been particularly evident, even though both are marked by a degree of US engagement. The vast regions of North Africa and the Middle East are beset by security problems that outsiders can seemingly neither settle nor fully escape (Engelhardt 2010). Colonialism left behind non-viable boundaries. Although there are many natural resources, the most exportable is oil, which usually enriches rulers, not the masses. Sectarian divides and a rising tide of extremism afflict Islam, the dominant faith. The territory is governed weakly or exploitatively but rarely democratically. But the rich nations of the world are responsible for at least part of the chaos, as they are all consumers of oil, former colonialists and/or occasional interveners. They also receive some of the refugees and see all of the images of the suffering. The United States will likely find its interventionist urges in the Middle East and North Africa tamed by memories of past failed efforts, high casualty rates, wasted assistance and a lack of effective international and local partners (Bacevich 2016). Certain former colonial powers may feel a continuing obligation to help, but they too have memories of past failures. Some states in both Africa and the Middle East can defend themselves, but most cannot. The rise of a regional hegemon is possible, but the area is full of competitors, marked out by the long rivalry between Saudi Arabia and Iran – which are also the leading states, each representing one of Islam’s two major branches. What is left is continuing turmoil and perhaps disaster. And given that scenario, the question should be asked: who will assist if not the United States?

For other regions of the world a post-US framework of security is more readily available, or more easily constructed, than it is in the Middle East and North Africa. The European Union (or a NATO minus the United States) could easily manage security in Europe, or even deal with a resentful Russia, should it find the political will. The European Union has more people than the United States and is approximately as rich. It should have no need for, or any claim on, American troops for the security of Europe. There are more serious challenges relating to security arrangements for South America, Africa, the Middle East and Asia. For South America the problem some might see is keeping the United States out. But US interest in South America after the Second World War was largely prompted by fear of the spread of communism and the influence of the Soviet Union, both of which are fading from memory. The South American nations themselves have several boundary problems but little inclination to settle them through the use of force, at least in recent years. Most South American nations focus their attention on economic growth, which is sporadic but not non-existent. Fortunately for all concerned, self-restraint has tempered the competition for regional dominance and arms racing. In Asia the prime security issue is how to accommodate the rise of a richer, more assertive China. But many other nations in Asia also have large populations and growing economies. Most advantageous for regional security would be the development of regional institutions that can temper territorial disputes without interrupting the pathway to continued prosperity. Some nations seem to want to keep the United States engaged in Asia to balance an ever more powerful China. No doubt the United States needs to think of ways to adjust to China’s rise, but getting involved in regional disputes is not likely to be one of them.

Conclusion

It is important to understand that the United States cannot be taken for granted. This is equally true whether it continues – or tries to continue – the role that it established for itself in the twentieth century or becomes a ‘normal’ power, much as the United Kingdom did following the Second World War. The rivalry of superpowers that we saw in the past was one kind of world order. The hubris of one rich and powerful nation, the United States, is another. Should the United States change its priorities, the large, rich nations of the world may collectively find the need and the will to create yet another form of order – one in which they share the decision-making and the costs of taking necessary actions. If this does not occur, it is likely that dominant regional powers will provide local security – as meagre or brutal as that may be. The North Africa and Middle East regions lack a plausible candidate for this role and will likely remain in turmoil until one emerges. There could also be a struggle among potential contenders, in those and in some other regions, that escalates into more serious conflict. Thus, a large part of the world may continue to be torn by instability, with few voluntary interveners for the foreseeable future. The question many will ask is whether more stable regions such as Europe and North America can isolate themselves from this instability. Or do peace and security at home require – as those in America who favour intervention abroad claim – constant foreign military involvement? Considering issues such as the migration crisis in Europe, which has at its roots instability outside Europe, brings real focus to these questions. Another worry is competition among regional powers. Once a nation gains dominance locally, will it face an irresistible temptation to expand, as the United States did after the Second World War? Again, this question brings us back to the issue of China’s rise. With all of this in mind, some may come to remember ‘Pax Americana’, for all its faults, as an era of peace and stability.

Chapter 16: Feeding the World

How should we think about global food politics? It is tempting to start with big moments on the world stage such as the United Nations discussing famine in Ethiopia or Syria. But this approach can be alienating. It locates global politics far away from daily life and sees food as just another issue that international leaders address on our behalf. So rather than this top-down approach, this chapter offers a bottom-up approach, beginning with everyday people like you and me. Through this perspective we can better appreciate the meaning of ‘big’ statistics like the estimate of the United Nations that 795 million people in the world are undernourished. What kind of lives do these individuals lead, and what is it like to go without food? We can also see that it is not just problems of hunger that food politics concerns itself with, but those relating to food safety, nutrition and livelihoods as well. Being attentive to everyday voices shows that these issues affect people in developed countries just as much as those in developing countries. Who in the world gets fed, with what, and by whom are fundamental questions that concern us all.

The bottom-up approach

When I started writing this chapter I was sitting in my local café, a Cuban-themed place with Latino music on the stereo and pictures of Communist revolutionary Che Guevara on the wall. In the newspaper was a story about the multinational drinks company SABMiller avoiding taxes in Africa. Visiting the supermarket later on with my family, we picked up sausages from Ireland, tinned tomatoes from Italy and peppers from Morocco. For dinner we cooked up a casserole, a dish with French roots, and sat in front of the television to eat. A celebrity chef was presenting a programme about diets in Japan and how the British could learn a lot from their healthy lifestyles. We wondered whether we might try sushi for our next family meal.

These encounters with national cultures, current affairs and global supply chains can be thought of as the social foundations of international relations. They are foundational in two senses. First, they create the cross-border flows of ideas, people and goods that make up international relations – that is, how people in different nations see and relate to one another. For example, debates about how to govern the international trade of food wouldn’t exist if people didn’t buy foreign products to begin with or care about the effects of doing so. Second, it is through these interactions that individuals come to know their political community and form opinions about what is best for it, helping to construct ‘the national interest’. This happens through multiple subject positions. In the story above, for instance, I was sometimes thinking from the perspective of a consumer, but at other times as a worker, a citizen, a cook or a family member. This is important because different subject positions create different political priorities. Thinking as a consumer, I would prefer supermarkets to stock a wide variety of foods and keep prices as low as possible. But thinking as a citizen, I would prefer them to supply more food from local farmers and make sure everyone involved earned a decent living. The bottom-up approach thus provides an alternative way of thinking about global food politics by analysing its social foundations. It recognises that important political decisions do not happen ‘above’ society, separately from it, but rest on the beliefs, opinions and actions of those who would be governed.

Sudden food shortages and the disenfranchised citizen

In 2007/8, and again in 2011, the world market prices of cereals, meat and dairy products, vegetable oils and sugar all began to increase rapidly. This was blamed on a variety of causes: poor harvests in agricultural producing countries like Australia and Russia, policies in the United States and Brazil that encouraged food crops to be replaced by biofuels, rising gas prices that pushed up the cost of fertilisers, and financial speculation that led to volatile prices. Commentators spoke of a ‘global food crisis’ as the effects were felt in every country, albeit to differing degrees. In the United Kingdom (UK) the average cost of a loaf of bread doubled from £0.63 in January 2005 to £1.26 just four years later – an increase far ahead of inflation and an unwanted burden for those on lower incomes. In states with greater dependency on food imports and higher levels of poverty, though, the impact was felt even more deeply. These states were mainly in the Middle East and Africa, and in city after city riots broke out as people found it difficult to access basic staples at prices they could afford.

One of these cities was Algiers, the capital of Algeria. As elsewhere, people took to the streets not simply because food was hard to get hold of but also because of the injustice they perceived in the way their country had been run. Demands for affordable food ran alongside calls for jobs, political freedoms and an end to government corruption. Banners carried slogans such as ‘Give us back our Algeria’ and ‘No to the police state’. At first the Algerian government responded to these events with repression. The police fired tear gas and water cannon at youths who had angrily taken to the streets and set up roadblocks. Football matches were suspended for fear that the crowds might turn political and become a threat to public order. However, aware of the Arab Spring revolutions and fearful that the uprisings seen in Egypt and Tunisia would be repeated in Algeria, the government soon relented. Import taxes on sugar and cooking oil were slashed and prices capped for flour and vegetables. The government also lifted the 19-year-old state of emergency that had prohibited peaceful protest in the country. The forcible removal of long-standing president Abdelaziz Bouteflika was thus averted, although widespread disapproval of his autocratic regime continued to simmer.

What effect did these food riots have on international relations? First of all they created the sense that there was a ‘global food crisis’ to resolve. It is important to note here that if a food crisis were simply defined as the existence of widespread hunger, then the situation would have been nothing new. Throughout the 1990s and 2000s there were consistently between 800 million and 1 billion people in the world who were chronically undernourished. Living largely in rural areas in Asia and Africa, these people suffered away from the spotlight. However, because they spoke from the position of the disenfranchised citizen, the rioters in volatile urban areas directly challenged the legitimacy of political leaders and forced a response (Bush 2010). This kind of hunger could not be ignored.

Attempting to manage the food crisis, world leaders gathered at the United Nations’ High-Level Conference on World Food Security. They produced a declaration pledging to provide more emergency aid, prevent international agricultural trade from being disrupted, and increase global agricultural production. Critics saw this as a conservative response that did not address the root causes of the crisis. Instead of ensuring people had decent incomes and accountable leaders, reflecting the demands of the protestors, the focus was simply on bringing down world market prices. This also reproduced the misleading idea that hunger is best dealt with by growing more food rather than by changing existing power relations. Oxfam, a confederation of charitable organisations, made this point when it said that there was already enough food to feed everyone. For Oxfam the problem unveiled by the riots was not so much lack of supply as unequal distribution (Oxfam 2009). During 2008, the height of the food crisis, a global average of 2,826 calories was produced per person per day, according to official United Nations data. The recommended intake for an adult is between 2,000 and 2,500 calories, so production exceeded even the upper end of that range. If the data is taken at face value, then, there was no actual shortage of food. Rather, political decisions had created a situation where some people could acquire food more easily than others.
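
The arithmetic behind Oxfam’s point can be made explicit with a two-line check, sketched here in Python. It treats the UN production estimate and the recommended intake range at face value, exactly as the text does; no other figures are assumed.

produced = 2826                                 # kcal available per person per day, 2008 (UN figure quoted above)
recommended_low, recommended_high = 2000, 2500  # adult daily intake range quoted above

# Even against the top of the recommended range there is a surplus:
print(produced - recommended_high)  # -> 326 kcal per person per day

A surplus of over 300 calories per person per day, against the most generous estimate of need, is what underpins the claim that the crisis was one of distribution rather than supply.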

Chronic hunger and the civic participant

A different approach to governing hunger can be seen in Brazil. Although the country was for a long time a net exporter of agricultural products, it also had huge numbers of undernourished people living within its borders. This reaffirms the point that, in and of themselves, food surpluses do not prevent hunger – even at state level. So, when the left-wing Workers’ Party was elected to power in 2003, their leader Luiz Inácio Lula da Silva made the Zero Hunger programme a cornerstone of his government’s social policy. He declared in his inauguration speech: ‘We are going to create appropriate conditions for all people in our country to have three decent meals a day, every day, without having to depend on donations from anybody’ (cited in da Silva et al. 2011, 9).

This commitment came out of the country’s re-democratisation process in the 1990s, when civil society began to exert a greater influence on national politics after two decades of oppressive military dictatorship. The Council on Food and Nutritional Security, which was supported by Lula, was a particularly important institution in this respect. Composed of 54 representatives, two-thirds from civil society and one-third from federal government, the Council drove forward a number of policies, including increased funding for school meals and support for family farmers. It also promoted the National Law on Food and Nutrition Security, which obliged the federal government to uphold people’s right to food and to create food councils at more localised levels. Along with cash transfers given to poor mothers and an increase in the minimum wage, these reforms lifted millions of people out of chronic undernourishment. The Zero Hunger programme could claim real success. In contrast to Algeria, diverse groups in Brazilian society – including teachers, farmers, clergy and health professionals – were able to play a more proactive role in national food politics. Indeed, their collective contribution also reshaped international policy. When the minister for food security in the Lula government, José Graziano da Silva, was elected head of the UN’s Food and Agriculture Organization in 2011, he began to promote many of the same policies that had been developed in Brazil, advocating a twin-track strategy: investment in rural areas to boost the incomes of farming families, and basic welfare payments to protect the most vulnerable in society.

Backed by other United Nations agencies and the UN Secretary-General, Ban Ki-moon, over the next three years Zero Hunger Challenge programmes were launched in a number of countries including St. Lucia, Laos and Zambia. This approach also informed the 2015 UN Sustainable Development Goals, which set out a roadmap for the end of world hunger by 2030. That said, it is a lot easier to make policies and plans than to achieve them. Key in the Brazilian case was the mobilisation of national civil society, which brought forward people willing to play a role in political affairs. In countries where this is not encouraged, it is hard to see plans for the reduction of poverty and hunger taking effect. Moreover, Brazil itself is far from perfect, with mass protests and political upheavals in 2016 reflecting the nation’s slide into ever-deepening recession. Chronic hunger may have diminished but temporary hunger and poor diet remain, especially in the impoverished areas of Northeast Brazil and among indigenous communities. Ensuring their right to food is an ongoing struggle, and one that will have to overcome the significant domestic political and economic challenges that Brazil faces.

Adulterated milk and the protective parent

In September 2008 news broke that the industrial chemical melamine had been found in powdered milk infant formula in China. Within two weeks, more than 50,000 babies had fallen ill and developed kidney stones. The mass poisoning became a national scandal and within the space of a year the Chinese government had overhauled its food safety laws and inspection systems. Provincial courts also sentenced 21 people involved, ultimately executing two of the traders caught selling adulterated milk. On the face of it this was a sudden crisis that had been swiftly dealt with. In actual fact, the melamine milk scandal was long in the making and slow in the breaking.

Milk consumption had been encouraged in China from the late 1990s by the government and by dairy companies as a way for people to become healthy and ‘modern’. Competition to supply this growing market thus intensified. Milk was watered down and melamine was added so as to make the protein content appear normal, but the practice was knowingly covered up – a fact disclosed by company executives in the subsequent trials. Neither the dairy industry nor government officials wanted the public to panic as this would be disastrous for sales and the country’s reputation, especially while hosting the 2008 Olympic Games. It was largely thanks to the parents of affected children that the problem was finally acknowledged. Some took to the internet to raise awareness and vent their anger while others held impromptu press conferences to give their side of the story and gain assurances about their children’s long-term health. In both instances there were cases of parents being detained or jailed by the police for inciting social disorder.

The scandal had profound international consequences. Government authorities in Asia and Europe began to pull Chinese dairy and baby food products from the shops, while the United States had so little faith in the Chinese food safety system that it installed its own officials in the country to check US-bound exports. Doubts about the safety of Chinese milk also spilled over into diplomatic tensions with Hong Kong and Taiwan. In Hong Kong, there was a public backlash against travellers and smugglers who began buying up infant formula to take back to China, leaving little for local consumption. In Taiwan, demonstrators used the milk scandal to publicly contest the wisdom of plans by the Taiwanese ruling party to forge closer ties with Beijing. Finally, the World Health Organization tried to agree on an international standard for safe infant formula at the same time as its director-general reinforced the message that breast milk is best for babies, implicitly criticising the Chinese government for promoting the use of powdered milk in the first place.

From the protective parent’s point of view, then, the way the infant population was fed had become politicised and open to criticism. However, Chinese parents should not be treated as a homogeneous group. For example, one response by richer parents worried about using unsafe infant formula was to hire other new mothers to breastfeed their children. Most of these ‘wet-nurses’ were migrants from the countryside, so poor that they chose to sell their breast milk for money and feed their own babies potentially harmful formula instead. Another class dimension of the scandal was the fact that many of the Chinese businesses involved were part-owned by multinational companies. Sanlu, the Chinese company at the centre of it all, was in fact only able to expand its operations thanks to a large investment by Fonterra, a dairy cooperative based in New Zealand. A political question for global capitalism is thus to what extent such transnational companies should help protect consumers in other countries as well as profit from them.

Childhood obesity and the bad mother

Concerns over food safety can also be extended to include foods high in salt, sugar and fat. These do not cause immediate harm in the same way that melamine-tainted milk does, but their cumulative effects can still be dangerous. The World Health Organization has warned that unhealthy diets are a leading global risk to health because of their link to illnesses like heart disease and stroke. In fact, these are the two biggest killers in the world, each causing more deaths every year than HIV/AIDS, lung cancer and road accidents combined. This aspect of food malnutrition – ‘mal’ meaning bad rather than insufficient – should be just as worrying as the existence of food shortages. In the United Kingdom, the public debate about malnutrition has paid particular attention to children’s diets. Some of the debate has focused on problems experienced during childhood itself. For example, in 2014 it was reported that the consumption of sugary foods and drinks had contributed to 25,000 children aged five to nine being admitted to hospital to get rotten teeth pulled out. But mostly it has focused on childhood obesity and the risk this poses for children later in life. Under pressure from campaigners, including doctors and other health professionals, successive British governments have introduced policies to promote dietary change. Restrictions have been placed on junk food adverts, minimum nutritional standards have been applied to school meals, families have been targeted with healthy lifestyle campaigns, and food manufacturers have been asked to lower the salt, sugar and fat content of their products. To cap this off, a ‘sugar tax’ on high-sugar soft drinks was announced in 2016.

Despite first impressions, these internal debates have actually had an international dimension. In this respect it is important to remember that the United Kingdom is a nation-state made up of four countries (England, Scotland, Wales and Northern Ireland), with the latter three each having some devolved political powers of their own. As such, policy debates about diet have often become proxy wars over the further devolution of power away from the central state. This happened in 2014 when the first minister of Scotland declared that the Scottish policy to offer more free school meals to pupils showed that Scotland would be better off as an independent country. International data has also been used to defend or discredit domestic policy proposals. The successful campaign to tax sugary drinks, spearheaded by the celebrity chef Jamie Oliver, constantly referred to a similar policy introduced in Mexico to show that what worked there could work in the United Kingdom.

International comparison has also been used in depictions of national identity. British newspapers have run countless stories saying that Britain has become a nation beset by increasing obesity. For some people, especially those with right-wing political views, this has been taken as evidence that the British are becoming lazy and that standards of parenting have worsened. Since childhood obesity is positively correlated with poverty, meaning that children from poorer backgrounds are more likely to be overweight, this interpretation also produced a divisive image of the nation. Put simply, it implied that poor parents were to blame for the country’s moral failings. Moreover, since it is women who tend to be the primary caregivers, the figure of the bad parent inevitably assumed a female face.

Low wages and the deserving worker

So far in this chapter we have focused on food consumption – on what people eat. But how that food is produced and exchanged is important in its own right. Indeed, if we include all the jobs involved in providing food – from farming and fishing through processing and distribution, right up to retailing and cooking – then it is arguably the most important income-generating sector in the world. In the United States (US), there has been a long history of struggles over food work. John Steinbeck captured a slice of it in his 1939 novel The Grapes of Wrath, which follows a family of tenant farmers evicted from their home in Oklahoma who end up working on a peach plantation in California for a pittance. The novel, fiction rooted in real events, echoes in the lives of farmworkers in the United States today. Jobs like picking fruit and weeding vegetables are still tough and still done by migrants – only now they typically come from Latin America. In 2012, their average pay was less than $19,000 a year. The US government’s own statistics would place this income thousands of dollars below the minimum threshold for meeting the basic needs of a family of four. In other words, even though they were living in the world’s richest nation, they were living in relative poverty.

There are some differences between Steinbeck’s story and contemporary events, though. In The Grapes of Wrath, a preacher called Casy tries to organise his fellow workers into a trade union and is murdered for his troubles. For the Coalition of Immokalee Workers, a group of immigrant tomato pickers based in Immokalee, Florida, their initial meetings in a local church grew into something much bigger. They first used tactics like work stoppages and hunger strikes to demand higher wages from their employers, but as their public profile grew they sought to reorganise the food supply chain itself. In 2011 the Coalition launched the Fair Food Program. Major restaurant and supermarket chains were encouraged to pay a few cents more for a pound of tomatoes and to buy these tomatoes from suppliers who pledged to follow labour law and put the extra money in their workers’ wage packets. The Coalition scored its greatest success when the world’s largest retailer, Walmart, agreed to join the Fair Food Program and to extend it beyond tomatoes.

But while Walmart made commitments to these workers, with its own workers it has been less forthcoming. In 2012 its regular employees like cashiers, cleaners and warehouse assistants were paid on average just $8.81 an hour (Buchheit 2013). This meant that they, too, were paid a poverty wage and thus qualified for additional social security benefits like food vouchers, many of which were then spent by workers back in Walmart stores! This costs the government billions per year and is surely the grand paradox of the American economy. For all its wealth and Wall Street millionaires, the national minimum wage is so low that many people in full-time work still cannot make ends meet. Nor is it just Walmart where this happens. Supermarket cashiers, farm labourers, fast food servers, cooks, dishwashers, bartenders and waiting staff are all among America’s lowest paid workers. The price of cheap food in the country has been gross inequality.

In both the Walmart case and that of the Coalition of Immokalee Workers, the position of the deserving worker has been crucial in contesting this inequality. We can see this first in the way that immigration policy has been conducted. For years US farm companies lobbied the government to allow them access to cheap foreign labour, which the government achieved by issuing temporary immigration visas and turning a blind eye to the use of additional undocumented workers. This created tensions with the general public, some of whom were worried about wages being undercut and others about the decline of ‘American values’. A 2013 proposal by Republican and Democratic Party senators to offer permanent citizenship to undocumented farm workers thus had to cast them in a particular light. They were not called ‘illegal immigrants’, as was more usual in political discourse, but portrayed as ‘individuals who … have been performing very important and difficult work to maintain America’s food supply’ (Plumer 2013). What the politicians were implying was that these were honest and hardworking people who could and should be made into Americans.

A second example is the way that trade unions have tried to organise Walmart employees across national borders. The company’s takeover of food retailers in other countries has given it a truly global workforce. Walmart now employs over two million people worldwide; only the United States and Chinese militaries employ more. Concerned that the labour standards in its American operations might be adopted in these supermarkets and their supply chains too, groups like the UNI Global Union have thus tried to link people together through the shared subjectivity of the deserving worker and create a sense of international solidarity between them. As a UNI coordinator put it: ‘When I can connect a Chinese worker with a Mexican worker then it doesn’t become about a Chinese worker taking their job. Workers can see, “Oh they [Walmart] are screwing us both. We have to unite to win”’ (Jackson 2014).

Land dispossession and the traditional peasant

The examples from the United States were about waged work, but most of the jobs in the food sector are unwaged. People who farm, fish, herd, hunt or forage for food are effectively self-employed: they sell some of what they get for money and keep the rest to eat. As far as farming goes, there are an estimated 570 million agricultural plots in the world, the vast majority of which are small-scale family farms (Lowder et al. 2014). Whether these rural livelihoods will disappear as farming becomes mechanised and people migrate to cities is much debated (see Weis 2007 and Collier 2008). Either way, it is evident that the transition from small-scale peasant agriculture to large-scale industrial agriculture can be extremely violent. This can be seen in a case from Cambodia.

In 2006, large areas of land were granted by the Cambodian government to private holders to transform into sugar plantations so that this ‘cash crop’ could be exported to the European Union. However, the plan ignored the fact that many people already lived on the land and didn’t want to be evicted. The protestations of the existing tenants fell on deaf ears, in part because they did not have legal title to the land: a previous regime, the Khmer Rouge, had banned private property and burned land records. Things got worse still. The financial compensation and alternative land that the current government was meant to provide were either inadequate or not forthcoming. When people resisted, force was used to remove them. Buildings were burned, land was bulldozed and animals were shot. Over 1,700 families lost their land (see Herre and Feodoroff 2014). Responding to these events, community groups and human rights organisations formed the Clean Sugar Campaign. Given that the Cambodian government was itself involved in the land sale, the campaign’s search for justice took on an international dimension. First the campaigners tried to pressure the investing companies by filing complaints with the National Human Rights Commission of Thailand. Then they turned their attention to the rules and relationships incentivising sugar exports. They pressured the European Union to suspend the free trade access it gave to Cambodia, began legal proceedings against Tate & Lyle in the UK for importing illegally produced sugar, and publicly shamed the project’s financial backers, Deutsche Bank and ANZ Bank, into withdrawing their money. This can be described as a form of ‘boomerang activism’ (Keck and Sikkink 1998) – working through institutions in other countries meant that the campaign first left Cambodia but then came back.

In the course of their activism, campaigners did not just point out the breaches of law involved in the ‘land grabs’ but also made a political argument about why this way of producing food should be opposed. This turned on the fact that it was not just people’s livelihoods that were being threatened but also their identity. The land that was lost was used not only to grow rice and collect water but also to worship at ancestral graves. It was their home as well as their workplace. This is a common experience of people displaced by commercial agriculture – they are not just victims of dispossession but see their very way of life destroyed. The position of the traditional peasant adopted in the campaign thus gave it a broader resonance in global civil society. For example, the charity Oxfam has used the plight of the Cambodian peasants as an example of the dangers facing rural dwellers the world over, and has lobbied companies like Coca-Cola to make sure they source ingredients like sugar responsibly. However, there is still a long way to go towards full restoration of, or compensation for, the lost land, and unfortunately much of the damage has already been done.

Conclusion

The cases presented in this chapter show that political authority over food is globally dispersed. People in each case were affected by decisions taken in states, in international organisations and in corporations. This constellation of institutions, sometimes referred to as global governance, reminds us that power does not lie in any single site, even though in certain situations some actors take on greater significance than others. Thanks to our bottom-up approach, we also saw how individuals outside these central institutions can inform and challenge the way that governance is organised. The chapter demonstrated how professional networks, charities, trade unions, political groups and even celebrity chefs all claimed their own kind of authority on the basis of expertise, morality, membership or personality. This allowed them to speak for large numbers of ordinary people – the kind of people often excluded from top-down accounts of global politics. The chapter also showed how looking at different subject positions can help explain how collective action happens. Some positions were based on political identity (the disenfranchised citizen, the civic participant), some on familial identity (the protective parent, the bad mother), and some on economic identity (the deserving worker, the traditional peasant). What is important about each of these is the way they spoke to people in a particular way, giving them a shared lens on the world and a common language to articulate it. These positions shape international relations alongside class relations, race relations and gender relations. They show how global food politics is built from the bottom up, based on contested ideas about who we are and what is in our best interests.

Chapter 15: The Environment

Today, our planet carries over seven billion people, yet its capacity to provide for each one of these individuals is threatened by population growth, climate change, deforestation, the collapse of fisheries, desertification, air pollution and the scarcity of fresh water. The full extent of our shared global environmental problems goes far beyond the well-publicised challenge of global climate change (or global warming). In fact, one of the elements often forgotten is the complicated relationship between human beings and their environment. In the early years of the conversation around environmental protection, some argued that the planet’s resources were there for our collective consumption. However, there are limits to growth, and this raises a range of important issues for international relations. Our population quadrupled between 1900 and 2000. This growth, coupled with abrupt climate change events and further compounded by rapid industrialisation and fast urban expansion, has created a perfect storm of negative environmental processes that put pressure on the capacity of Planet Earth to sustain life. As students of IR, we ought to recognise that the environment is one of the areas where much work remains to be done, particularly because cooperative approaches to environmental protection have had a very mixed record despite the grave implications of failure.

The relationship between international relations and environmental problems

It is often hard to assess whether international cooperation efforts have had any real effect on society’s wellbeing, the quality of our environment, or even the construction of long-term relationships between states. One form of evaluation takes place through the study of environmentally focused ‘megaconferences’. These large-scale events bring together representatives of national governments, intergovernmental secretariats, non-governmental organisations, academics and industry actors to engage in conversations about the state of the environment, usually focused on a particular issue. What makes these megaconferences interesting is their goal: to engage in productive collaborative efforts to reach agreement and consensus on specific strategies to protect the environment and solve global challenges.

Historically, the two environmental issues that have gained the most attention have been climate change and biodiversity. Both came up at the Earth Summit in Rio de Janeiro in 1992 – formally called the United Nations Conference on Environment and Development. Nevertheless, most scholars recall the 1972 United Nations Stockholm Conference on the Human Environment as the first large-scale environmentally focused megaconference. The Stockholm Conference was the starting point for the first global coordination mechanism for environmental protection, the United Nations Environment Programme (UNEP). It was also the first conference where participants explicitly linked human health with environmental and ecosystem health.

The second milestone in global environmental governance was the publication of the Brundtland Report in 1987. This report outlined the need for a new model of development, bringing into play the notion that we cannot simply use (and misuse) the resources we have at our disposal. The new model, coined sustainable development, became an enduring part of the global conversation about environmental protection. The Brundtland Report defined sustainable development as having three main components – economic, environmental and social – an idea that was then put forward for implementation at the Earth Summit.

The third milestone was the 1992 Earth Summit itself. A major outcome of this meeting was the recognition of two of the most important environmental issues – the loss of biodiversity and rapid climatic change – and the need for intergovernmental secretariats and agreements to respond to these twin challenges. The bulk of the world’s states, 161, signed a declaration on the need for a model of global development that met the needs of current generations without compromising the ability of future generations to meet their own. The fact that so many states reached an agreement on the concept of sustainable development, and on the need to operationalise it, became the key contribution of the Earth Summit. Activist involvement became the norm in international conferences on environmental issues starting with the Rio Summit: non-governmental organisations were considered part of the negotiations from the very beginning, and over 2,000 non-governmental representatives attended.

The fourth milestone was the 2002 Johannesburg World Summit on Sustainable Development. Its goal was to establish collaborative intergovernmental, cross-disciplinary and cross-sectoral partnerships. In theory, this would strengthen the way in which environmental activists interact and partner with national governments. Different types of partnership were elucidated, and non-state actors were involved from the design stage through to implementation. However, following the summit there was a widespread perception that very little progress had been made on implementation, leading to a feeling of megaconference fatigue. To remedy this, the 2012 UN Conference on Sustainable Development (also known as Rio+20) created mechanisms for following up on commitments to sustainable development. It also highlighted the relevance of specific targets for development and the need for a transition towards broader-reaching sustainable development goals. Moreover, the conference’s outcome document defined specific regional initiatives for the implementation of sustainable development.

The 2015 Paris Agreement represented a consensus among a large number of countries that something needed to be done to keep global warming below two degrees Celsius. The fact that an agreement was reached at all was groundbreaking for the global climate negotiations community, as prior negotiations had been marked by disagreement and a lack of consensus on a strategy to compel nations to meet internationally agreed targets for their carbon emissions. This matters because carbon dioxide, released primarily by burning fossil fuels such as oil, natural gas and coal for energy, is the main cause of global warming. Paris showed that many countries were able to agree on specific goals, targets and policies needed to combat rapid and impactful global environmental change. The process it established is yet to be fully realised, but the expectation is that states will comply in the years to come.

Climate change is not the only ecological issue facing our planet, but its role in catalysing global action to protect the environment cannot be overstated. One of the most neglected issues is water. While two-thirds of the earth's surface is covered by water, only a small proportion is fresh (drinkable and usable for agriculture), and access to it is often contested by neighbouring states and in short supply for growing populations. When the effects of climate change are added, access to water becomes an issue of real concern. While many other challenges remain in the areas of climate and environment, it is likely that a framework for global water governance will be a major item on the agenda in the near future.

Common pool resource theory

With a brief history of megaconferences now complete, we can move on to the substance of the debates on climate and the environment. The notion of public goods comes from the economic definition of a good that is non-excludable and non-rivalrous: something that anyone can access at any time without making it any less available for anyone else to consume. A good example of a public good is knowledge – say, information freely available on the internet. Once put online for public consumption, knowledge is non-excludable: you cannot stop anyone from consuming it and learning, unless they lack access to the means of transmission, as may be the case in countries where specific websites are banned. It is also non-rivalrous: one person reading a web page does not prevent anyone else from reading it. Air is another example of a public good. Under normal circumstances nobody can stop you from breathing air into your lungs, and the fact that you breathe does not stop someone else from doing the same. This is the definition of a perfect public good: one that is always non-rivalrous in consumption and non-excludable in access.

Common pool resource theory derives from Garrett Hardin (1968), who argued that, left to our own devices, we would exhaust all the resources available for our consumption. Imagine you are a shrimp fisher. You need to fish and sell your catch to sustain your family. Say there are 10,000 shrimp in the small catchment you fish in, but 99 other fishers work the same waters. If everyone cooperated and consumed only 1/100th of the total available shrimp, each would have 100 shrimp to sell. But each fisher has an individual incentive to catch more than that share, and every shrimp taken above it comes at the expense of the others – and, ultimately, of the stock itself. Hardin used a similar metaphor to make the point that if resource consumers behave selfishly, they will exhaust the resources they were supposed to preserve. He called this the tragedy of the commons. Closed bodies of water, plots of land and large-scale areas of forest are all common pool resources: they are rivalrous in consumption, but non-excludable.
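
To make the arithmetic of the shrimp example concrete, here is a minimal sketch in Python. The stock and fair-share figures come from the example above; the 'greedy' catch sizes are illustrative assumptions, not data from the chapter.

```python
# A sketch of the shrimp example: 10,000 shrimp and 100 fishers, so the
# fair share is 100 shrimp each. Every shrimp taken above that share
# comes out of the catch available to the remaining fishers.

TOTAL_SHRIMP = 10_000
FISHERS = 100

def catch_left_per_fisher(greedy_fishers: int, greedy_catch: int) -> int:
    """Catch left to each remaining fisher once some over-fish."""
    taken = greedy_fishers * greedy_catch
    remaining = max(TOTAL_SHRIMP - taken, 0)
    others = FISHERS - greedy_fishers
    return remaining // others if others else 0

print(catch_left_per_fisher(0, 0))     # nobody over-fishes: 100 shrimp each
print(catch_left_per_fisher(10, 200))  # ten fishers take double: others get 88
print(catch_left_per_fisher(50, 200))  # half take double: nothing left for the rest
```

Even modest over-fishing by a few leaves the rest worse off, which is precisely the selfish dynamic that Hardin's tragedy of the commons describes.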

One can summarise the theory of common pool resources by placing goods in four categories: private goods, common goods, club goods and public goods. This categorisation framework has two dimensions. The first is excludability: if you can prevent someone from accessing a good, that good is excludable. The second is rivalry in consumption: goods that are depleted by use are rivalrous in consumption. If I eat an apple, you cannot consume that same apple, because it is gone. Private goods, such as food, clothing and other material objects, can be purchased and acquired because they are tradable. As a result, these goods are both rivalrous in consumption (if I buy a car, nobody else can buy that exact car) and excludable (you cannot buy a car unless you have the money to purchase it).

Goods that are non-rivalrous in consumption and non-excludable are called public goods. These are things that everybody can enjoy: consuming them does not reduce anyone else's opportunity to consume them. Air is a public good. Everybody can breathe without worrying that at some point they will be unable to simply because somebody else is also breathing. Goods that are excludable but non-rivalrous – subscription television, for example – are club goods. Finally, common goods, also called common pool resources, are goods that are non-excludable but rivalrous in consumption: fish in a fishery, trees in a forest, water in an aquifer or a lake. All these natural resources are common goods and, therefore, common pool resources. What makes common pool resources so interesting is that the theory developed by Elinor Ostrom (1990) argues that, although humans are supposed to be selfish, when faced with conditions of scarcity we are able to self-organise and govern our common pool resources (our 'commons') in a sustainable manner. One of the reasons Ostrom's work had such an impact is that her theory of cooperative approaches to resource governance contradicted Hardin's tragedy of the commons model. Instead of being so selfish that they would want to fish all the shrimp (for example), Ostrom found that fishers would build a shared agreement to reduce their own consumption for the wellbeing of the collective. Obviously, this is an example on a relatively small scale. What remains to be seen is whether we can achieve global cooperation to protect our global commons. One way to think about this is through the lens of global public goods, as discussed below.
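
The full two-by-two classification can be sketched in a few lines of Python. This is a minimal illustration of the grid described above: the classify function is an invented helper, and the example goods are drawn from this section (with subscription television standing in for club goods).

```python
# The 2x2 classification of goods by excludability and rivalry in
# consumption, with illustrative examples from this section.

def classify(excludable: bool, rivalrous: bool) -> str:
    if excludable and rivalrous:
        return "private good"          # e.g. an apple or a car
    if excludable:
        return "club good"             # e.g. subscription television
    if rivalrous:
        return "common pool resource"  # e.g. fish in a fishery
    return "public good"               # e.g. air, knowledge online

examples = {
    "apple": (True, True),
    "subscription television": (True, False),
    "fish in a fishery": (False, True),
    "air": (False, False),
}
for good, (excludable, rivalrous) in examples.items():
    print(f"{good}: {classify(excludable, rivalrous)}")
```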

The global environment as a global commons

Perhaps you would agree that a shared environment is a resource that communities and individuals should work collaboratively to protect. But there is another view: that responsibility for the care of the environment rests with governments. One way of thinking about this is to use the concept of the global environment as a global commons. After all, global environmental problems are by their very nature global. However, international cooperation is hard to achieve. As the example of the US shows, there are powerful countries that will avoid cooperation for various reasons. For many years the US refused to ratify the international agreement on climate change, the Kyoto Protocol (the forerunner to the 2015 Paris Agreement), thus blocking many international efforts to reduce global carbon emissions. Other examples could be cited, but suffice it to say that a powerful country's refusal to collaborate on a global issue is concerning. It is hard to make countries commit to specific conservation goals (in forest policy), emission-reduction targets (in climate policy) or pollution standards for rivers (in water policy) because each nation has its own national development objectives, which may conflict with other countries' goals, making it hard to find common ground for collaboration.

Given that cross-national collaboration is so difficult, international environmental agreements exist to build a framework that helps countries talk to each other and agree specific targets for environmental protection. Some of the best-known international environmental agreements concern climate change (like the Kyoto Protocol), but other, less well-known examples – such as the Aarhus Convention on Access to Information, Public Participation in Decision-Making and Access to Justice in Environmental Matters – are equally relevant. One of the biggest problems for individuals acting on environmental issues is lack of information. Countries that are signatories to the Aarhus Convention agree to share data that enables their citizens to understand the potential risks they face from chemical processing and emissions. This information also helps environmental activists hold industries to account and push them to reduce their polluting emissions.

Global rights and domestic environmental politics and policy

The right to a healthy environment and the global commons are ideas suggesting that it is our shared duty to take care of our collective environment, because everyone has a right to enjoy their environment and to use some of its resources for their survival. It is possible to link human rights with global environmental regulation through the implementation of the international norm of a right to a healthy environment. This is a new avenue of research for scholars of international relations, founded on the idea, or norm, that every individual on the planet has a right to a healthy environment. Although states have different abilities and varying degrees of technical expertise to implement the norm, the number of countries with constitutional environmental rights has expanded radically (Gellers 2015). Eighty states now have such provisions in their constitutions, but we are still a long way from seeing this norm recognised as a fundamental human right.

There are also, of course, many other concerns that divert government focus from environmental issues. Increasing regulation of heavy-polluting industries, such as steel and coal, can have a negative effect on jobs. Setting 'green' taxes, either directly or through such things as energy tariffs, can also place a burden on taxpayers and businesses. Thus, there has sometimes been a tendency to see environmental legislation as damaging to economic growth and prosperity. By extension it can be unpopular in domestic settings, making legislation difficult to pass – or, in some cases, even to propose. It is consequently encouraging to see so much domestic legislation gaining traction. The growing number of countries in which the human right to a healthy environment is constitutionally enacted may help build collaborative transnational networks to protect the global commons. The starting point would be a shared understanding of the need to reduce human impact on national and global ecosystems. Sharing a paradigm built around the human right to a healthy environment may also induce national governments to actively seek participation in international environmental agreements. Nevertheless, these agreements need to be coordinated, and this challenge raises the question of whether we need a global environmental organisation to make sure states comply.

The best outcome for Planet Earth's citizens is a set of solutions made not just in each state but internationally – and, most importantly, complied with. IR is often concerned with the phenomenon of states cheating on, or withdrawing from, agreements. Perhaps nowhere is compliance more important for our long-term prosperity and security than in the areas of climate and environment.

Do we need a global environmental organisation?

Who is in charge of protecting our global environment? To answer this question, you may recall from previous sections that there is now a consensus regarding one specific tool that may help achieve the lofty goal of providing global public goods: international environmental agreements. These agreements, often produced at megaconferences, help protect our global commons by requiring nations to acknowledge and respect the human right to a healthy environment. However, the next big question is an equally important one – who is in charge of implementing these international environmental accords? Some have argued that in order to force countries to cooperate in the protection of our shared environment, we need a global intergovernmental secretariat. This would take the form of a far-reaching international institution whose sole purpose would be coordinating efforts to improve environmental quality.

For many years there was a collective belief that the United Nations Environment Programme had been tasked with protecting our global network of ecosystems and shared resources. This may have been true in the early stages following its creation at the 1972 Stockholm Conference, but protecting the global environment has proved an impossible task for a small agency with a limited budget and no power to compel states to act in a particular way. The reality is that even though there is increasing interest in strengthening international cooperation to protect the global environment, it is the web of institutions, agencies and programmes dealing with environmental issues at other levels that keeps growing in size and complexity. Regrettably, frequent reports of abrupt climate change events, increasing deforestation and growing levels of pollution in oceans, rivers and lakes make it clear that we have yet to solve these complex global environmental problems. And while there is still no agreement on whether the United Nations Environment Programme should be the agency tasked with protecting the global environment or whether we should create a new global environmental organisation (see Biermann 2000), we must ensure that we focus on collective solutions at the international level rather than only the state, regional or local level – we all share the earth.

To strike an optimistic note, we can point to at least one instance of global environmental cooperation: the Paris Agreement of 2015. The negotiations were led by the executive secretary of the United Nations Framework Convention on Climate Change, Christiana Figueres, and they are an example of what a single intergovernmental secretariat can achieve in global cooperation for environmental protection. The fact that the majority of the world's states were able to agree on the specific tactics and strategies that every state needs to undertake in order to hold increases in the earth's temperature below two degrees centigrade is to be lauded. Even more important is that the agreement secured the support of the world's two biggest state polluters, the US and China. The Secretariat is probably not the global environmental organisation we need right now, but it played a pivotal role at a crucial time.

The debate around whether or not we should have a global environmental organisation may never be settled. However, if we were to establish such an organisation, it would need full and complete cooperation from all states to stand any chance of success. The example of Paris, which built on earlier megaconferences and movements, suggests that international collaboration to protect our environment is on the rise. This offers hope for the future, despite rising tensions in some nations over the nature of climate agreements.

Conclusion

It is clear that we still have a lot of work to do on our shared understanding of what constitutes strong, robust, effective and efficient global environmental governance. We need to better integrate regional and transnational initiatives with domestic policy strategies to tackle environmental problems. This means creating the conditions for a model of governing the environment that is flexible and cuts across different levels, from the local to the global. It is also clear that frameworks based on ideas of global public goods and the global commons are very useful. At the same time they are daunting, since collective action on any scale is an enormous challenge. Finding mechanisms, models and strategies to ensure cooperation across different levels of government, across a broad variety of issue areas and across a range of political and policy actors is, as experience has shown, a difficult process. Today, the world's states have been able to find common ground on certain goals for environmental protection, including the flagship issues of global warming and climate change. The hope is that this trend continues so that we can go on living healthily and happily on Planet Earth.

Chapter 14: Transnational Terrorism

As has been explored in previous chapters, globalisation has brought with it not only unprecedented opportunities and progress in human development but also greater risks. Events in one economy can quickly spill over into others, and the same can be said of social, cultural and political events. One theme we have not yet explored in detail is how terrorism has evolved in the era of globalisation. Rather like the way the dark web piggybacks on the internet, a shadow side of globalisation gives criminal and violent groups the ability to spread their message and widen their operations. This shadow form of globalisation alters not only the organisation, resources and methods of such groups but also their reasoning and motivations. Under these conditions we have seen the proliferation of transnational terrorist groups with globalised agendas whose operations involve many countries or have ramifications that transcend national borders.

What is transnational terrorism?

Terrorism, whether transnational or not, is a highly contested concept. To date there is very little consensus over its definition. Disagreements emerge over the purpose and function of terrorism, its perpetrators and victims, its legitimacy, and the methods and targeting of terrorist actors. Perhaps the most widely accepted attribute of the term 'terrorism' is that it is derogatory and a sign of disapproval: labelling a group as terrorist negatively affects perceptions of its legitimacy and legality and of how it should be addressed. Therefore, how we differentiate a terrorist group from any other group matters. For the purposes of this chapter, terrorism is understood as the use or threat of violence by non-state actors to influence citizens or governments in the pursuit of political or social change. This is not only a semantic or academic debate; the label gives states considerable power to act, and use violence, against a group, and it significantly shapes how a state responds. Wrong definitions can lead to flawed counter-terrorism strategies. Moreover, as states cannot agree on a definition, they argue over the nature and causes of terrorism as well as over who can be called a terrorist. With no agreed international law governing responses, states struggle to work together to remove the threats. According to Acharya (2008), this permits states to act like vigilantes, or cowboys in the Wild West, on the global stage.

Rapoport (2004) divided the history of terrorist groups into four successive waves, each characterised by the global politics of the day: an anarchist wave in the late nineteenth and early twentieth centuries, nationalist and anti-colonial waves that gathered force after the two world wars, a revolutionary leftist wave during the Cold War and, most recently, a religious wave. Today it is argued that a new, or fifth, wave of modern terrorist groups are both products of and challenges to key ideas associated with globalisation, giving terrorism a transnational character. Some terrorist groups in the past had transnational goals, but they lacked the tools of the modern world to widen and deepen their message. Today's transnational terrorism operates in many states, utilising the 'shadow globalisation' flows of people, weapons and information to further its cause. The causes of this new type of terrorism reflect the deepening of human interconnectedness worldwide. Peter Mandaville (2007), writing on one of the first groups to be designated a 'fifth wave' terrorist group, Al-Qaeda, argued that its initial success rested on a global technology, mythology and ideology. The mythology was one of military success against the United States – the spectacular attacks of 9/11 and the subsequent drawing of the US into costly military activities abroad. Combined with the franchise-like nature of its organisation, Al-Qaeda was able to claim responsibility for attacks all over the world by financially, logistically and materially assisting smaller groups that affiliated themselves with it. Such affiliations were possible because Al-Qaeda promoted a global ideology that linked local causes together via an image of world politics presenting Muslims worldwide as victims of Western oppression. These components enabled it to function and replicate on a global scale.

Today's terrorism is therefore transnational in cause, operation and effect. Its essential features ensure its importance within international relations because it represents a new kind of security concern for states: the risk of attack comes not just from other states (war) but from mobile criminal groups that move between states and are dispersed globally (transnational terrorism). States perceive this new wave of terrorism as threatening core elements of their sovereignty – their capacity, legitimacy and autonomy within a particular jurisdiction. This all-encompassing threat has led to a range of responses. These have included the creation of new criminal offences, broadened legal definitions of terrorism, greater powers of detention and arrest, and increased funding for state agencies involved in countering terrorism. In light of the transnational elements, states have also sought closer cross-border cooperation between government agencies, most notably in policing and intelligence, in order to prevent the spread of terrorism. States have also reacted by seeking to prevent or disrupt the emergence of ideas that might support terrorist violence through anti-radicalisation initiatives, sometimes referred to as 'soft measures'. Overseas, these include supporting the development goals of other countries to facilitate their stabilisation and the promotion of moderate voices in politics. Within domestic jurisdictions, 'soft' counter-extremism policies include placing greater emphasis on challenging particular extreme ideas in schools and universities, monitoring citizens for signs of radicalisation and criminalising the ownership and distribution of material that glorifies violence. These forms of intervention bring the state more directly into contact with the everyday lives of citizens, often regardless of whether any law has been broken. Such efforts demonstrate how terrorism is a concern for human security as well as state security because of the way it affects everyday life.

Motivation and goals

Individuals join terrorist groups for a variety of personal and political reasons. They may join because most of their friends have, or because they feel that membership of the organisation brings benefits. For example, the group Islamic State (also known as Daesh, ISIS and ISIL) seeks to establish a new theologically driven state in the Middle East and promises fighters from all over the world better living conditions and pay than they might achieve in their home countries. The ability to travel across borders more freely because of globalisation, and the economic resources available to Islamic State in the form of oil, make this possible. Individuals may also join a terrorist organisation because they strongly empathise and identify with the group, even if they are not directly affected by its cause. Global online media can facilitate this identification by giving a cause worldwide appeal. It is important to note that what motivates individuals to join and remain in transnational terrorist organisations is not necessarily the same as the wider goals of those groups.

A key way of understanding why individuals join and remain part of transnational terrorist groups is radicalisation theory. Radicalisation is understood to be ‘everything that happens before the bomb goes off’ (Neumann 2013). It suggests that there are pathways to becoming a radical or terrorist and that it is a dynamic and very individualised process. Because of its individual nature, there is no single terrorist profile in today’s transnational world, even in particular countries. Terrorists may be female, married, old, rich, have children – or not. Attempts to profile behaviours have therefore not been successful. The New York Police Department produced one of the early guides for ‘spotting’ radicalisation, which led to some seemingly bizarre characteristics (inability to grow pot plants, enjoying camping out) being identified as ‘signs’ of radicalisation (Silber and Bhatt 2007). The signs were problematic because they were so broad in their scope that almost everyone was potentially a suspect. What radicalisation research does show is that a quest for identity and greater significance in the world together with empathy for those who are suffering makes an individual more vulnerable to terrorist messages that appear to offer solutions (Silke 2008). Research also shows that an individual with friends or family involved with terrorism or supportive of terrorist views is more likely to join a terrorist organisation than someone with no connections at all (Wiktorowicz 2006). As a result, transnational lone-wolf actors are extremely rare despite their high profile and the media attention they receive.

At the group level, goals are also transnational. This is best illustrated by looking at Al-Qaeda and Islamic State. These groups utilise a global religious language to create an understanding of global politics that divides the world in two. On one side is the world of Islam. This is a place of goodness, where religious laws are upheld and Muslims are not oppressed. On the other side is the world of war where Muslims are oppressed by unjust and tyrannical leaders. They argue that, because of the global connection Muslims have with each other as a community of believers (Umma), all Muslims should join them in their fight against the ‘Oppressors’, regardless of where they live. They also argue that because the ‘Oppressors’ are everywhere and attack Muslims everywhere, their cause and fight is global. They refer to the ‘near enemy’ (local governments) and the ‘far enemy’ (governments of global powers) as possible aggressors against whom a member of their organisation might fight. This enables them to tap into local political grievances and give them a global religious veneer, or to highlight global incidents and claim that they are related to their local cause. What is notable is the degree to which such an understanding of the world replicates (or is replicated by) some Western governments’ thinking that also sees the world as ‘either with us or against us’.

It is important to note that the logic of worldwide oppression that shapes Islamic State and Al-Qaeda thinking is not representative of the bulk of the world's Muslim population and is widely condemned by Islamic scholars. It is also important to note that while most coverage of terrorist events focuses on high-profile attacks in Western states, the majority of those killed in terrorist attacks worldwide since 2001 have been Muslims living in Muslim-majority countries. This is because of a range of factors. First, it is easier to strike less well-protected and defended sites in poorer Muslim-majority countries. Second, ideologically, Muslims who resist jihadist violence are demonised as unbelievers by these groups and therefore become 'enemies' who can be killed. Finally, violent actions are often designed to alter relations between governments and citizens in the Muslim world and to improve the strategic position of the terrorist group (Mustafa and Brown 2010).

Activities

Despite the consequences of transnational terrorism being felt primarily in Muslim-majority countries, fear and awareness of the threats are felt strongly in Europe and North America. Terrorism is a 'communicative act', by which we mean that it seeks to send a message going beyond the actual destruction caused to life and property. That message is addressed to three groups of people. The first is civilians, whether local or global, who witness the events. The second is governments, which are called upon to respond to the terrorist violence. The third is potential supporters, who may be attracted to join by the terrorist actions. We will now look at each of these three groups in turn.

Transnational terrorist groups focus on the location of attacks as much as, if not more than, who is attacked in order to generate a wide message. The importance of location is demonstrated by the attacks in Paris in 2015 by the Islamic State group. Paris is one of the most visited cities in the world and the group targeted ‘everyday’ places – bars, a football stadium and a rock concert. This signalled the idea that anyone and anywhere is a target, increasing fear of and publicity for the group’s actions. This targeting strategy is in contrast to that of groups which may act across borders – such as the Tehrik-e-Taliban, working in both Afghanistan and Pakistan, or Boko Haram, operating in Nigeria and neighbouring countries – but for which the local political scene remains key. With the Tehrik-e-Taliban, their actions, while linked to a global cause of ‘jihad’, are local. They target beauty shops, police stations and market squares because they see these as opposed to the way of life they want to establish in their lands. Boko Haram too targets villages across different countries’ borders and punishes those who don’t conform to their new laws, which are about ‘everyday living’ even as they claim allegiance to a wider global political cause. However, this is not to say these groups do not target individuals. The Tehrik-e-Taliban tried to kill the activist Malala Yousafzai because of her support for girls’ education and Boko Haram kidnapped hundreds of Christian schoolgirls in Northern Nigeria. Schools are targets because they are seen to promote state agendas, and schoolgirls are targets because these groups wish girls to have an Islamic education that focuses exclusively on domestic responsibilities and learning the Quran. Malala Yousafzai has gone on to campaign against this understanding of Islamic education and promote women’s schooling the world over, winning a Nobel Peace Prize for her efforts. In addition, the Nigerian military was forced to take a more active stance against Boko Haram due to global outrage over the kidnappings. Thus, while these are ‘local’ causes and local targets, they are global and transnational in their wider effects.

The second feature of transnational terrorism is that activities are sometimes designed to provoke states into action as well as to generate fear in populations. Attacks are frequently symbolic in purpose and often have a high casualty rate for maximum shock value. It was inconceivable, for example, that the United States would not respond to the 9/11 attacks or that France would not react to the Paris attacks. Here, attacks are designed to provoke states into doing something to prove they are protecting civilians, even when that action may undermine the values they live by or prove so costly that popular support for the government is eroded. This strategy was first formulated by Che Guevara, a leader of the revolutionary communist movement in Cuba against the American-sponsored authoritarian Batista government. The approach is known as 'focoism', whereby terrorists imagine themselves as the 'vanguard' of popular revolutions. The Uyghur ethno-separatist groups (which now have links to regional Islamist terrorism) operating in China's north-western provinces have been applying this strategy for over a decade. Their attacks are seen to have provoked ever-greater Chinese crackdowns on the civil liberties of people living in the affected provinces, in the name of providing security and demonstrating the strength of the central government. Yet the government has failed to reduce the number or severity of the attacks, and also failed to stop people joining the separatists. Some have argued that European counter-terrorism policies are more reactive than effective because they follow the same pattern of government suppression of human rights in the name of security as the Chinese example. The disproportionate impact of counter-terrorism legislation on Muslim communities across Europe is, critics argue, providing propaganda for Islamist groups' recruitment campaigns.

The expectation of many terrorist groups is that, in time, ever greater numbers will realise they are oppressed and join resistance groups, or that, with sufficient coverage, the international community will come to support their cause. The example of Palestine illustrates this well: despite decades of political struggle to establish Palestinian independence from Israel – a struggle that has included terrorist tactics – the Palestinian cause remains relatively popular domestically and internationally. On the other hand, rather than creating something (an independent Palestine), this tactic may also be used to destroy something. Here, we can point to the 9/11 attacks and the many years of terrorism that followed as bait to lure the United States into engagement in the Middle East as a means of undermining its political and economic stability. By this logic, first Al-Qaeda and later the Islamic State group have pursued strategies that aim to grind down the global power and image of the United States so that it may no longer be willing or able to interfere in Muslim lands.

In the past, countries have managed to resist reacting to these sorts of violent action by terrorists. Consider Italy's reaction to the kidnapping and assassination of the popular former prime minister Aldo Moro by the far-left Red Brigades: during the investigation of Moro's kidnapping, General Carlo Alberto Dalla Chiesa reportedly responded to a member of the security services who suggested torturing a suspected Brigade member, 'Italy can survive the loss of Aldo Moro. It would not survive the introduction of torture' (Dershowitz 2003, 134). However, with public and media scrutiny operating at a speed and level not previously encountered, the ability of governments, especially democratically elected ones, to resist pressure is significantly reduced. The crossover with popular culture is interesting too, with military ethicists reporting a 'Jack Bauer effect' – referring to the tendency of this character in the TV series '24' to torture individuals as time runs out to stop a terrorist attack. Bauer's tactics often reflect (albeit in dramatised form) the enhanced interrogation techniques that many governments have used in response to terrorism. Pressure is also placed on governments by allies and neighbours demanding support and action. For example, there has been a considerable chilling of relations between Thailand and Malaysia since 2004 because Thai authorities believe Malaysia to be turning a blind eye to Thai Muslim separatists operating across the border.

Finally, the third purpose of terrorist violence is to recruit members and to reinforce loyalty among existing supporters. Extremely violent or highly technical attacks demonstrate the capability and will of the group carrying out the attack, and its overall support. Support for Islamic State comes from citizens of nations in every region because its attacks are dramatic and spectacular, raising the group's profile and demonstrating its military prowess. Mandaville (2007) calls this the myth of success. Islamic State videos and propaganda frequently assert the weakness of the opposition, as demonstrated by their deaths. The videos dehumanise their opposition, treating them like cattle or like characters in first-person shooter computer games. The use of videos that mimic computer game imagery is supplemented by Islamic State creating its own 'skins' or 'maps' for popular computer games. In its version of Grand Theft Auto, the city is Baghdad and the people opposing you are the police and the military. As one British supporter said of life in Syria under Islamic State, 'it's better than that game, Call of Duty'. Members say they will 'respawn in Jannah' – 'respawn' being a gamer term for being reborn, and 'Jannah' being paradise in Islam. This is clearly designed to recruit and sustain membership by linking to Western masculine experiences (Kang 2014).

Organisation and resources

Managing such a transnational organisation and connecting to multiple locations and identities requires considerable logistical and organisational capability. The practice of tapping into the local and the global can be described as a ‘plug and play’ approach. Transnational terrorist organisations not only have an ideology that ‘plugs’ into local grievances, their organisational structures and resources also operate in this manner.

One of the main claims about transnational terrorist groups is that they are not hierarchical in structure but rather cell-like, even anarchical, lacking a formal leader. This led Marc Sageman (2008) to talk of a 'leaderless jihad'. He characterised Al-Qaeda as a loose-knit, amorphous organisation – a position hotly contested by Bruce Hoffman (2006). Hoffman seems to have lost the argument, as terrorist organisations are becoming increasingly decentralised as they take advantage of new technologies, forms of communication and other aspects of globalisation. Consequently, communicating with transnational terrorist groups can be difficult. Negotiators cannot be sure that the people they are talking to are representative of the group or have sufficient leverage to influence other members, and splinter groups are more likely under these conditions. There are risks and vulnerabilities for terrorist organisations in this approach, notably in relation to information and operational security, coordination and resilience. But there are also advantages in terms of longevity: the lack of central leadership gives such groups a greater scale and scope of operations and makes opposing or destroying them very difficult.

Rather than focusing on individuals, it is more helpful to focus on processes. One of the key processes within transnational terrorist organisations is the acquisition and distribution of money and equipment. Here we see the connections to transnational crime – particularly the smuggling of human organs, drugs and guns, and human trafficking. Criminals can provide terrorist groups with whatever they require, provided the price is right, and terrorists will engage in or tolerate criminal activities when it serves their needs. Failed states offer fertile ground for possible and profitable connections between terrorism and criminality. The US government's National Strategy for Combating Terrorism (2006) contends that terrorists exploit failed states, using them to 'plan, organize, train, and prepare for operations'. However, some scholars disagree, noting that few international terrorists emerge from failed states (Simons and Tucker 2007) and that most failed or failing states are not predisposed to exporting terrorism (Coggins 2015) – though they generate significant security problems for their own citizens and neighbouring states. It is worth noting that weakly governed states, rather than failing ones, are also implicated. Pakistan is one example – and was where Al-Qaeda's leader Osama bin Laden was living when he was killed by the US military in a covert operation in 2011. This occurred, incidentally, without Pakistan being informed: the United States could not rule out that elements of Pakistan's government, which is often accused of links to terrorism, knew he was there.

Countering transnational terrorism

The consequence of terrorism operating transnationally is that states have been presented with a number of intimately connected decisions about when and how to intervene. The first set of decisions concerns where to intervene. Some Western states have been tempted to intervene internationally in order to prevent the emergence of terrorist groups, or to minimise the efficacy of existing groups, in 'frontline' states. Such intervention comes in the form of international aid, military advice and training, and financial and military support to governments. This has entailed the risk of supporting undemocratic governments and engaging in militarised activities in contested spaces. The use of drones by the United States in Pakistan is one instance that has given rise to considerable controversy. First, the transnational element potentially undermines Pakistani sovereignty. Second, it imposes a state of fear on ordinary civilians, who find themselves under threat of strikes termed 'surgical' or 'targeted' by those operating them but which feel, and are perceived as, random by civilians in these areas (Coll 2014). Such operations can actually help terrorist groups by giving them a narrative to spin their agenda around, reinforcing local fears of an aggressive Western intervention in their societies that must be opposed.

A parallel approach has been to intervene at home by increasing state powers to minimise the effects and the capability of terrorist groups to attack in Western societies. The consequence, however, whether at home or overseas, has been to reduce civil liberties and restrict human rights. It is presumed that there is a necessary balance between human rights and human security and that protecting citizens – their security – is the first duty of government. However, a counter-argument is that failures to uphold these basic principles reward terrorist behaviour by treating it as 'outside' usual criminal processes, while at the same time punishing law-abiding citizens. Indeed, the human experience of counter-terrorism and counter-radicalisation policies and processes has been overwhelmingly negative. We can see this in the crackdown on protesters in Egypt, including journalists and civil rights groups, in the name of fighting terrorism. Human Rights Watch (2015) has reported that Egypt is undergoing the most serious human rights crisis in its modern history, with the government invoking national security to muzzle nearly all dissent. Egypt has attempted to justify these policies in light of transnational terrorist actions and the existence of opposition groups that appear to have overseas links with terrorist organisations. Similar patterns have been seen in Turkey, especially following the failed coup attempt of 2016.

In Western nations, state attempts to impose security have often disproportionately affected certain groups – especially Muslims. The transnational element is perhaps most keenly felt at airports. Blackwood, Hopkins and Reicher (2013) found a 'prototypical' Muslim story of travelling through airports, characterised by discrimination, humiliation and fear arising from the actions of airport and border authorities. The ability of states to use violence so that a 'state of fear' is produced for (a section of) a population, even in the name of countering terrorism, has led some to call for the definition of a terrorist actor to include states (Jackson 2011, Blakeley and Raphael 2016). Researchers in the field of critical terrorism studies advocate this approach, arguing that the only significant difference between terrorism by state and by non-state actors is the agent carrying out the act of violence. For example, when the Israeli military attacks a Palestinian group this is commonly seen as 'defence' or 'national security'. But when a Palestinian group attacks an Israeli troop convoy, which it perceives as invaders or occupiers, it is commonly deemed 'terrorist'. If we remove the binary of state and non-state actors, we might see this instead as a conflict between two opposing forces, each claiming legitimate aims and objectives. Because examples such as this are complex and emotive, there is often a failure to fully examine state actions – actions that critical scholars identify as a significant cause of human insecurity worldwide. It is also important to look beyond the state toward civil society and everyday acts of resistance.

Conclusion

Terrorism, and terrorists, are transnational in three ways: their goals, their actions and their organisational form. However, we must be cautious before assuming that this is the new, and only, form of terrorism. Not all terrorism is transnational. Terrorist groups like the Irish Republican Army (IRA) and Euskadi Ta Askatasuna (ETA) operated principally at the national level, targeting just one state. States too have shown themselves capable of inflicting forms of terrorism. Furthermore, while examples of transnational terrorism since 2001 may appear to be mostly religiously inspired, one cannot conclude that there is anything inevitable about this, or that Islam specifically is the significant factor. Rather, in this instance Islam has provided a framework for some marginal groups to construct a convincing worldwide counter-narrative to a world dominated by Western political, social and economic models. For that reason, it is perhaps no surprise that Islamist terrorism, over and above other types of terrorism, has become a sustained issue of concern in international relations. An important concluding note is that countering terrorism does not fall exclusively to the state: civil society and everyday acts by ordinary people also have a role. These can include popular culture, inter-faith dialogue and moments of solidarity that break down the oppositional, binary worldview that dominates transnational terrorist ideology. Ultimately, terrorist groups are products of their time and, just like us, live in a globalised world. They are both shaped by globalisation and contribute to it by their actions.

Chapter 13: Voices of The People

The people referred to in this chapter are those citizens who want more say in what their rulers do and are not content with current political arrangements – even in the context of an existing democracy. Popular protests have been an issue in international relations for a very long time. An early example was the French Revolution of 1789 when the old order was overturned and replaced, at least for a while, with a popular, revolutionary government. Today, popular movements are not only growing in frequency but also in importance due to how they shape international relations. When considered alongside the availability of instant communication via the internet, as explored in the previous chapter, the phenomenon of ordinary people mobilising to bring about meaningful – and sometimes abrupt – political change raises important questions for IR about how change occurs at the domestic level and the wider implications of that change at regional and global levels.

Change in a globalising world

In today's world there are numerous examples of popular demand for political change. They generally arise at a time when politicians seem unable to deliver on their promises. Take, for example, the year 2008 – described by Amartya Sen (2009) as 'a year of crises'. First, there was a food crisis that hit poorer consumers hard, especially across African states, as the staples of their diet often became unaffordable. Second, there was a spike in oil prices that raised the cost of fuel and petroleum products globally. Finally, in the autumn of 2008, there was an economic crisis in the United States that quickly spread, compounding the prior issues, and the global economy faltered. What does economic downturn have to do with the 'voices of the people'? The answer lies in the newly interconnected nature of our world.

For the bulk of the world's population, daily life is characterised by easy and speedy communications. Of course, some areas of the developing world still suffer from poverty and poor infrastructure and so lack the benefits of global communications. That said, it is not uncommon to find ever-cheaper mobile phones proliferating in the poorest regions of the world, such as sub-Saharan Africa. Improved communications are a fundamental aspect of a wider phenomenon: globalisation. Globalisation enables us, via the communications revolution, to learn quickly and consistently about events all over the world, almost as soon as they happen. Globalisation has in a real sense shrunk the world and made it interactive. When something happens in one country, it can quickly affect others. This may be an economic matter, such as the global economic downturn referred to above, but it may equally be a security matter, such as terrorism.

The era of deepening and sustained globalisation coincides with global events following the end of the Cold War. When the Soviet Union dissolved in the early 1990s it gave way to a range of newly independent post-communist states that redrew the map from Central Europe to Central Asia. Fifteen new states were created, including Russia. The dissolution also initiated a dynamic phase of globalisation which affected our understanding of international relations in a number of ways. First, the end of the Cold War threw the study of international relations into a state of flux. Soon after the Cold War ended, there was talk of a new international order. This reflected a widespread optimism that there could be improved international co-operation and a fresh commitment to strengthening key international organisations, especially the United Nations. The aim was to achieve various goals: better, more equitable development; reduced gender inequalities; the defusing of armed conflicts; fewer human rights abuses; and action against environmental degradation and destruction. In short, to manage multiple global interdependencies it would be necessary to improve processes of bargaining, negotiation and consensus-seeking, involving both states and various non-state actors, including the United Nations.

It soon became clear, however, that there was a lack of ideas as to how the desired international improvements might be achieved. During the 1990s there were serious outbreaks of international conflict. Many were religious, ethnic or nationalist conflicts that spilled over into neighbouring states. When these events occurred, local or national issues quickly spiralled into regional or international crises. Examples include conflicts in Africa – in Burundi, Rwanda and Somalia – as well as in Haiti, and in Europe, where Yugoslavia tore itself apart during the 1990s, eventually splitting into seven states. All of these led to serious, and in many cases still unresolved, humanitarian crises requiring external intervention. These conflicts showed how difficult it would prove to move from the problems of the old international order that had characterised the Cold War to a new era marked by international peace, prosperity and cooperation.

‘Colour’ and ‘umbrella’ revolutions

Between 2000 and 2005, a series of popular protests, which later became known as 'colour revolutions', swept away authoritarian and semi-authoritarian regimes in Serbia, Georgia, Kyrgyzstan and Ukraine. The common trigger for these revolutions was an attempt by leaders to falsify election results in their favour. Via various non-violent regime-change strategies, the protests sought to change political configurations in a democratic direction. The 'Orange Revolution' in Ukraine was archetypal. In 2004–2005, the Orange Revolution – so called because of the colour worn by many protesters to show their solidarity – helped bring to power a pro-Western president, Viktor Yushchenko, who defeated his rival Viktor Yanukovych in a repeat run-off election. Protesters claimed that the integrity of the initial election, which Yanukovych 'won', had been undermined by massive corruption, voter intimidation and direct electoral fraud. Subsequently, thousands of protesters demonstrated daily in events characterised by widespread civil disobedience and labour strikes.

Events in Ukraine echoed wider examples of vote rigging, voter intimidation and electoral irregularities that characterised many countries in Central and Eastern Europe following the collapse of Communist governments in the 1990s. In addition, the colour revolutions demonstrated the increasing volatility of international relations, the spread of ideas and the associated demands by citizens for political and economic change. In some countries, the colour revolution swept away the authoritarian or semi-authoritarian regime. In others, it did not. Thus, the issue of the ‘voices of the people’ is not just about success but also failure and the causes of failure. Today’s political and economic protests tend to have both longevity and wide ramifications. At the very least they change the relationship between ruled and rulers. If harnessed fully they can lead to profound political upheaval.

In other Central and Eastern European states, attempts to replicate the successful strategies of the earlier colour revolutions – peaceful protests, public demands for democratisation, election monitoring and post-election mass protests to contest fraudulent results – failed. Moreover, in states where no serious attempt to launch a colour revolution was made, governments took action to head off the possibility of regime change by adopting policies sometimes referred to as 'anti-colour insurance'. For example, rulers in Russia, Belarus and Azerbaijan attacked local, independent civil society and political activists as 'foreign agents', unfairly limited electoral competition and portrayed colour revolution ideas and techniques as subversive and alien to the country's culture and traditions. Thus, to understand why some protests succeeded and others failed, we need to take into account the ability of authoritarian regimes to prevent democratisation and significant economic reform. In practice, this meant regimes studying the democracy-promotion techniques at the heart of the protests and directly combating them. As activists' choices of strategy varied across the protesting nations, rulers' responses also differed according to the perceived seriousness of the threat to regime survival and the regime's strength relative to the opposition it faced.

Although not connected by geography, time or culture to the colour revolutions, Hong Kong's 'Umbrella Revolution' (also known as 'Occupy Central' and the 'Umbrella Movement') in 2014 similarly involved popular protests against authoritarian rule and a lack of democracy. The name 'umbrella' refers to the fact that many activists held umbrellas as a symbol of protest during the events. Hong Kong is a semi-autonomous territory and former British colony. It passed from British to Chinese control in 1997, and part of the deal was that China would allow at least a measure of democracy to continue. China, of course, is ruled by a Communist government and is a one-party state that strictly limits political competition. Protesters believed that the Chinese government was going back on an agreement to allow Hong Kong open elections and was progressively governing Hong Kong more like mainland China. There were also underlying economic issues, with Hong Kong's citizens experiencing some of the highest levels of wealth and income inequality in the world. For several weeks, Hong Kong's ultra-modern business centre was transformed into a conflict zone, with up to 200,000 protesters confronted by police in riot gear. The protests eventually fizzled out: the protesters not only failed to persuade the government of China to accede to their demands but also saw support dwindle as people grew tired of the disruption to their lives. As in some Central and Eastern European countries, this highlights the ability of entrenched rulers to stay in power without making significant concessions. Yet it is also clear that the protests affected how many Hong Kong citizens view their political future. This may prove significant in years to come, as a large proportion of the protesters were students and young people.

Although separated by a decade, the colour and ‘Umbrella’ revolutions were both indicative of a wide sense of disconnection from power. When this is matched by an ability for people to use their voice to influence political and economic outcomes, mass action can quickly follow. Here, we can see the double-edged impact of globalisation at work. On the one hand, the end of the Cold War unleashed the forces of democratisation and economic reform that many authoritarian elites did their best to prevent – sometimes with success. On the other hand, ideas set free by the end of the Cold War found resonance in diverse cultural contexts and expression in the form of street protests that reflected the power of the voice of the people. In fact, so extensive was the spread of such thinking that even established democracies in the West were affected.

The Occupy movement

The United States is a country that allows its citizens full participation in politics – a place where the people determine the direction of the nation via their mass participation in elections. Such slogans as ‘land of the free’ and ‘anyone can be president’ come to mind. But, like many similar political regimes, it risks degenerating into a system that favours the rich. In the US today, the top one per cent of people receive 21 per cent of national income. Over time, this proportion has been changing for the worse: in the 1970s the top one per cent’s income share was ‘only’ about 10 per cent. The issue became acute following the 2008 financial crisis, which laid bare the degree of inequality in American society and the lack of influence over public policy felt by the majority of the population (see Piketty 2014). Two million Americans lost their homes in the so-called ‘sub-prime mortgage’ collapse, which then spiralled into a much bigger crisis affecting the entire financial system. The US government bailed out some large corporations and banks to the tune of hundreds of billions of dollars to prevent the whole financial system from collapsing. This was accompanied by austerity measures that eroded benefits and public services as the government had less money available due to the economic crash. This general pattern was also seen in other liberal economies, including the United Kingdom. Hence, a picture emerged in some circles that the government had given money to the richest and taken money from the poorest. The Occupy movement was a diffuse and diverse reaction to this perception. It was a reaction against the ineffectiveness of the traditional tools of democratic politics and government such as political parties, elections and lobbying.

The Occupy movement protested against Wall Street, home of the US financial industry, as a symbol of ‘unearned’ privilege and wealth – even though it was politicians who were devising and implementing the austerity cuts. The movement began in Zuccotti Park, near Wall Street, on 17 September 2011. Critics noted the activists’ lack of a clear set of demands and their tendency simply to highlight grievances. However, a clear set of values did emerge:

  • Solidarity – society’s institutions should aim to maximise mutual benefits.
  • Diversity – diverse solutions to pressing problems.
  • Equity – in terms of solutions and distribution.
  • Control – especially self-management, freedom and autonomy.

Following the emergence of the Occupy movement, there were hundreds of similar occupations all over the world – though mainly in the United States and Western Europe. Years later, it remains clear that the problems that prompted these protests have not gone away. However, much of the energy has dissipated from the movement. This is partly because the protesters could not develop and articulate a common platform advancing a clear pathway to action (something that would have been the first priority of a political party or revolutionary movement). Instead, they produced a slogan, ‘We are the 99%’, highlighting the growth of inequality since the 1970s that disproportionately affects women, young people and minorities. The Occupy movement splintered following the decision of the mayor of New York to break up the protest in November 2011. Without leaders or specific demands, it turned into an unfocused protest against everything that was ‘wrong’ with the world.

While the Occupy movement’s social critique resonates with many people, the question remains whether it offers a practical and achievable means to accomplish its goals. How best to mobilise people to alleviate poverty? Many would argue that action aimed at poverty alleviation – for example, building public housing projects or preventing cuts to food stamps – has to involve mainstream politics. Critics claim that the new generation of activists may have forgotten, abandoned or overlooked the progressive ideal of a reform-minded government raising up the poor and mitigating discrimination. What is clear is that the Occupy movement has given voice to concerns about systemic divisions in the economic and social structure of the United States and other Western states. These concerns have touched a nerve that continues to resonate – much like the aftermath of the Umbrella Revolution in Hong Kong. And, also like Hong Kong, the adverse reaction of certain political leaders and senior police officers suggested to some the hypocrisy of those with power. Post-2008, it has become common for politicians seeking election in the United States to profess their support for ‘main street’ rather than Wall Street as a means of rallying popular support.

The Arab Spring

The Arab Spring is a collective term for a series of political protests that began in late 2010 in Tunisia. Over the next few years, a number of countries saw their political situation greatly affected as protests broke out across the Middle East and North Africa against the corrupt and authoritarian leaders that were typical of the region. While Arab peoples live in very different states, the protesters were united by a feeling of alienation from political power. Yet it is unclear whether the Arab Spring events will lead to more democracy in the region, as there has been no uniformity in what subsequently occurred. In some cases, old dictators remain in power, while in others new leaders acquired power via the ballot box. In Egypt, things are more complex still as there have been several changes of power. What is clear is that rebellions occurred that have reshaped the region. Libya’s Gaddafi regime was overthrown by rebels aided by international intervention in the form of a NATO bombing campaign. There were also major political upheavals in Syria and Yemen and smaller, though still noteworthy, expressions of dissent in other states such as Bahrain, Algeria and Morocco.

The events of the Arab Spring highlighted the importance of stability, security and regime longevity. They also directed attention to the prospects for democratisation and economic and social improvements for ‘ordinary’ citizens. The pressing question is whether governments can deal with the challenge of fast-growing populations demanding more jobs and improved welfare. This is almost certainly the key concern of the tens of thousands of people in the Middle East and North Africa who were active and vocal in the Arab Spring protests. Such people – like their counterparts elsewhere in the world – expect political change that improves their lives. However, while Arab peoples have been lumped together in accounts of the Arab Spring due to their apparently common political and economic plight, it is important to note that widespread divisions characterise the region. These involve conflict between different religious expressions, including intra-Muslim struggles (Iraq, Syria, Bahrain) and Muslim–Christian tensions (Tunisia, Egypt). Despite the coming together of people of all faiths in the Arab Spring protests, sectarian tensions and conflict have followed. The stand-out case here is Syria, which in 2011 spiralled into a deeply polarising sectarian conflict that has since been fuelled by regional (Iran–Saudi Arabia) and also global (US–Russia) rivalries. The conflict has caused the deaths of hundreds of thousands and the displacement of millions. It represents the extreme edge of what was unleashed by the Arab Spring.

Not since the end of communism a generation ago has the role of religion in democratisation and post-authoritarian political arrangements been so centrally and consistently to the fore. The Middle East and North Africa are regions often characterised as places where religion – especially Islam – is a key component of demands for political and social change. However, it is not obvious what the role of religion has been in the Arab Spring. Across the Middle East and North Africa, identifiable religious actors have been, and continue to be, conspicuous in anti-authoritarian and pro-democratisation movements. But there appears to be no clear pattern in terms of outcomes related to democratisation. What we do know is that rebellions in Egypt and Tunisia unseated incumbent governments and initially ushered in recognisably democratic elections which, in both cases, Islamists won. Yet, we saw an apparent transition to a recognisably democratic regime only in Tunisia. In Egypt, the primary struggle was between democrats and non-democrats. Over time, this shifted to a fight between secularists and the Islamists who had triumphed in a popular election. As things became polarised the military felt emboldened to crack down on the Islamists, who were perceived by the secularists as following a more extreme version of political Islam than was tolerable for Egyptian society at large. Eventually, the elected president Mohamed Morsi was ousted from power in a coup led by military chief Abdel Fattah el-Sisi. Sisi was subsequently elected president in June 2014, receiving a popular mandate at the ballot box.

Overall, evidence suggests that the likelihood of the Arab countries of the Middle East and North Africa taking a clear path to democratisation is currently poor and the chances of widespread democratic consolidation still worse. In the midst of this picture is the serious proliferation of transnational terrorism that is explored in the next chapter. The unwelcome but most likely outcome is a gradual slide into entrenched and long-term political instability, culminating in some cases in state failure, with serious ramifications for regional and international stability. The plight of Syria is a worrying case in point. In this context, the voices of the people of the Arab Spring can be seen to have had a very mixed set of results.

Conclusion

The aim of the chapter was to show how, in various parts of the world, the voices of ordinary people – intensified and encouraged by globalisation and the attendant communications revolution – challenged the status quo. In some cases this resulted in significant regime change; in others, rulers were able to hang on to power. While the picture may appear more gloomy than cheerful in terms of evidence of change, it is important to understand that none of the protests covered in this chapter is definitively concluded. Unlike earlier revolutions – for example, those in France, Russia and China, all of which ushered in definitive regime changes – none of the examples covered in this chapter amount to clear-cut jumps from one political system to another. What we can observe is the connectedness and shared ideas that collectively characterise today’s popular protests. We can expect to see more such protests in the years to come as people across the world raise their voices and demand change.

Chapter 12: Connectivity, Communications and Technology

In the words of Rucker (1983, 108) ‘the human race is a single vast tapestry, linked by our shared food and air’. In this sense, it is correct that the entire human race is connected through the material world. It is wrong, however, to assume that such connections create any kind of unity. In international relations, when we think of humanity, we do not think of a single, homogeneous, peaceful body, but of a number of distinct factions competing, coercing and cooperating to achieve their own end goals. These factions may be groupings such as ethnic, racial or religious divisions or they may be nation-states. They can also be anywhere on a scale from very large to very small. Importantly, however, none of these groupings exist independently of the individual humans within them. The individual is the basic unit at which humanity exists. In this way, individuals are symbiotic with the wider system, with each playing a role in shaping and influencing the other. Humanity consists not only of human bodies, but also of the ideas, the convictions, and the wills contained within human minds. Given this definition, what does it mean for humanity to be connected? In a physical sense, a disconnection has always been present. Each human mind is contained within a human body that exists separately from all others. It is, however, on the metaphysical plane – that of ideas, convictions, and wills – that humanity can be connected. The uniting of many individuals for a common cause, for example, represents a connection of minds leading to action. Such unity can of course arise by complete chance or through non-conscious actions. However, more powerful connections arise when the unity stems from conscious interaction. Central to the concept of connectivity, therefore, is the ability to communicate with others, which we do more and more today via digital means.

The internet

The internet is a collection of connected computer networks, linking tens of billions of devices across the globe. These include servers, personal computers, mobile telephones and video game consoles. Increasingly, other devices are also being connected to the internet, such as cars and domestic appliances. Devices connected to the internet are connected to each other through network links. These links can be either physical cables or wireless connections. Physical cables come in an array of shapes and sizes, ranging from small cables used to directly link two computers together, to large undersea cables connecting continents. Wireless connections, though not visible, work on similar scales, from Wi-Fi networks in the home to links to satellites in space. Communications on the internet may traverse any combination of these network links, and control over those links has become a hotly contested topic in international relations.
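
To make the abstract idea of ‘devices connected through network links’ concrete, the short Python sketch below has two ‘devices’ – here simply two parts of one program, with the machine’s loopback address standing in for a physical link – exchange a message. The address, port number and message are illustrative assumptions only.

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 50007  # loopback address and an arbitrary unprivileged port

    def server():
        # One 'device' waits for a connection, as a web or mail server would.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(1024)           # read up to 1024 bytes from the link
                conn.sendall(b"Reply: " + data)  # send a response back over the same link

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening

    # The other 'device' connects and sends a message across the link.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"Hello from another device")
        print(cli.recv(1024).decode())  # prints: Reply: Hello from another device

The same pattern, scaled up across cables, Wi-Fi and satellite links, is what connects the billions of devices described above.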

Though often used synonymously, the internet is not the same as the ‘world wide web’ (www). The web is just one of many services operating on the internet, accessed through a web browser to display documents containing text, images and other media. Examples of other services on the internet include email, voice and video communications and online gaming. The distinction between the internet and the web is important as conflating technological concepts can have severe repercussions in the area of laws and regulations where precise wording is paramount. Throughout this chapter, the internet should be envisaged as the whole gamut of connected digital devices and services. When individual devices or services are discussed in detail, it will be made explicitly clear which device or service is being talked about.

Digital commerce

Commerce is a cornerstone of human interaction. Throughout history the trade of goods and services has provided opportunities for humans to connect and necessitated methods of communication. Bartering, agreements and contracts have been made possible through verbal, written and visual means. With the exponential growth of the internet, it was inevitable that merchants and private traders would adopt this channel for commercial purposes. The shift of commerce from offline to online has repercussions for human interaction and communication. In the modern economy, commerce involves a long supply chain and multiple agents that affect the production and transport of goods. To take a product from initial idea to final purchase requires first raw materials, then a manufacturer, a distributor, a seller and a customer (with possibly a marketer or two thrown in for good measure). Each step in this process requires individual human beings interacting with one another, especially at the point of sale. Through digital commerce, however, many of the middlemen in the process can be eliminated. Customers can purchase goods directly from the manufacturer with a few clicks or taps without ever (directly) interacting with another human being. To buy a television, for example, would previously have required a person visiting a retail outlet such as an electronics store, speaking with a sales representative and making the purchase. The retail store would in turn have procured the television from a distributor, who would have acquired it from the manufacturer. Thanks to the internet, however, a prospective buyer can now simply visit the manufacturer’s web page, purchase the television and have it delivered to their door, effectively cutting out most of the traditional commerce chain and with very limited interpersonal communication.

In some ways this method of conducting commercial activities is reminiscent of trade before the advent of mass production. From the days of the ancient Athenians gathered in the Agora, a central square for meetings and business, commerce was typically a highly personal affair. The public marketplace as a central site for commerce has now been re-enabled by the internet through websites like Amazon and eBay. Here, manufacturers and producers can reach customers directly, without requiring an established long chain of suppliers and agents. Though Amazon may be analogous to the Agora, perhaps a better example of how digital commerce affects international relations is the Silk Road. In ancient times, the Silk Road was a 6,000-kilometre trade route connecting Europe and Asia. It not only facilitated commercial trade but also enabled the flow of ideas, and even religions, between cultures. It was in effect a widely dispersed network of traders and outposts through which flowed both goods and information. Importantly, these flows were embodied through personal interaction between those who travelled along the Silk Road.

The ancient Silk Road shares its name with a modern digital counterpart. First established in 2011, Silk Road was an online marketplace that could be accessed and operated using software provided by the ‘Tor network’ in the form of a special web browser that preserves users’ anonymity. This allowed shoppers to make purchases without revealing any personal information, including bank card details, as payments were made in bitcoin – a decentralised digital currency. Vendors operated under pseudonyms. The anonymity aspects of the transaction process differentiate the modern Silk Road from the ancient one, exemplifying the depersonalisation of commerce in the internet era. Silk Road and Tor are also emblematic of the growth of a part of the internet called the ‘dark web’ that can only be accessed by specific software, or specific means such as access passwords. The effect of this in the sphere of international relations is most starkly evident in the police operation that eventually shut down Silk Road. A holding page displayed after the seizure of Silk Road’s website was emblazoned with the crests of a number of US and European law enforcement agencies, bordered by the flags of 13 countries speaking 11 languages between them. The internet has provided a place for shady activities, and the task of combating these has in turn taken on an international scope.
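
As a minimal illustration of what ‘specific software’ means in practice, the Python sketch below routes an ordinary web request through a locally running Tor client, which by default listens as a SOCKS proxy on port 9050 (the Tor Browser bundles its own on port 9150). It assumes the third-party requests library installed with SOCKS support (pip install requests[socks]) and a Tor client already running; it is a sketch of the routing mechanism only, not a guide to the services built on top of it.

    import requests

    # Route all traffic through the local Tor client's SOCKS proxy.
    # The 'socks5h' scheme (rather than 'socks5') sends DNS lookups via Tor too.
    proxies = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    # check.torproject.org reports whether a request arrived via the Tor network.
    response = requests.get("https://check.torproject.org/", proxies=proxies, timeout=30)
    print(response.status_code)  # 200 indicates the page was fetched via the proxy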

Digital communications

At least as old as the idea of commerce is the idea of communicating with other humans across geographical divides. A primary means for doing so is through the written word. The most direct of these means is the letter, because it is sent from one individual to another individual carrying a specific message. As such, letters represent a key connection between humans. In the digital age, email and instant messaging have supplanted letters as the primary means of written communication, with hundreds of billions of digital messages sent from one person to another each day. The process of mailing a letter resembles the protracted commercial chain described in the section above. There is a sender who authors the letter and drops it in a post box. A postal worker then collects the letter and brings it to a sorting centre where a machine (though previously a human) directs the letter towards the right address. The letter is then transported by land, sea and/or air to a distribution centre where more sorting happens. Finally, a delivery person deposits it at the stipulated address, where the receiver accepts and reads the letter. Through a convoluted series of middlemen, the sender and receiver can thereby communicate with each other. With email and instant messaging, the human middlemen are completely removed from the process. The only step between sender and receiver is some technological wrangling that ensures the email or message arrives intact at the correct destination. In this way, sender and receiver can communicate directly and, importantly, almost instantaneously. A written letter can take anything from a day to a week, or more, to arrive at its destination. By comparison, an email usually takes a matter of seconds, regardless of how much of the planet it has to traverse. Even emails to the International Space Station take only a few seconds to transmit.
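
The brevity of the digital chain is visible in how little is needed to send an email programmatically. The Python sketch below uses the standard library’s smtplib; the addresses are placeholders, and it assumes, purely for illustration, a mail server listening on the local machine.

    import smtplib
    from email.message import EmailMessage

    # Compose the 'letter'.
    msg = EmailMessage()
    msg["From"] = "sender@example.com"     # placeholder addresses
    msg["To"] = "receiver@example.com"
    msg["Subject"] = "Greetings"
    msg.set_content("A letter with no post box, sorting centre or delivery person.")

    # Hand the message to a mail server, which relays it towards the recipient
    # in seconds - the only 'middlemen' are pieces of software.
    with smtplib.SMTP("localhost", 25) as smtp:  # assumes a server on this machine
        smtp.send_message(msg)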

You may take the speed at which you can message others for granted. But it is worth putting this in perspective with a historical comparison. According to legend, when Martin Luther set in motion the Protestant Reformation in 1517, he did so by nailing a polemical document to a church door in Wittenberg. This act began a process of violent upheaval that culminated in 1648 with the end of the cataclysmic Thirty Years’ War. The full effects of Luther’s public posting thus took some 130 years to come to fruition. The modern equivalent of his document would be a social media post. Given that digital communications travel with almost no delay, messages can be quickly delivered to millions of people to spread ideas and organise movements. Perhaps the best example of this is the Arab Spring, also called the Twitter Revolution due to the widespread use of social media to propagate ideas and organise a response. While the upheaval Luther set in motion took over a hundred years to materialise and play out, the revolution in Tunisia took just a few weeks. It is clear that digital communications have played some role in speeding up such events.

Reach

One important theory, only made possible by the digitisation of commerce and communications, is that of the ‘long tail’ (Anderson 2004). In a nutshell, the theory suggests that because products can be distributed and sold more cheaply, vendors can now stock a broader range of goods, each of which appeals to a small customer base (the tail), rather than focus on a narrow range of goods that appeal to a large number of customers (the head). For example, the virtual shelves of Amazon contain almost every type of product conceivable, whereas the physical shelves of a retail outlet are limited by the space available. Through the internet, niche products can appear alongside mainstream ones. With a literally global audience reachable through the internet, even the most obscure ideas (about, for instance, political ideology, religious convictions, business ventures) can find someone to appeal to. There are both benefits and drawbacks to this phenomenon.
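
The arithmetic of the long tail can be sketched in Python with invented figures: a handful of ‘head’ products that each sell heavily against thousands of ‘tail’ products that each sell a little. The point of the sketch is that the tail’s combined sales can rival or exceed the head’s.

    # Invented figures for illustration only.
    head = [("bestseller", 10_000)] * 10   # 10 hit products, 10,000 sales each
    tail = [("niche item", 50)] * 5_000    # 5,000 niche products, 50 sales each

    head_sales = sum(sales for _, sales in head)
    tail_sales = sum(sales for _, sales in tail)

    print(f"Head: {head_sales:,} sales from {len(head)} products")  # 100,000 sales
    print(f"Tail: {tail_sales:,} sales from {len(tail)} products")  # 250,000 sales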

On the one hand, people living under repressive regimes may be limited in their ability to communicate both within and outside their country. With digital technologies this repression can be sidestepped, allowing the expression of grievances and bringing to light issues that might otherwise be shrouded from view. The Arab Spring, as discussed above, is a case in point. In Egypt, the Mubarak regime even switched off the country’s internet services in acknowledgement of the role they were playing in the organisation of protests. The fact that protesters were nevertheless able to bring down Mubarak’s regime shows how the internet can empower people to overcome repression. This is also true in cases where communication is not actively repressed, but simply ignored or lost. With a ‘long tail’ to communicate to, people have a greater chance of making themselves heard. With greater reach of communications, the presentation of a novel idea is more likely to garner support, dissent, or comments than an idea presented to a smaller audience. Consider, for example, ‘crowdfunding’ platforms, where budding entrepreneurs can present their ideas to the public and appeal for funding to make them a reality. The idea does not have to be a physical product; it can also be the manifestation of a political or religious conviction. The internet makes it possible for ideas to gain traction that in the past might have fallen by the wayside. In this way, digital communications can increase shared knowledge and foster conversations that lead to the reformulation and improvement of ideas.

On the other hand, the long tail also gives a voice to unsavoury constituents of society. Just as the repressed can make themselves heard, extremists may find a foothold in the murky depths of the internet where bad ideas can be picked up and amplified. Perhaps the most notorious beneficiary of this has been the Islamic State group (also known as ISIS, ISIL and Daesh). Much has been made of their mastery of the internet to radicalise and recruit new members and spread propaganda – particularly through social media. There is no shortage of people, including Muslims, who renounce the group and actively seek to combat its message, but in the online world the majority view does not necessarily prevent other views from being expressed. Previously, a bad idea might have faded into obscurity for lack of an audience, but with a long tail even the most heinous ideas can find adherents.

Affordability

More people than ever before are partaking in commerce and communications thanks to digitisation lowering the barrier to entry. The traditional lengthy logistics chain for moving products adds cost: at each step along the chain the handling party requires a fee, which is passed on to the customer through a higher price. By shortening the logistics chain and cutting out middlemen, manufacturers make cost savings. Although the cost of producing a product might stay the same, savings can be made when it comes to distributing, selling and marketing the product. These savings can be passed on to customers in the form of a lower price, with the manufacturer maintaining the same profit margin. This lower price can potentially attract customers who were previously barred by high prices. The digitisation of commerce can thus open up markets by making products more affordable.
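
The compounding effect of middlemen’s fees can be sketched with invented margins. In the traditional chain each handling party applies its own markup to the price it paid; selling direct replaces several compounding markups with a single one.

    factory_cost = 200.0                # invented cost of manufacturing a television
    chain_markups = [0.15, 0.20, 0.30]  # invented distributor, wholesaler and retailer margins

    price = factory_cost
    for markup in chain_markups:
        price *= 1 + markup             # each middleman passes its fee on to the customer

    direct_price = factory_cost * 1.40  # manufacturer sells direct with a single 40% margin

    print(f"Traditional chain price: {price:.2f}")          # 358.80
    print(f"Direct-to-customer price: {direct_price:.2f}")  # 280.00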

The digital communications chain has been shortened in similar ways, with the same sort of cost benefits. However, the monetary cost of communications was never really high enough to pose a barrier to entry. The benefits of the digitisation of communications are not primarily price, but rather the lowering of the skills required to partake. Communicating via letters as outlined above requires the ability to both read and write. Until the spread of mass education in the twentieth century, these skills were limited to a relatively small subset of humanity. Now, since literacy levels are high in most developed states, digital communications have the power to make a difference for people with learning difficulties or in areas where education is limited. Courtesy of video messaging applications, real-time long-distance correspondence can be achieved via face-to-face communication. This bypasses any need to be able to read and write, requiring only the interpersonal communication skills every person has. It does of course require a device, such as a laptop or smartphone, on which to run the application. However, devices are becoming cheaper, and a single device can be shared and passed around. Shared ownership not only spreads the initial cost of purchasing the device, but is in itself a means for people to connect with one another. The ability of a family to gather around a laptop and video call with relatives on the other side of the world is a powerful way to maintain relationships otherwise challenged by distance and time.

Those previously separated by geographical distance and/or access to means of communication are now able to reconnect with lost acquaintances and even forge relationships with strangers on the other side of the globe. In this way, digital communications have the potential to increase humanity’s homogeneity. If everyone is connected, divisions between locations, races, nationalities, classes and wealth can be blurred. Rather than emphasise the things that have traditionally separated humanity, it is possible to concentrate on those things that unite us: the shared values that make us human.

Reliance

Digital devices are inseparable from the new logistics and communications that increasingly underpin human activity. Devices come in a wide array of shapes and sizes and have an equally wide range of functions. Probably the most ubiquitous and familiar devices are personal computers and smartphones. For many people it is impossible to imagine life without the instant connectivity and wealth of information provided by the internet and accessed through such devices. Devices have thus become an integral, perhaps indispensable, part of human life. As these devices permeate society, it is conceivable that humans cede some of their humanity to the digital realm. Using the internet for many of our basic human functions, both individual and societal, effectively makes the internet part of what it means to be human. In 1945 Vannevar Bush introduced his idea of a ‘memex’, which he described as

a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory. (Bush 1945)

Eerily prescient, Bush’s memex is an accurate description of the modern smartphone. The implication of this is that, thanks to such a device, the limited human mind can be freed up to perform the uniquely human capacities to imagine, associate and experiment.

Of course, such reliance on technology can have negative consequences. If the technology were to disappear or be denied to us, we could potentially lose some of our humanity. The example of Egypt’s internet services being cut off demonstrates the large-scale vulnerability of the technology, as do the cyber-attacks on Estonia in 2007 that deprived citizens of access to essential services such as banking. Consider Facebook, a social networking platform with over one billion users. Facebook, and its subsidiary Instagram, are used today as photograph repositories. Hundreds of millions of people upload photos as they are taken, effectively replacing the physical photo albums that older generations typically kept in their homes. Facebook thereby becomes an archive of visual memories. If the internet malfunctioned, Facebook, and the memories it contains, would be inaccessible. Memories, both individual and societal, are a key constituent of what makes us human: losing them would amount to losing some of our humanity. The example of memories shows how over-reliance on technology for important human functions may be unwise.

Control

The issue of internet control has recently come to the fore, chiefly due to revelations in documents leaked by the whistle-blower Edward Snowden in 2013. The documents showed the extent of the United States’ intelligence capabilities in cyberspace, many of which were predicated on the fact that most internet traffic originates from, terminates in, or transits through servers based within the United States. This of course gives the United States a huge advantage, as it enjoys unprecedented access to the flow of information on the internet. Recognising this disparity, and also reacting to alleged infringements of their own citizens’ rights, several countries have called strongly for the nationalisation of the internet. By this they mean moving to a model in which countries ensure data stays within their own borders. Where this is not possible, data should be handled in accordance with the law of its origin state, backed up by an international governance framework. Though this could redress the imbalance of power, it also has the potential to Balkanise the internet. Many of the benefits of the internet rely on the technology being uniformly functioning and accessible across disparate geographical areas. A Balkanised internet would inevitably produce a range of operating standards that might well be difficult to integrate. China is an example of a country that does operate a national internet policy, although for different reasons from those expressed above. Through the ‘Great Firewall’, the Chinese government blocks access to sources of uncensored information such as foreign news outlets and prominent websites like Facebook, Google and Wikipedia. The full benefits of the internet are clearly not available to the bulk of Chinese users, showing how control of the technology can be a powerful tool for controlling a population.

Conclusion

The internet is a truly revolutionary technology which has empowered individuals to connect with other individuals, systems to connect with other systems, and individuals to connect with systems on scales previously unknown. Though issues such as those around reliance and control demonstrate that modern technology is still a work in progress, the key point to remember is that through participation in logistics and communications, digital or otherwise, each person has the potential to affect the process and progress of international relations. Interacting with other humans through the written and spoken word and through trade is what makes humanity flourish. The internet has made this possible for more people, in more locations, more of the time, more quickly. We are therefore connected not merely by shared food and air, but also by a shared capability to meaningfully shape both our own lives and those of others.

Chapter 11: Protecting People

The United Nations (UN) was established in 1945 with a charter that set out to ‘save succeeding generations from the scourge of war’ and ‘reaffirm faith in fundamental human rights’. Three years later, the Universal Declaration of Human Rights was adopted at the United Nations, calling for states to work together to ensure that everyone enjoys ‘freedom from fear’ and ‘freedom from want’. Added to the issues of global inequality and poverty addressed in the previous chapter, finding ways of protecting people from harm is a major contemporary debate. While the picture overall might be improving, too often the international community does too little, too late to protect people from atrocities, civil wars, and other human-made ills. In the twentieth century tens of millions of people were killed in wars between states, while an even higher number were killed by their own governments. Facts like these pose a major challenge for the way we think about world politics. Our contemporary international order is based upon a society of states that enjoy exclusive jurisdiction over particular pieces of territory and rights to non-interference and non-intervention that are enshrined in the United Nations charter. This system is in turn predicated on the assumption that states exist primarily to protect the security of their citizens. In other words, the security of the state is considered important, and worth protecting, because states provide security to individuals. But, as countless examples show, not every state protects the wellbeing of its population. From recent examples like Syria to examples from the past century, threats to individual security have tended to come more from one’s own state than from other states. Facts like this pose a major challenge to international peace and security and raise questions about whether there are circumstances in which the security of individuals should be privileged over the security of states.

Key positions

The debate about human protection hinges on the issue of whether a state’s right to be secure and free from external interference should be conditional on its fulfilment of certain responsibilities to its citizens, most obviously protection from mass violence. We might plot various responses to this question along two axes – the first relating to our conception of whether moral progress is possible in world politics (more optimistic or more pessimistic) and the other relating to which actors should be privileged (states or individuals). The first axis refers to the way we understand the potentiality and limits of world politics. Some approaches are predicated on an optimistic vision that dialogue between communities makes moral consensus and shared purposes possible (Linklater 1998). The alternative is a fatalistic or ‘tragic’ conception of world politics based on the view that the world is composed of culturally distinct units with different values that pursue their own, distinct goals with limited possibility for cooperation (Lebow 2003). This account is sceptical of progress, doubts that morality does (or should) play a role in world affairs, and predicts that efforts to spread moral values will prove costly and counter-productive. The second axis relates to what sort of actor should be privileged – states or individuals. It is common for theories of International Relations to privilege the state on the grounds that it is the principal actor in world affairs, the main source of order, and the bearer of international rights and responsibilities. An alternative perspective privileges individuals as the only irreducible actor. Individuals cannot be means to an end; they must be seen as ends in themselves. From these two axes, we derive four ethical positions.

Optimistic and state-centred: a rule-governed international society

This accepts that progress in international affairs is possible, but that in a world characterised by radical difference the basis for progress should be voluntary cooperation between states in a rule-governed international society of states. Perspectives housed in this quadrant hold that the common good is best served by privileging the rules of co-existence found in the UN charter. This focuses especially on the legal ban on the use of force and ensuring that the two exceptions to that ban are not abused (Articles 42 and 51). According to this view, allowing states a free hand to promote human protection in other states would create disorder by allowing wars to protect and impose one state’s values on others. Disorder would weaken the international system, undermine human development, and make cooperation between states more difficult. This view dovetails with the commonly held legal view that there is a general prohibition on interference except when authorised by the UN Security Council. However, this account is unnecessarily pessimistic about the capacity of states to reach consensus about shared moral principles. There is relatively little evidence to suggest that the incremental expansion of collective action into new areas of peace and security, such as human protection, has given rise to greater disorder. This account also overlooks the flexibility built into the Security Council to redefine its role in international peace and security to take account of changing conditions, should it decide to do so.

Tragic and state-centred: the realities of life in an international state of nature

This perspective espouses a communitarian view about the diversity of communities and the relativity of values, but rejects even basic claims about the capacity of states to agree meaningful rules of co-existence, let alone substantive rules. This account suggests that norms and rules are irrelevant as causes of behaviour when set against material factors such as economic gain, territory and the national interest. To paraphrase a prominent realist, Edward Hallett Carr, international interference for ‘protection’ would in fact be nothing other than the interests and preferences of the powerful masquerading as universal morality. This account counsels against humanitarian activism. It doubts the capacity of states to be altruistic and thus sees all state action as exercises in the self-interested use of power that undermines world order. Few, if any, states openly subscribe to this approach. Accepting that states tend to do only what they perceive to be in their interests does not get us very far analytically. To understand why states act in certain ways we need to understand variation in the way that states (even similar states) construct their interests and this requires a deeper understanding of the factors that guide national decision-making.

Optimistic and individual-centred: defending humanity and our common values

The third perspective is the one most positively disposed to advancing human protection. It is usually associated with liberalism and a broader cosmopolitan view that all humans belong to a single world community. It holds that states have positive duties to protect foreigners from tyranny as well as a right to do so since human rights are universal rights that ought to be defended everywhere. According to theorists in this tradition, states have agreed certain minimum standards of behaviour. As such, action across borders to support human protection is not about imposing the will of a few powerful states but about protecting and enforcing basic values and/or the collective will of international society. While this view is on strong ground regarding the theoretical right of the UN Security Council to mandate enforcement action, when it comes to a more generalised right of intervention the theory is contradicted by strong bodies of legal thought and state practice that counsel against it. Not surprisingly, therefore, liberal cosmopolitans tend to be divided on whether there is such a general right of intervention outside the boundaries of existing international law.

Tragic and individual-centred: the distinctiveness of humanitarian action

These accounts tend to privilege traditional forms of humanitarian assistance and exhibit deep scepticism about military intervention on the grounds that it tends to make situations worse and reinforces the militarist ideals that are among the chief underlying causes of humanitarian crises in the first place. Precisely because of this scepticism, however, these accounts help to widen our understanding of the tools that might be used to protect populations. In exposing some of the intrinsic limitations of forcible action to promote human protection, these approaches emphasise that interventions are selective, partial and never solely humanitarian. That said, critics question how suffering can be alleviated, let alone prevented, without taking a political stance, and so there are real limits to the physical protection that can be afforded by humanitarian action alone. This ‘individual-centred’ approach is vulnerable to many of the criticisms levelled against the ‘tragic’ conception. Notably, its prescriptions often fall well short of what is needed to protect vulnerable populations.

Emerging norms of human protection

Since the end of the Cold War, the practice of human protection has evolved through at least eight interconnected streams of norms, rules, practices and institutional developments. Each of these emerged to address the problem of civilian suffering, especially during war, and each will be addressed in turn.

International humanitarian law

International humanitarian law had its origins in the nineteenth century with the development of the US government’s ‘General Orders No. 100’ (better known as the Lieber Code), a set of military laws designed to limit the conduct of soldiers, and with the emergence of the Red Cross movement. After the Second World War, international humanitarian law was developed and codified in a series of international treaties. In 1948, the newly established UN General Assembly approved the Genocide Convention, which prohibited the crime of genocide and assigned all states a legal duty to prevent it and punish the perpetrators. The International Court of Justice (ICJ) was established as the judicial arm of the United Nations and is responsible for adjudicating on disputes between states and other legal matters. It judged that as a result of this convention, all states have a legal responsibility to do what they can, within existing law, to prevent genocide.

The laws of war were further codified in the four Geneva Conventions (1949), two additional protocols (1977), and in a range of protocols covering the use of Certain Conventional Weapons. Of particular importance was Common Article 3 of the 1949 Geneva Conventions, which committed parties to respect the human rights of all non-combatants; and the Convention on the Protection of Civilian Persons, which offered legal protection to non-combatants in occupied territories. The Geneva Protocols (1977) extended the legal protection afforded to non-combatants to situations of non-international armed conflict. They also insisted that armed attacks be strictly limited to military objectives and forbade attacks on non-combatants or their property. These principles provided the legal and moral foundation for subsequent campaigns for conventions banning weapons, such as landmines and cluster munitions, that were considered inherently indiscriminate. International humanitarian law has thus created a normative standard of civilian protection that not only prohibits attacks on non-combatants and restricts the use of certain weapons but also calls for the prevention of particular crimes, such as genocide, and the punishment of perpetrators.

Protection of civilians

The UN Security Council’s formal engagement with this theme dates back to 1998 when, at Canada’s request, it adopted a presidential statement calling for the Secretary-General to submit periodic reports on how the UN might improve the protection of civilians. Since then, it has held a series of open meetings on the protection of civilians, establishing it as one of its major thematic interests. In 1999, the Security Council unanimously adopted Resolution 1265 expressing its ‘willingness’ to consider ‘appropriate measures’ in response ‘to situations of armed conflict where civilians are being targeted or where humanitarian assistance to civilians is being deliberately obstructed’. In addition, the Security Council expressed its willingness to explore how peacekeeping mandates might be reframed to afford better protection to endangered civilians. In 2006, it adopted Resolution 1674, which built further on this progress by demanding that parties to armed conflict grant unfettered humanitarian access to civilians.

As it has developed its thematic interest in the protection of civilians, the Security Council has also developed and strengthened its practices of protection. In doing so it has broken new ground. In Resolution 1973, passed in 2011, the Security Council authorised the use of force for human protection purposes in Libya. This was the first time in the history of the Security Council that such an action had been authorised without the consent of the host state. Through this resolution, and the one that preceded it (Resolution 1970), the Security Council utilised the full range of the collective security powers granted to it by the UN Charter. Three years later, Resolution 2165 authorised the delivery of humanitarian assistance into Syria without the consent of the Syrian government – the first time the Council had done this. Hence, two very important precedents were established, built on a new understanding of the need to protect civilians.

Before the turn of this century, civilian protection was typically not considered a core part of peacekeeping. Starting in 1999 with the UN mission in Sierra Leone, the Security Council has invoked Chapter VII of the UN Charter with increasing regularity to authorise peacekeepers to use all means necessary to protect civilians. Chapter VII of the Charter gives the UN Security Council the authority to authorise whatever means it deems necessary, including the use of force, for the maintenance of international peace and security. By design, it was intended as a key deterrent to international aggression. Today, civilian protection and the authorisation of ‘all means necessary’ to that end are core aspects of UN peacekeeping and central to many of its new mandates. In the Democratic Republic of the Congo (DRC), the Security Council went even further by tasking a ‘Force Intervention Brigade’ to take the fight to non-state armed groups that were employing mass violence against civilians. Today, the bulk of the UN’s 120,000 peacekeepers are deployed with mandates to use all necessary means to protect civilians from harm.

Addressing specific vulnerabilities

Since the end of the Second World War, international society has periodically recognised groups that are exposed to particular vulnerabilities and has established mechanisms aimed at addressing or reducing those vulnerabilities. Of these, the best developed is the international refugee regime, which is governed by the 1951 Refugee Convention and subsequent 1967 Protocol. It is overseen by the UN High Commissioner for Refugees (UNHCR). This system grants people facing persecution the right to claim asylum and receive resettlement in third countries and mandates the UNHCR to ensure that refugees have access to protection and durable solutions to their displacement. During the 1990s, it became apparent that this system was unable to cope with a new displacement crisis – that of internal displacement. Internal displacement occurs when people are forced from their homes by mass violence and other ills but remain within their home country. Because displacement of this kind is largely a domestic issue, there was little appetite for an international convention governing the displaced. Instead, the UNHCR extended its mandate to cover the protection of all displaced persons and United Nations officials developed ‘guiding principles’ for their treatment.

Another longstanding facet of mass violence that gained political prominence only in the 1990s was sexual and gender-based violence. The use of rape as a weapon of war in various cases pushed the UN Security Council to establish the protection of women and girls as one of the principal elements of its ‘Women, Peace and Security’ agenda adopted in the year 2000 via Resolution 1325. Since then, the United Nations has created the post of Special Representative of the Secretary-General to give permanent focus to the issue, and has instituted a series of annual reports that identify where these crimes are committed and advocate for steps to be taken in response. The United Nations has also begun to ‘mainstream’ the protection of women and girls through, for example, the deployment of women’s protection advisers. Beyond the United Nations, the British government launched its Preventing Sexual Violence in Conflict Initiative which, amongst other things, has helped persuade two-thirds of the world’s states to support a ‘Declaration of Commitment to End Sexual Violence in Conflict’. These developments have been paralleled by a range of initiatives focused on protecting children in armed conflict. Also led by the Security Council, the United Nations has appointed a Special Representative for the protection of children, who reports on the unique protection challenges facing children and related issues such as the recruitment of child soldiers. In 2014, the UN’s ambassador for the promotion of education, former UK prime minister Gordon Brown, launched a global initiative to establish a contingency fund to support the provision of education to children during humanitarian crises, be they caused by natural disasters or mass violence.

Human rights

While human rights as a whole are subject to a great deal of questioning, their higher profile has undoubtedly made an important contribution to human protection. Two aspects in particular stand out, but they are illustrative rather than definitive since the overlap is extensive and complex. First, emerging principles and practices of peer-to-peer review, where states evaluate and comment on each other’s performance (mainly through the compulsory review process of the UN’s Human Rights Council), create expectations about the type of steps that states ought to take in order to protect their populations from various forms of abuse, including mass violence. While the most intransigent states remain largely unmoved, there is increasing evidence that peer review activities are influencing many states and pushing them towards greater compliance with their human rights obligations due to the pressure that being ‘watched’ places on them. Second, over the past two decades, international society has made increasing use of permanent and ad hoc arrangements for human rights monitoring and reporting in its decision-making on mass violence. Through a variety of different mechanisms, such as independent commissions and inquiries, special rapporteurs and fact-finding missions, international society is increasingly utilising human rights mechanisms to monitor and prevent mass violence. Most obviously, this reporting helps support decision-making on mass violence by furnishing key institutions with reliable information. It also encourages states to respect human rights by raising international awareness of domestic human rights practices.

International criminal justice

The idea that some crimes are so serious that the prosecution of perpetrators should be universal has advanced significantly in the past two decades through the activities of the International Criminal Court and a series of special tribunals. These institutions have proliferated since the mid-1990s and contribute to individual perpetrators being held accountable for their actions. Proponents argue that by ending impunity such institutions help deter would-be perpetrators and also give some legal protection to the victims. The first tentative steps were taken in the mid-1990s when the Security Council established tribunals to prosecute the perpetrators of grave crimes in Bosnia and Rwanda. The Rome Statute establishing the International Criminal Court in 1998 held that the Court’s jurisdiction could be invoked when a state party proved unwilling or unable to investigate evidence pointing to the commission of widespread and systematic war crimes, crimes against humanity and genocide. The Court’s prosecutor can initiate proceedings where he or she is able to persuade a panel of judges that a case falls under the Court’s jurisdiction, where a complaint is made by a signatory state, or when a case is referred to the prosecutor by the Security Council. To date, the Court has indicted 39 individuals and counts 124 states as members – though importantly the United States, Russia and China have yet to join. While it is important to state that developments like the International Criminal Court are still embryonic, the evidence suggests that transitional justice measures make reoccurrence less likely and improve general human rights within states. The Court also has a deterrent effect that spills over into other countries, including those that are not (yet) members of the International Criminal Court.

Humanitarian action

The notion that civilians ought to receive humanitarian assistance in wartime dates back to the nineteenth century and was integral to the development of the humanitarian idea of providing lifesaving assistance to whoever needed it. Those rights and expectations were incorporated into international humanitarian law but their applicability gradually expanded during the 1990s. The UN Security Council began authorising peacekeeping missions to support the delivery of humanitarian aid and, in the cases of Somalia and Bosnia, authorised the use of force to achieve this end. Since then, the Security Council has regularly authorised force for these purposes. Moreover, in successive resolutions on the protection of civilians and in substantive resolutions on crises, the Security Council has demanded that parties to armed conflict grant unfettered access to humanitarian agencies.

Regional initiatives

The foundations for Europe’s engagement with civilian protection were laid in the 1970s with the Helsinki Accords. Over time, these provided the basis for a Conference on Security and Cooperation in Europe mechanism that by the 1990s incorporated specific references to protection issues, including the protection of children and protection against torture. When this was transformed into the Organisation for Security and Cooperation in Europe in 1995, it was given additional responsibilities and capacities to protect human rights, including the post of High Commissioner for National Minorities.

As part of its common foreign and security policy the European Union also started to develop a civilian protection role, exemplified by the French-led multinational force in the Democratic Republic of the Congo in 2003 and a range of other operations. The African Union has established a comprehensive regional system for crisis management and response that includes a specific focus on the protection of civilians from mass violence. Article 4(h) of the Union’s Constitutive Act enshrines its right to intervene in the affairs of its member states in issues relating to genocide and mass atrocities. Although this article has not been formally acted upon, owing to African leaders’ continuing commitment to sovereignty, the African Union’s peacekeeping operation in Darfur included a civilian protection mandate and its missions in Mali, the Central African Republic and Somalia have also supported civilian protection. In Latin America, states have established a comprehensive regional human rights mechanism. Even the Southeast Asian region, which is formally committed to the principle of non-interference in the domestic affairs of states, has begun to develop its own mechanisms for promoting human rights and protection through the ASEAN Intergovernmental Commission on Human Rights. These mechanisms might not understand or pursue ‘rights’ in precisely the same fashion, but they do rest on a shared understanding of atrocity crimes as grave human wrongs and a commitment to the prevention of these crimes.

Responsibility to Protect

At the 2005 UN World Summit, world leaders unanimously adopted the Responsibility to Protect (R2P) in paragraphs 138–140 of the World Summit Outcome Document. This commitment was subsequently reaffirmed by both the UN Security Council and the UN General Assembly, which also committed to ongoing consideration of its implementation. The Responsibility to Protect rests on three pillars. The first is the responsibility of each state to use appropriate and necessary means to protect its own populations from genocide, war crimes, ethnic cleansing and crimes against humanity (hereafter referred to collectively as ‘atrocity crimes’). The second pillar refers to the commitment of the international community to encourage and help states exercise this responsibility. The third pillar refers to the international responsibility to respond through the United Nations in a timely and decisive manner when national authorities are manifestly failing to protect their populations from the four atrocity crimes. The principle was initially considered controversial, as it countenanced the potential use of force and other transgressions of sovereignty. Over time, however, international consensus on the principle has widened and deepened.

More telling still, the Responsibility to Protect has become part of the working language that frames international engagement with political crises, and the Security Council has referred to it in more than forty resolutions. The Council has reminded governments of their protection responsibilities (e.g. Resolution 2014 on Yemen); demanded active steps to protect civilians (e.g. Resolution 2139 on Syria); tasked peacekeepers with assisting governments to protect their own populations (e.g. Resolution 2085 on Mali); and demanded that perpetrators of mass violence be held legally accountable (e.g. Resolution 2211 on the Democratic Republic of the Congo). The Security Council has also connected the Responsibility to Protect with its wider efforts on preventive diplomacy and conflict prevention through such measures as the control of small arms and light weapons, the prevention of genocide, counter-terrorism and international policing. With this changing focus, debate amongst states has turned less on the principle of the Responsibility to Protect and more on its implementation.

Problems and challenges

The world is more likely to respond to human protection crises today than it once was but, as Syria shows, we are nowhere close to solving the problem of human insecurity. Even when the normative and political context allows for it, the effective protection of populations from atrocity crimes confronts significant practical challenges. It is important to be upfront about what these challenges are.

The first point is to recognise that there are significant limits to what outsiders can do to protect people in foreign countries. Many internal conflicts are not readily susceptible to outside mediation; they are complex, fraught with danger and defy easy resolution. Concerted international action can sometimes protect populations or prevent mass atrocities, but the primary determinants of violence or peace typically rest within the country itself and with the disposition of its leaders. From the United Nations’ perspective, this problem is compounded by the fact that it tends to be confronted only by the world’s most difficult cases. Situations usually reach the UN Security Council only when others have tried, and failed, to resolve them. As a rule of thumb, where conflicts have an easy remedy, solutions tend to be found at the local, national or regional level. The world body tends to assume the lead only in those crises for which others have found no solution. In such circumstances, a modest success rate may partly reflect the sheer difficulty of the cases presented to the United Nations system.

A second challenge is that human protection operates in a world of finite global capacity and competes with other cherished norms and values for attention and resources. This problem of limited resources is compounded by a climate of financial austerity arising out of the 2008 global financial crisis. Many major donors have cut national budgets and imposed austerity measures on their own populations, putting pressure on support for the protection of people in other countries. The harsh reality, therefore, is that in the near term the cause of human protection will not be able to call upon significant new resources.

A third challenge is to recognise that the pursuit of human protection is politically sensitive. Human protection is both enabled and constrained by politics and can generate acute controversies and disputes by, for instance, requiring that some states be identified as being at risk of a crisis and demanding actions that some governments might object to. Even long-term preventive measures often entail a significant degree of intrusion into the domestic affairs of states, which is not always welcome. States jealously guard their sovereignty and are sensitive to perceived incursions on their rights or criticisms of their conduct or domestic conditions. As such, they rarely invite assistance or look kindly upon external efforts to prevent atrocities within their jurisdiction. It is important to remember that the United Nations’ activities are overseen by political (as opposed to judicial) organs composed of sovereignty-wielding member states. One facet of the problem is that states sometimes judge that their own interests are best served by not preventing atrocity crimes. This can be seen across a wide range of cases, but perhaps none as striking as Syria, where from 2011 the Security Council failed to act decisively as hundreds of thousands were killed and millions displaced. Historically, the United Nations has struggled to assert its primacy in situations where the interests of powerful states, especially permanent members of the Security Council, pull in competing directions.

Another facet of the problem of ‘political will’ is that states are self-interested actors that prioritise the wellbeing of their own citizens. As such, they are generally reluctant to commit extensive resources to prevent atrocity crimes in other countries. The issue here is not whether governments support atrocity prevention as a goal, but the depth of their support relative to their other goals – including cherished domestic objectives such as healthcare and social welfare. Political and diplomatic capital is also a finite resource. Sometimes states may judge that trade-offs have to be made to achieve the greatest good or least harm overall. For example, at the outset of the crisis in Darfur in 2003, several states decided not to press the government of Sudan too hard, fearing that this might jeopardise negotiations to end the government’s war with rebels in the south – who eventually seceded in 2011 to form the new state of South Sudan.

Conclusion

Whichever position one holds on the virtue and practicality of international action to protect people from imminent peril, it is indisputable that the past few decades have seen a proliferation of mechanisms, institutions and practices aimed at improving protection. Through at least eight distinct but connected streams of practice, we have seen the codification of norms of acceptable behaviour, the establishment of responsibilities for third-party states and international institutions, and the emergence of a range of practices aimed at protecting vulnerable populations. As a result, mass violence today is typically met with complex – if not always entirely effective – responses from a range of different actors, and international practices of protection have contributed to an overall decline in both the incidence and lethality of atrocity crimes. The most important point is that this all remains unfinished business. Not only are there political issues left to address; we have barely begun to scratch the surface of the practical issues connected to implementation. Questions of which strategies offer most protection in which circumstances will need to be answered if the promise of protecting people globally is to be turned into a lasting reality.

Chapter 10: Global Poverty and Wealth

Poverty and wealth are often found side by side. The two are interrelated dimensions of our world: each affects the other, and together they influence both the willingness and capacity of states to ensure a stable global system. Traditional approaches to IR are premised on the notion of state sovereignty. But sovereignty as an absolute concept that reinforces separation between states has been tempered through the many processes of globalisation, including economic agreements and the establishment of international organisations, as well as by the emergence of human rights thinking as captured in the Universal Declaration of Human Rights. The premise of human rights thinking is that, in the context of a common set of universal rights based on the individual, the sovereignty of the state can be challenged if a government does not respect or maintain those rights. On this view, sovereignty entails not only rights but also responsibilities. In relation to poverty, globalisation raises the question of what obligations the wealthy owe to the poor and vulnerable. One of today’s most pressing international problems is what to do about poverty and the approximately one billion people living in such a condition. As we start our scan of key global issues, it is appropriate to open this second section of the book by addressing an issue of this magnitude.

Poverty matters as a subject for reflection in IR on many levels, one of which is a prominent set of ideas around global justice that considers what states owe each other in the process of international cooperation. After all, it can be argued that those with the power and ability to assist have a moral and ethical obligation to try to solve problems like poverty. This stems from what Peter Singer (1972) calls the ‘rescue case’: there is an obligation to assist a child drowning in a shallow pond if the child can be saved with minimal effort or inconvenience. In the context of global poverty, the logic flows that developed states have an obligation to help poor states because they can, with minimal effort. However, the obligation of developed states to help alleviate poverty is not just relevant because they can assist; it is also because they are very often implicated in creating the conditions for its existence. For example, Thomas Pogge (2008, 2010) argues that poverty exists due to a coercive global order – which includes international governmental organisations such as the World Bank and the International Monetary Fund – that disadvantages the poor and reinforces a context of poverty. This means that developed states and multilateral institutions contribute to the persistence of global poverty both through the way they have structured the international system and through how they operate in it. These perspectives indicate that a global problem like poverty requires a global solution, one that developed countries have both a moral and a strategic responsibility to address.

Defining poverty

Defining poverty begins with a consideration of conditions that prevent regions, states and peoples from having access to wealth. Though there are many elements to this, there are four key structural conditions to consider.

1. History of exploitation

Many of today’s poorest nations were previously exploited through colonialism and/or slavery. These actions have had a lasting impact by entrenching inequalities between socio-ethnic groups within states. A pertinent example is South Africa, which, under British and Dutch rule, restricted the rights of indigenous African groups in the areas of education, land ownership and access to capital. At the same time, wealth was concentrated in the hands of the white colonising minority. Such practices were eventually enshrined in the apartheid system of racial segregation. Yet even since apartheid’s dismantling in 1994, poverty amongst the indigenous population has remained disproportionately high in comparison to white groups because capital and land continue to be concentrated in the hands of a select few. Of course, some former colonies have emerged from their exploitation to become some of the world’s leading economies – consider the US and Australia. Yet even in these ‘Western’ societies there remains a legacy of colonialism that often affects indigenous peoples disproportionately. More broadly, as decolonisation unfolded in the second half of the twentieth century, many new nations, particularly in sub-Saharan Africa, were left with inadequate or weak political structures that soon gave way to other types of exploitation via dictatorship or corruption. In these cases, the bulk of the population experienced exploitation. In some states, these problems still persist.

2. War and political instability

When thinking of the fundamental conditions for economic development to take place in a state, security, safety and stability often come to mind. This is because peaceful conditions permit a government to focus on developing natural resources, human capacity and industrial capabilities. War and political instability act as significant distractions, as efforts are directed at combating violence or insecurity. For example, think of the conflict in Syria that began in 2011. It has led to a mass flow of millions of refugees seeking to escape the fighting, leaving behind a war-torn state that lacks the human and economic resources to govern itself effectively. It is a pattern that has been seen before – for instance, in the 1990s in Somalia, where instability still persists. The outlook for Syria in the years to come could well be even worse. Something similar can be seen in the developed world, though to a different degree. Consider the United States: it spent upwards of $3 trillion on the invasions and occupations of Iraq and Afghanistan as part of its ‘Global War on Terror’ while, simultaneously, relative poverty and inequality increased within its own society, in part because the government prioritised public spending on warfare. It is no surprise, then, that when surveys of citizens’ quality of life are undertaken, stable nations that do not typically engage in warfare – such as Switzerland and Denmark – are often at the top of the list.

3. Structural economic conditions

The way in which the international economic order is structured can either reinforce or ease poverty. Institutions like the World Bank and the World Trade Organization are dominated by wealthy nations, and this has placed them under scrutiny due to embedded practices that often put developing countries at a disadvantage. For example, before the World Bank issues a loan to a low-income nation, certain conditions must be met. These are known as conditionalities. They can include policy changes such as the privatisation of public services – for instance, the provision of water, sanitation and electricity. Imposing such conditions, or ‘structural adjustments’ as the World Bank calls them, has frequently been shown to cause more harm than good.

4. Inequality

Inequality is an important contributor to poverty as it can reinforce divisions between the so-called ‘haves’ and ‘have-nots’. In a relative sense, it can result in certain elements of a population lacking the tools and resources needed to counter the challenges they face. In an absolute sense it can render a whole state unable to rescue its citizens from dire circumstances because it lacks the financial resources. For example, in the United States approximately 16 million children live in poverty, despite the fact that it is one of the richest countries in the world. Poverty of this kind is measured in relative terms – by looking at how much income a family has compared with the cost of living in that society. It is not the same as the absolute poverty a child living on less than $2 a day would experience in the Democratic Republic of the Congo, one of the world’s poorest nations. Yet it is still poverty when viewed through the lens of inequality. The nature of the problem is thus extensive, since it exists at both the domestic level (inequalities within states) and the international level (inequalities between states). Although there is a vibrant international charity system and a range of international assistance programmes, inequality remains a key structural condition associated with poverty.
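
To make the two measures concrete, here is a minimal sketch in Python. All figures are invented: the $2-a-day absolute line echoes the discussion above, while the half-of-median relative line is one common convention among several.

```python
from statistics import median

# Invented daily incomes (US dollars) for a small, hypothetical population.
daily_incomes = [1.50, 1.80, 3.00, 7.50, 12.00, 25.00, 60.00, 140.00]

# Absolute poverty: living on less than $2 a day (the threshold used above).
ABSOLUTE_LINE = 2.00

# Relative poverty: one common convention is half the median income.
relative_line = 0.5 * median(daily_incomes)

absolute_rate = sum(x < ABSOLUTE_LINE for x in daily_incomes) / len(daily_incomes)
relative_rate = sum(x < relative_line for x in daily_incomes) / len(daily_incomes)

print(f"Absolute poverty rate: {absolute_rate:.0%}")
print(f"Relative line ${relative_line:.2f}/day -> relative poverty rate: {relative_rate:.0%}")
```

In this toy population the two lines capture different households, which is why a poor child in the United States and one in the Democratic Republic of the Congo can both be ‘in poverty’ while experiencing very different material conditions.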

Measuring and reducing poverty

Since the end of the Second World War, states have come together to find ways to reduce poverty by promoting economic growth. As discussed earlier in the chapter, concepts of global justice underpin international poverty-reduction strategies, giving focus to approaches that seek to enhance the rights of the marginalised. The extent to which these efforts have been successful is highly debatable – but the intent has certainly been there. States have attempted to address the challenges of poverty at a global level in various ways. We discuss four approaches below.

1. Official development assistance (aid)

Typically, aid comes from developed states and is either channelled bilaterally (directly from one state to another) or multilaterally through international organisations like the United Nations. It is one way in which wealthy nations have attempted to meet their moral obligation to assist poorer nations. Indeed, developed countries have spent a great deal on official development assistance over the years: in 2014 alone, states spent over $135bn on aid, according to a report from the Organisation for Economic Co-operation and Development (OECD). However, the success of such efforts has been inconsistent, and in some cases poverty has actually worsened. The reasons for this are complex, but some examples may be helpful.

First, inappropriate types of aid can be sent. Instead of sending money that a developing country can use to address poverty, developed states sometimes provide goods that may or may not be helpful. For example, a number of oxygen devices donated to a hospital in The Gambia proved incompatible with the local electricity voltage. This rendered the devices unusable, highlighting how aid needs to be properly thought through. Second, corruption in some countries has seen aid syphoned off into the offshore bank accounts of the political elite. For example, the New York Times claimed that over $1 billion in foreign aid intended to help Bosnia rebuild itself after years of destructive war was stolen by Bosnian officials for personal gain (Hedges 1999).

Aid has also been used for the political purposes of the providing state. During the Cold War, for example, the United States and the Soviet Union used aid to prop up states that were sympathetic to their own political cause. In many of those places this did little to address poverty; rather, it helped fund regional wars that led to further instability and poverty. The 1975–2002 civil war in Angola, for instance, saw the Soviet Union and the United States provide aid in the form of military assistance to opposing forces. Aid has also come from developed countries or international institutions with specific conditions for use (‘conditionalities’) that have only served to make things worse. As already mentioned, such aid requires the receiving nation to restructure its economy in ways that may not benefit the most vulnerable. For example, during the structural adjustment programmes of the 1980s in Latin America, income per capita fell in 18 countries; during similar programmes in sub-Saharan Africa, income per capita fell in 26 countries over the same period (Stewart 1991).

2. Trade and investment

The trade in goods and services, together with foreign direct investment by private corporations, can play an important role in poverty reduction. One of the ideas behind free trade and reducing barriers to investment between countries is to give states in the international system an opportunity to grow economically. International trade in goods and services has risen significantly since 1945, and cross-border investment – so-called foreign direct investment – has been a major source of the resulting economic growth. But these global activities frequently hide an inconvenient reality: developing countries are often involved only in a minor way in global trade and investment. The reasons range from inadequate infrastructure, such as roads, rail and ports, to limited access to financial capital. In comparison to developed nations, many developing countries also have a higher proportion of lower-skilled or undereducated workers in their workforce. As a result, investment that creates high-skilled and high-income employment is more often found in developed countries, while corporate investment in developing nations typically targets a low-skilled and low-wage workforce. This reality is difficult to overcome. Although nations such as China and India are investing heavily in an attempt to level the playing field, they are more fortunate than others due to their comparative wealth and historically high levels of economic growth. Despite some notable exceptions, the general picture is that trade and investment have not assisted poverty reduction to any significant extent.

3. Money lending

A third poverty-reducing strategy is lending developing countries money, or capital, so that they can invest in areas that will help them develop economically. Money lending is different from aid in that loans need to be paid back, with interest. Loans can be provided for key infrastructure projects like bridges, roads, electricity lines and power plants. These can act as catalysts for economic development, but they require significant access to capital. The importance of access to capital led to the establishment of the World Bank in 1944, with a mission to lend developing countries money at below-market interest rates and to provide expert advice on sound economic policies. On paper, the idea is a good one. However, the practices of the World Bank are not without controversy. As we explored earlier in the chapter, the conditions attached to its loans have drawn criticism. Although the most censured of these policies have been abandoned, damage has been done. In addition, the provision of interest-bearing loans to developing countries has created a huge problem of indebtedness. Many developing countries cannot afford to invest in important domestic programmes such as education and healthcare because of the burden of their debt repayments. This has sparked calls to cancel the debt of developing countries and allow them a fresh start. To date, although some debt has been cancelled, the larger challenges caused by the scale of outstanding loans and the terms on which they were imposed remain.
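
To see why lending terms matter, consider a minimal sketch using the standard annuity (amortisation) formula. The loan size, interest rates and repayment term below are hypothetical, chosen only to illustrate how the interest rate shapes the annual repayment burden.

```python
def annual_repayment(principal: float, rate: float, years: int) -> float:
    """Fixed yearly payment that retires an amortised loan over the term."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

# Hypothetical $500m infrastructure loan repaid over 30 years.
loan = 500_000_000

concessional = annual_repayment(loan, 0.02, 30)  # below-market rate
market = annual_repayment(loan, 0.08, 30)        # market rate

print(f"Concessional (2%): ${concessional / 1e6:.1f}m per year")
print(f"Market (8%):       ${market / 1e6:.1f}m per year")
# The gap between the two figures is revenue unavailable for education
# or healthcare - the debt-repayment burden described above.
```

In this illustration, a few percentage points of interest roughly double the yearly repayment, which is the arithmetic behind calls for concessional lending and debt cancellation.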

4. United Nations’ goals

In response to the many failings noted above, a new approach emerged in 2000 when the United Nations and its member states moved to eradicate extreme poverty by 2015. The United Nations Millennium Development Goals (MDGs) set out eight areas of focus for states:

  1. Eradicate extreme poverty and hunger
  2. Achieve universal primary education
  3. Promote gender equality and empower women
  4. Reduce child mortality
  5. Improve maternal health
  6. Combat HIV/AIDS, malaria and other diseases
  7. Ensure environmental sustainability
  8. Develop a global partnership for development.

A cross-section of approaches was employed to achieve these goals, including harnessing elements of the three strategies outlined above. The key element, however, was a coordinated approach to a set of agreed targets. Even so, the initiative proved a mixed bag in terms of results. Some goals related to education and child mortality saw real – if uneven – progress, while rates of hunger and malnutrition actually worsened in some cases. Exacerbating this further, the aftermath of the 2008 financial crisis reduced the money (and jobs) available to many governments. Anthony Lake, Executive Director of the United Nations Children’s Fund (UNICEF), accounted for this mixed picture of success and failure as follows:

In setting broad global goals the MDGs inadvertently encouraged nations to measure progress through national averages. In the rush to make that progress, many focused on the easiest-to-reach children and communities, not those in greatest need. In doing so, national progress may actually have been slowed. (UNICEF, 2015)
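
Lake’s point about averages can be shown with a toy calculation (all numbers invented): a national average can improve markedly even while the hardest-to-reach group slips backwards, because each group counts only in proportion to its population share.

```python
# (population share, primary school enrolment rate) for two groups,
# before and after a hypothetical development push.
before = [(0.8, 0.70), (0.2, 0.40)]  # majority group, hardest-to-reach group
after = [(0.8, 0.90), (0.2, 0.38)]   # majority improves, poorest slip back

national_before = sum(share * rate for share, rate in before)
national_after = sum(share * rate for share, rate in after)

print(f"National average: {national_before:.0%} -> {national_after:.0%}")  # 64% -> 80%
print(f"Hardest-to-reach: {before[1][1]:.0%} -> {after[1][1]:.0%}")        # 40% -> 38%
```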

Given these unsatisfactory results, the international community agreed that a more robust initiative was needed, and the Sustainable Development Goals (SDGs) were adopted at the United Nations in 2015. They comprise 169 targets spread over 17 priority areas, all to be achieved by 2030:

  1. No poverty
  2. No hunger
  3. Good health
  4. Quality education
  5. Gender equality
  6. Clean water and sanitation
  7. Renewable energy
  8. Good jobs and economic growth
  9. Innovation and infrastructure
  10. Reduced inequalities
  11. Sustainable cities and communities
  12. Responsible consumption
  13. Climate action
  14. Life below water
  15. Life on land
  16. Peace and justice
  17. Partnerships for the goals.

Like the Millennium Development Goals, the Sustainable Development Goals can be described as aspirational. Although the newer targets have their critics, one reason they may offer greater hope of reducing poverty is that the planned interventions are more detailed. The aim is not only to reduce poverty but to address the many conditions that feed and cement it, including poor (or negative) economic growth. And the most vulnerable are now being targeted proactively, addressing one of the criticisms of the Millennium Development Goals.

Globalisation and the wealth–poverty dynamic

Globalisation is an important concept to add to the discussion of global wealth and poverty. It refers to the perception that the world is increasingly being moulded into a shared social space by economic and technological forces, such that developments in one region can have profound consequences for individuals and communities on the other side of the world. Central to the idea is intensity: globalisation is said to be ever increasing in scope, scale and speed, to the point that it is effectively irrevocable. Globalisation is also multi-dimensional. It is more than the goods that flow between geographically diverse communities; it includes not only the what, but also the how and the why – the frequency with which something occurs, the social consequences of the process and the range of people involved. Although the concept is contested and subject to many different interpretations, it has clear relevance to the subject of this chapter.

It can be said that the process of nations becoming more interconnected has worked towards reducing poverty. Certainly the World Bank argues that globalisation has improved the material circumstances of those who have engaged in the global economy. Though such an analysis is accurate at one level, it fails to account for the structural conditions that influence poverty. An alternative view is that globalisation actually causes poverty by further entrenching inequality and concentrating any gains in the hands of those who are already wealthy and powerful. For example, the internet has allowed many individuals to establish successful businesses and sell their goods all over the world. But how can you take advantage of this technology if you live in an area without internet access due to poor infrastructure, poverty or war? Such citizens fall further behind and the inequalities that already exist are aggravated. Any analysis of the impact of globalisation on the wealth–poverty dynamic must recognise both of these perspectives. But globalisation is a complex issue: if it is viewed only in terms of ‘good’ and ‘bad’, we will not appreciate the multifaceted nature of global processes.

For the purposes of our analysis, globalisation has opened up many (primarily economic) opportunities, and this is evident in the reduction in the number of those living in extreme poverty – from over half the world’s population in 1981 to close to ten per cent today, according to World Bank figures that take account of issues like inflation. At the same time, it can be said that globalisation has entrenched power relationships and inequalities, with material effects on poverty and inequality. A common critique relevant to our discussion on poverty is that globalisation is another word for ‘Americanisation’. According to this critique, many of the economic policies that supposedly ‘open up’ international markets benefit US-based multinational corporations and create fertile ground internationally for American foreign policy objectives. On the other hand, globalisation can also be seen as hybridisation. This view was initially based on the creation of ‘new’ cultures and identities due to colonisation and the destruction of traditional indigenous groups. Applied to the processes of globalisation, hybridity has taken on a more positive character – framing globalisation as a series of processes that benefit all sides of an exchange by promoting intercultural development and harmony.

Globalisation and neoliberalism

One reason that poverty has remained a key characteristic of the global economy is a suite of policy initiatives, based on the economic philosophy of neoliberalism, that have arguably failed the world’s poorest and most vulnerable. Since the 1970s, according to Stewart Firth (2005), the priority of the state has been to create and implement policies that promote a neoliberal economic agenda – that is, the opening up and deregulation of markets and the privatisation of essential services. In his book Globalization and its Discontents (2002), former World Bank chief economist and Nobel laureate Joseph Stiglitz provides a number of examples of how free market neoliberal thinking has driven the agenda of international institutions such as the International Monetary Fund and the World Trade Organization since the 1970s. The result has been trade deals and reforms that minimise the role of government, the removal of trade barriers – even ones that protect workers’ rights – and a reliance on the flawed belief that economic growth and increases in wealth will eventually trickle down to all segments of society. These organisations have fundamentally altered the traditional role of the state, whose priority has shifted towards the promotion and protection of an open, market-orientated system. States focused on the market often fail to meet the needs of the majority of their populations or to address poverty. Hence globalisation, viewed through the lens of neoliberal policies, has resulted in the welfare of citizens being diminished at many levels.

The global financial crisis of 2008 highlights a bigger challenge for globalisation in addressing poverty. The crisis began in one nation but, due to the interconnected nature of the global economy, what started as a collapse of the American subprime mortgage market quickly reverberated across markets worldwide. Efforts to reduce poverty suffered as recession and wealth contraction left less money available: nations prioritised spending at home, and foreign direct investment fell as corporations delayed or cancelled projects. These events had negative consequences for poverty levels in developed nations, but even more so for citizens of developing countries. While significant economic shocks like this are not common, the risk always remains that in an interconnected global economy the poorest will suffer the most when they occur.

Conclusion

It is one of the major conundrums of our world that poverty still exists amidst extreme and growing wealth. Today, the richest 1 per cent of the world’s population hold half the world’s wealth; the bottom 80 per cent own just 5.5 per cent. Worse still, measures of inequality and wealth distribution such as these appear to be deteriorating over time. It seems that while economic processes have helped lift many out of poverty, they have largely failed to mitigate income and wealth inequality. This poses serious moral and ethical questions. What cannot be disputed is that the interdependence of our economies is best accompanied by an equal measure of ethical concern. That is, we owe each and every person a debt of responsibility for the actions we take and the policies we promote within our own states. Hopefully the recognition of this – perhaps best marked by the United Nations’ 2015 Sustainable Development Goals – will lead to a more just world in the years ahead.