A first attempt to create an economic and monetary union between the members of the European Economic Community (EEC) goes back to an initiative by the European Commission in 1969. The initiative proclaimed the need for "greater coordination of economic policies and monetary cooperation" and was introduced at a meeting of the European Council. The European Council tasked Pierre Werner, Prime Minister of Luxembourg, with finding a way to reduce currency exchange rate volatility. His report was published in 1970 and recommended the centralization of national macroeconomic policies, but it did not propose a single currency or central bank.
In 1971, U.S. President Richard Nixon removed the gold backing from the U.S. dollar, causing a collapse in the Bretton Woods system that affected all the world's major currencies. The widespread currency floats and devaluations set back aspirations for European monetary union. However, in 1979, the European Monetary System (EMS) was created, fixing exchange rates onto the European Currency Unit (ECU), an accounting currency introduced to stabilize exchange rates and counter inflation. In 1989, European leaders agreed on a plan for currency union, which was formalized in the 1992 Maastricht Treaty. The treaty included the goal of creating a single currency by 1999, although without the participation of the United Kingdom. However, gaining approval for the treaty was a challenge. Germany was cautious about giving up its stable currency, France approved the treaty by a narrow margin, and Denmark refused to ratify until it obtained an opt-out from the planned monetary union (similar to the United Kingdom's).
In 1994, the European Monetary Institute, the forerunner to the European Central Bank, was created. After much disagreement, in 1995 the name euro was adopted for the new currency (replacing the name ecu used for the previous accounting currency) and it was agreed that it would be launched on January 1, 1999. In 1998, the 11 countries that would participate in the initial launch were selected. To adopt the new currency, member states had to meet strict criteria, including a budget deficit of less than 3% of their GDP, a debt ratio of less than 60% of GDP, low inflation, and interest rates close to the EU average. Greece failed to meet the criteria and was excluded from joining the monetary union in 1999. The UK and Denmark received their opt-outs, while Sweden, which had joined the EU in 1995, after the Maastricht Treaty, was too late to join the initial group of member states. In 1998, the European Central Bank succeeded the European Monetary Institute. The conversion rates between the 11 participating national currencies and the euro were then established.
The currency was introduced in non-physical form (traveler's checks, electronic transfers, banking, etc.) at midnight on January 1, 1999, when the national currencies of participating countries (the eurozone) ceased to exist independently: their exchange rates were locked at fixed rates against each other, effectively making them non-decimal subdivisions of the euro. The notes and coins of the old currencies continued to be used as legal tender until new euro notes and coins were introduced on January 1, 2002. Beginning January 1, 1999, all bonds and other forms of government debt issued by eurozone states were denominated in euros.
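To see what "non-decimal subdivisions" meant in practice, here is a small worked illustration (the rate shown is the officially fixed conversion rate for the German mark; the price is an arbitrary example). Converting a legacy-currency amount into euros simply meant dividing by the fixed rate:

\[
\text{amount in euro} = \frac{\text{amount in legacy currency}}{\text{fixed conversion rate}}, \qquad \text{e.g. } \frac{100\ \text{DM}}{1.95583\ \text{DM per euro}} \approx 51.13\ \text{euros}.
\]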
In 2000, Denmark held a referendum on whether to abandon its opt-out from the euro. Voters chose to retain the Danish krone, a result that also set back plans for a referendum in the UK.
Greece joined the eurozone on January 1, 2001, one year before the physical euro coins and notes replaced the old national currencies in the eurozone.
The enlargement of the eurozone is an ongoing process within the EU. All member states, except Denmark and the United Kingdom, which negotiated opt-outs from the provisions, are obliged to adopt the euro as their sole currency once they meet the criteria. Following the EU enlargement by 10 new members in 2004, seven countries joined the eurozone: Slovenia (2007), Cyprus (2008), Malta (2008), Slovakia (2009), Estonia (2011), Latvia (2014), and Lithuania (2015). Seven remaining states, Bulgaria, Croatia, the Czech Republic, Hungary, Poland, Romania, and Sweden, are on the enlargement agenda.
Sweden, which joined the EU in 1995, turned down euro adoption in a 2003 referendum. Since then, the country has intentionally avoided fulfilling the adoption requirements.
Several European microstates outside the EU have adopted the euro as their currency. For the EU to sanction this adoption, a monetary agreement must be concluded. Prior to the launch of the euro, agreements were reached with Monaco, San Marino, and Vatican City by EU member states (Italy in the case of San Marino and Vatican City and France in the case of Monaco) allowing them to use the euro and mint a limited amount of euro coins (but not banknotes). All these states previously had monetary agreements to use currencies that the euro replaced. A similar agreement was negotiated with Andorra and came into force in 2012. Outside the EU, there are currently three French territories and a British territory that have agreements to use the euro as their currency. All other dependent territories of eurozone member states that have opted not to be a part of the EU, usually with Overseas Country and Territory (OCT) status, use local currencies, often pegged to the euro or U.S. dollar.
Montenegro and Kosovo (non-EU members) have also used the euro since its launch, as they previously used the German mark rather than the Yugoslav dinar. Unlike the states above, however, they do not have a formal agreement with the EU to use the euro as their currency (unilateral use) and have never minted marks or euros. Instead, they depend on bills and coins already in circulation.
Following the U.S. financial crisis in 2008, fears of a sovereign debt crisis developed in 2009 among fiscally conservative investors concerning some European states. Several eurozone member states (Greece, Portugal, Ireland, Spain, and Cyprus) were unable to repay or refinance their government debt or bail out over-indebted banks under their national supervision without the assistance of third parties like other eurozone countries, the European Central Bank (ECB), or the International Monetary Fund (IMF).
The detailed causes of the debt crisis varied. In several countries, private debts arising from a property bubble were transferred to sovereign debt as a result of banking system bailouts and government responses to slowing economies post-bubble. The structure of the eurozone as a currency union (i.e., one currency) without fiscal union (e.g., different tax and public pension rules) contributed to the crisis and limited the ability of European leaders to respond. As concerns intensified in 2010 and thereafter, leading European nations implemented a series of financial support measures such as the European Financial Stability Facility (EFSF) and European Stability Mechanism (ESM). The ECB also contributed to solving the crisis by lowering interest rates and providing cheap loans of more than one trillion euros to maintain money flows between European banks. In 2012, the ECB calmed financial markets by announcing free, unlimited support for all eurozone countries involved in a sovereign state bailout/precautionary program from the EFSF/ESM through yield-lowering Outright Monetary Transactions (OMT).
Return to economic growth and improved structural deficits enabled Ireland and Portugal to exit their bailout programs in 2014. Greece and Cyprus both managed to partly regain market access in 2014. Spain never officially received a bailout program. Nonetheless, the crisis had significant adverse economic effects, with unemployment rates in Greece and Spain reaching 27%. It was also blamed for subdued economic growth, not only for the entire eurozone, but for the entire European Union. As such, it is thought to have had a major political impact on the ruling governments in 10 out of 19 eurozone countries, contributing to power shifts in Greece, Ireland, France, Italy, Portugal, Spain, Slovenia, Slovakia, Belgium, and the Netherlands, as well as outside of the eurozone in the United Kingdom.
The tensions between Georgia and Russia, heightened even before the collapse of the Soviet Union, climaxed during the secessionist conflicts in South Ossetia and Abkhazia.
From 1922 to 1990, the South Ossetian Autonomous Oblast was an autonomous oblast (administrative unit) of the Soviet Union created within the Georgian Soviet Socialist Republic. Its autonomy, however, was revoked in 1990 by the Georgian Supreme Council. In response, South Ossetia declared independence from Georgia in 1991.
The crisis escalation led to the 1991-92 South Ossetia War.
The separatists were aided by former Soviet military units, now under Russian command. In the aftermath of the war, some parts of the former South Ossetian Autonomous Oblast remained under Georgian control, while the Tskhinvali separatist authorities (the self-proclaimed Republic of South Ossetia) controlled one-third of the territory of the South Ossetian Autonomous Oblast.
Abkhazia, on the other hand, enjoyed autonomy within Soviet Georgia when the Soviet Union began to disintegrate in the late 1980s. Simmering ethnic tensions between the Abkhaz, the region's "titular ethnicity," and Georgians, the largest single ethnic group at that time, culminated in the 1992-1993 War in Abkhazia, which resulted in Georgia's loss of control of most of Abkhazia, the de facto independence of Abkhazia, and the mass exodus and ethnic cleansing of Georgians from Abkhazia. Despite the 1994 ceasefire agreement and years of negotiations, the dispute remained unresolved.
The region of Transcaucasia lies between the Russian region of the North Caucasus and the Middle East, forming a buffer zone between Russia and the Middle East and bordering Turkey and Iran. The strategic importance of the region has made it a security concern for Russia. Significant economic reasons, such as the presence or transportation of oil, also affect Russian interest in Transcaucasia. Furthermore, Russia saw the Black Sea coast and the border with Turkey as invaluable strategic attributes of Georgia. Russia had more vested interests in Abkhazia than in South Ossetia, since the Russian military presence on the Black Sea coast was seen as vital to Russian influence in the Black Sea. Before the early 2000s, Russia regarded South Ossetia primarily as a tool for retaining its grip on Georgia. Support for the Abkhaz from various groups within Russia, such as the Confederation of Mountain Peoples of the Caucasus, Cossacks, and regular military units, and support for South Ossetia from their ethnic brethren who lived in Russia's federal subject of North Ossetia, proved critical in the de facto secession of Abkhazia and South Ossetia from Georgia.
Vladimir Putin became president of the Russian Federation in 2000, which had a profound impact on Russo-Georgian relations. The conflict between Russia and Georgia began to escalate in 2000, when Georgia became the first and only member of the Commonwealth of Independent States (CIS) on which the Russian visa regime was imposed. In 2001, Eduard Kokoity, an alleged member of organized crime, became de facto president of South Ossetia. He was endorsed by Russia because he was expected to subvert the peaceful reintegration of South Ossetia into Georgia. The Russian government also began massive distribution of Russian passports to the residents of Abkhazia and South Ossetia in 2002. This "passportization" policy laid the foundation for Russia's future claim to these territories. In 2003, Putin began to consider the possibility of a military solution to the conflict with Georgia. After Georgia deported four suspected Russian spies in 2006, Russia began a full-scale diplomatic and economic war against Georgia, accompanied by the persecution of ethnic Georgians living in Russia.
In 2008, Abkhazia and South Ossetia submitted formal requests for their recognition to Russia’s parliament. Dmitry Rogozin, Russian ambassador to NATO, warned that Georgia’s NATO membership aspirations would cause Russia to support the independence of Abkhazia and South Ossetia. The Russian State Duma adopted a resolution in which it called on the President of Russia and the government to consider the recognition.
On August 1, 2008, Ossetian separatists began shelling Georgian villages, drawing a sporadic response from Georgian peacekeepers in the region. To put an end to these attacks and restore order, the Georgian Army was sent to the South Ossetian conflict zone. Georgian forces took control of most of Tskhinvali, a separatist stronghold, within hours. Georgia later stated it was also responding to Russia moving non-peacekeeping units into the country. In response, Russia accused Georgia of "aggression against South Ossetia" and launched a large-scale land, air, and sea invasion of Georgia on August 8, calling it a "peace enforcement" operation. Russian and Ossetian forces battled Georgian forces in and around South Ossetia for several days until the Georgian forces retreated. Russian and Abkhaz forces opened a second front by attacking the Kodori Gorge held by Georgia. Russian naval forces blockaded part of the Georgian coast. This was the first war in history in which cyber warfare coincided with military action. An active information war was waged during and after the conflict.
On August 17, Russian President Dmitry Medvedev (who took office in May) announced that Russian forces would begin to pull out of Georgia the following day. The two countries exchanged prisoners of war. Russian forces withdrew from the buffer zones adjacent to Abkhazia and South Ossetia in October and authority over them was transferred to the European Union monitoring mission in Georgia. Russian Foreign Minister Sergey Lavrov said that a military presence in Abkhazia and South Ossetia was essential to prevent Georgia from regaining control. Georgia considers Abkhazia and South Ossetia Russian-occupied territories. On August 25, 2008, the Russian parliament unanimously voted in favor of a motion urging President Medvedev to recognize Abkhazia and South Ossetia as independent states, and a day later Medvedev signed decrees recognizing the two states. In 2011, the European Parliament passed a resolution recognizing Abkhazia and South Ossetia as occupied Georgian territories.
The recognition by Russia was condemned by many international actors, including the United States, France, the secretary-general of the Council of Europe, NATO, and the G7 on the grounds that it violated Georgia’s territorial integrity, United Nations Security Council resolutions, and the ceasefire agreement.
Although Georgia has no significant oil or gas reserves, its territory hosts part of the Baku-Tbilisi-Ceyhan pipeline supplying Europe. The pipeline circumvents both Russia and Iran. Because it has decreased Western dependence on Middle Eastern oil, the pipeline has been a major factor in the United States’ support for Georgia.
The 2008 war was the first time since the fall of the Soviet Union that the Russian military had been used against an independent state, demonstrating Russia's willingness to wage a full-scale military campaign to attain its political objectives. The failure of the Western security system to respond swiftly to Russia's attempt to forcibly revise the existing borders revealed its weaknesses. Ukraine and other post-Soviet states received a clear message from the Russian leadership that possible accession to NATO would provoke a foreign invasion and the break-up of the country. The war also derailed the construction of the EU-sponsored Nabucco pipeline (connecting Central Asian reserves to Europe) through Transcaucasia.
The war eliminated Georgia’s prospects for joining NATO.
The Georgian government severed diplomatic relations with Russia.
The war in Georgia showed Russia's assertiveness in revising international relations and undermining the hegemony of the United States. Shortly after the war, Russian president Medvedev unveiled a five-point statement of Russian foreign policy, known as the Medvedev Doctrine. It implied that the presence of Russian citizens in foreign countries would form a doctrinal foundation for invasion if needed. Medvedev's statement that there were areas in which Russia had "privileged interests" underlined Russia's particular interest in the former Soviet Union and the fact that Russia would feel endangered by the subversion of local pro-Russian regimes.
Although Ukraine has been an independent country since 1991, Russia has perceived it as part of its sphere of interests. After the collapse of the Soviet Union, both states retained close ties, but tensions began almost immediately. There were several conflict points, most importantly Ukraine's significant nuclear arsenal, which Ukraine agreed to abandon on the condition that Russia would issue an assurance against threats or use of force against the territorial integrity or political independence of Ukraine. A second point was the division of the Black Sea Fleet: Ukraine agreed to lease the port of Sevastopol so that the Russian Black Sea Fleet could remain based there alongside the Ukrainian fleet. Furthermore, throughout the 1990s and 2000s, Ukraine and Russia engaged in several gas disputes. Russia was further aggravated by the Orange Revolution of 2004, which saw pro-Western Viktor Yushchenko rise to power instead of pro-Russian Viktor Yanukovich. Ukraine also continued to increase its cooperation with NATO.
Pro-Russian Yanukovich was eventually elected in 2010, and Russia felt that many ties with Ukraine could be repaired. Prior to the election, Ukraine had not renewed the lease of the Black Sea naval base at Sevastopol, which meant that Russian troops would have to leave Crimea by 2017. However, Yanukovich signed a new lease that also allowed Russian troops to train on the Kerch Peninsula. Many in Ukraine viewed the extension as unconstitutional, because Ukraine's constitution stated that no foreign troops would be permanently stationed in Ukraine after the Sevastopol treaty expired. Moreover, Yulia Tymoshenko, Yanukovich's main opposition figure, was jailed on what many considered trumped-up charges, leading to further dissatisfaction with the government.
Another important factor in the tensions between Russia and Ukraine was Ukraine's gradually closer ties with the European Union. For years, the EU promoted tighter relations with Ukraine to encourage the country to take a more pro-European and less pro-Russian direction.
In 2013, Russia warned Ukraine that if it went ahead with a long-planned agreement on free trade with the EU, it would face financial catastrophe and possibly the collapse of the state. Sergey Glazyev, an adviser to President Vladimir Putin, suggested that, contrary to international law, if Ukraine signed the agreement, Russia would consider the bilateral treaty delineating the countries' borders to be void. Russia would no longer guarantee Ukraine's status as a state and could possibly intervene if pro-Russian regions of the country appealed directly to Russia. Later in 2013, Viktor Yanukovich declined to sign the agreement with the European Union, choosing closer ties with Russia.
After Yanukovich’s decision, months of protests as part of what would be called the Euromaidan movement followed. In February 2014, protesters ousted the government of Viktor Yanukovich, who had been democratically elected in 2010. The protesters took control of government buildings in the capital city of Kiev, along with the city itself. Yanukovich fled Kiev for Kharkiv in the east of Ukraine, where he traditionally had more support. After this incident, the Ukrainian parliament voted to restore the 2004 Constitution of Ukraine and remove Yanukovich from power. However, politicians from the traditionally pro-Russian eastern and southern regions of Ukraine, including Crimea, declared continuing loyalty to Yanukovich.
Days after Yanukovich fled Kiev, armed men opposed to the Euromaidan movement began to take control of the Crimean Peninsula. Checkpoints were established by unmarked soldiers with green military-grade uniforms and equipment in the capital of the Autonomous Republic of Crimea, Simferopol, and the independently administered port city of Sevastopol, home to a Russian naval base. After the occupation of the Crimean parliament by these unmarked troops, with evidence suggesting that they were Russian special forces, the Crimean leadership announced it would hold a referendum on secession from Ukraine. This heavily disputed referendum was followed by the annexation of Crimea by the Russian Federation in mid-March. Ukraine and most of the international community refused to recognize the referendum or the annexation. On April 15, the Ukrainian parliament declared Crimea a territory temporarily occupied by Russia.
Since annexing Crimea, the Russian government has increased its military presence in the region, with Russian president Vladimir Putin saying a Russian military task force would be established there. In 2014, the Ukrainian Border Guard Service announced that Russian troops had begun withdrawing from the areas of Kherson Oblast they had occupied: parts of the Arabat Spit and the islands around the Syvash, which are geographically part of Crimea but administratively part of Kherson Oblast. One such village occupied by Russian troops was Strilkove, on the Arabat Spit, which housed an important gas distribution center. Russian forces stated they had taken over the gas distribution center to prevent terrorist attacks. They subsequently withdrew from southern Kherson but continued to occupy the gas distribution center outside Strilkove. In August 2016, Ukraine reported that Russia had increased its military presence along the demarcation line. Border crossings were then closed. Both sides accused each other of killings and provoking skirmishes, but it remains unclear which accusations were true, with both Russia and Ukraine denying the opponent's claims.
In addition to the annexation of Crimea, an armed conflict in the Donbass region of Ukraine, known as the War in Donbass, began in March 2014. Protests by pro-Russian and anti-government groups took place in the Donetsk and Luhansk oblasts of Ukraine, together commonly called the Donbass, in the aftermath of the Euromaidan movement. These demonstrations, which followed the annexation of Crimea by the Russian Federation and were part of a wider wave of concurrent pro-Russian protests across southern and eastern Ukraine, escalated into an armed conflict between the Ukrainian government and the separatist forces of the self-declared Donetsk and Luhansk People's Republics, which were supported by Russian military forces.
Since the start of the conflict, there have been 11 ceasefires, each intended to be indefinite. As of March 2017, the fighting continues.
There was a range of international reactions to the Russian annexation of Crimea. The UN General Assembly passed a non-binding resolution, by a vote of 100 in favor to 11 against with 58 abstentions in the 193-nation assembly, declaring Crimea's Moscow-backed referendum invalid.
Many countries implemented economic sanctions against Russia, Russian individuals, or companies, to which Russia responded in kind.
The United States government imposed sanctions against persons it deemed to have violated or assisted in the violation of Ukraine's sovereignty. The European Union suspended talks with Russia on economic and visa-related matters and eventually added more stringent sanctions against Russia, including asset freezes. Japan announced sanctions, which included suspension of talks relating to military, space, investment, and visa requirements. NATO condemned Russia's military escalation in Crimea and stated that it was a breach of international law, while the Council of Europe expressed its full support for the territorial integrity and national unity of Ukraine. China announced that it respected "the independence, sovereignty and territorial integrity of Ukraine." A spokesman restated China's belief in non-interference in the internal affairs of other nations and urged dialogue.
Recall the series of events that led to the financial crisis in 2008.
The financial crisis of 2008, also known as the global financial crisis, is considered by many economists to be the worst financial crisis since the Great Depression of the 1930s. It began in 2007 with a crisis in the subprime mortgage market in the United States and developed into a full-blown international banking crisis with the collapse of the investment bank Lehman Brothers in 2008. Excessive risk-taking by banks such as Lehman Brothers helped to globally magnify the financial impact. Massive bail-outs of financial institutions and other palliative monetary and fiscal policies were employed to prevent a possible collapse of the world’s financial system. The crisis was nonetheless followed by a global economic downturn, the Great Recession. In Europe, it contributed to the European debt crisis and fueled a crisis in the banking system of countries using the euro.
The European debt crisis, known also as the eurozone crisis, resulted from a combination of complex factors, including the globalization of finance, easy credit conditions from 2002-2008 that encouraged high-risk lending and borrowing practices, the financial crisis of 2008, international trade imbalances, real estate bubbles that have since burst, the Great Recession of 2008–2012, fiscal policy choices related to government revenues and expenses, and approaches used by states to bail out troubled banking industries and private bondholders, assuming private debt burdens or socializing losses.
In 1992, members of the European Union signed the Maastricht Treaty, under which they pledged to limit their deficit spending and debt levels. However, in the early 2000s, some EU member states failed to stay within the confines of the Maastricht criteria and sidestepped best practices and international standards. Some governments managed to mask their deficit and debt levels through a combination of techniques, including inconsistent accounting, off-balance-sheet transactions, and the use of complex currency and credit derivatives structures. The under-reporting was exposed when the forecast for Greece's 2009 budget deficit was revised from 6-8% of GDP (according to the Maastricht Treaty, the deficit should be no greater than 3% of GDP) to 12.7%, almost immediately after the social-democratic PASOK party won the 2009 Greek national elections. Large upward revisions of budget deficit forecasts due to the international financial crisis were not limited to Greece, but in Greece the low forecast was not corrected until very late in the year. The fact that the Greek deficit exceeded 12% of GDP, and that France held 10% of Greek debt, alarmed investors. The panic escalated when several eurozone member states were unable to repay or refinance their government debt or bail out over-indebted banks under their national supervision without the assistance of third parties like other eurozone countries, the European Central Bank (ECB), or the International Monetary Fund (IMF). The countries involved, most notably Portugal, Ireland, Greece, and Spain, were collectively referred to by the derogatory acronym PIGS. During the debt crisis, Ireland replaced Italy as the "I," since the acronym was originally coined to refer to the economies of Southern European countries.
The detailed causes of the debt crisis varied. In several countries, private debts arising from a property bubble were transferred to sovereign debt as a result of banking system bailouts and government responses to slowing economies post-bubble. The structure of the eurozone as a currency union (i.e., one currency) without fiscal union (e.g., different tax and public pension rules) contributed to the crisis and limited the ability of European leaders to respond. Also, because European banks owned a significant amount of sovereign debt, concerns about the solvency of banking systems and concerns about the solvency of sovereigns reinforced each other.
As concerns intensified in early 2010 and thereafter, leading European nations implemented a series of financial support measures such as the European Financial Stability Facility (EFSF) and European Stability Mechanism (ESM).
The mandate of the EFSF was to "safeguard financial stability in Europe by providing financial assistance" to eurozone states. It could issue bonds or other debt instruments on the market to raise the funds needed to provide loans to eurozone countries in financial trouble, recapitalize banks, or buy sovereign debt.
The ESM was established in 2012 (taking over the functions of the EFSF) as a permanent firewall for the eurozone, to safeguard and provide instant access to financial assistance programs for member states of the eurozone in financial difficulty, with a maximum lending capacity of €500 billion.
The ECB also contributed to solving the crisis by lowering interest rates and providing cheap loans of more than one trillion euros to maintain money flows between European banks. In 2012, the ECB calmed financial markets by announcing free, unlimited support for all eurozone countries involved in a sovereign state bailout/precautionary program from the EFSF/ESM.
Many European countries, including non-EU members like Iceland, embarked on austerity programs, reducing their budget deficits relative to GDP from 2010 to 2011. For example, Greece improved its budget deficit from 10.4% of GDP in 2010 to 9.6% in 2011. Iceland, Italy, Ireland, Portugal, France, and Spain also improved their budget deficits from 2010 to 2011 relative to GDP. However, each of these countries (unlike Germany) had a public-debt-to-GDP ratio that increased (i.e., worsened) from 2010 to 2011. Greece's public-debt-to-GDP ratio increased from 143% in 2010 to 165% in 2011 and to 185% in 2014. This indicates that despite improving budget deficits, GDP growth was not sufficient to support a decline (improvement) in the debt-to-GDP ratio. Eurostat reported that the debt-to-GDP ratio for the 17 euro-area countries together was 70.1% in 2008, 79.9% in 2009, 85.3% in 2010, and 87.2% in 2011.
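A rough sketch of the arithmetic behind this point may help (an illustration using the Greek figures quoted above, not an additional data source). The debt-to-GDP ratio rises whenever new borrowing outpaces nominal GDP growth, roughly

\[
\Delta\!\left(\frac{D}{Y}\right) \approx \frac{\text{deficit}}{Y} - g\,\frac{D}{Y},
\]

where \(D\) is public debt, \(Y\) is nominal GDP, and \(g\) is the nominal growth rate. With a debt ratio of 143% and a deficit of 9.6% of GDP, nominal GDP would have needed to grow by roughly \(9.6/143 \approx 6.7\%\) just to hold the ratio steady; with Greek output contracting instead, the ratio kept climbing even as the deficit narrowed.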
The crisis had significant adverse effects on the labor market. From 2010 to 2011, the unemployment rates in Spain, Greece, Italy, Ireland, Portugal, and the UK increased, reaching particularly high rates (over 20%) in Spain and Greece. France saw no significant change, while in Germany and Iceland the unemployment rate declined. Eurostat reported that eurozone unemployment reached record levels in September 2012 at 11.6%, up from 10.3% the prior year, but unemployment varied significantly by country. The crisis was also blamed for subdued economic growth, not only for the entire eurozone but for the entire European Union. As such, it is thought to have had a major political impact on the ruling governments in 10 out of 19 eurozone countries, contributing to power shifts in Greece, Ireland, France, Italy, Portugal, Spain, Slovenia, Slovakia, Belgium, and the Netherlands, as well as outside of the eurozone in the United Kingdom.
Poland and Slovakia are the only two members of the European Union that avoided a GDP recession during the years affected by the Great Recession.
To fight the crisis, some governments also raised taxes and lowered expenditures. This contributed to social unrest and to debates among economists, many of whom advocate greater deficits (and thus no austerity measures) when economies are struggling. Especially in countries where budget deficits and sovereign debts increased sharply, a crisis of confidence emerged, with more stable national economies attracting more investors. By the end of 2011, Germany was estimated to have made more than €9 billion out of the crisis as investors flocked to safer but near-zero-interest-rate German federal government bonds (bunds). By mid-2012, the Netherlands, Austria, and Finland also benefited from zero or negative interest rates, with Belgium and France on the list of eventual beneficiaries.
Despite the substantial rise of sovereign debt in only a few eurozone countries, with Greece, Ireland, and Portugal collectively accounting for only 6% of the eurozone's gross domestic product (GDP), the debt crisis became a perceived problem for the area as a whole, leading to speculation about further contagion to other European countries and a possible break-up of the eurozone. In total, the debt crisis forced five out of 17 eurozone countries to seek help from other nations by the end of 2012. Due to successful fiscal consolidation and implementation of structural reforms in the countries most at risk and various policy measures taken by EU leaders and the ECB, financial stability in the eurozone improved significantly and interest rates steadily fell. This also greatly diminished contagion risk for other eurozone countries. As of October 2012, only three out of 17 eurozone countries, Greece, Portugal, and Cyprus, still battled long-term interest rates above 6%. By early 2013, successful sovereign debt auctions across the eurozone, most importantly in Ireland, Spain, and Portugal, showed that investors believed the ECB backstop had worked.
Return to economic growth and improved structural deficits enabled Ireland and Portugal to exit their bailout programs in mid-2014. Greece and Cyprus both managed to partly regain market access in 2014. Spain never officially received a bailout program; its rescue package from the ESM was earmarked for a bank recapitalization fund and did not provide financial support for the government itself. Despite this progress, the debt crisis revealed serious weaknesses in the process of economic integration within the EU, which in turn fed the broader crisis of confidence that the idea of European integration continues to face today.
Evaluate the effectiveness of the austerity measures advocated by the European Union’s leadership
Under pressure from the European Union leadership, many European countries embarked on austerity programs in response to the European debt crisis, despite evidence that overspending was, in some cases, only partly responsible for the unfolding economic disaster. However, austerity measures became the main condition under which the eurozone countries in the most dramatic economic situation, most notably Greece, Ireland, Portugal, and Spain, would receive financial support from the Troika,
a tripartite committee formed by the European Commission, the European Central Bank, and the International Monetary Fund (EC, ECB and IMF).
On May 1, 2010, the Greek government announced a series of austerity measures to secure a three-year €110 billion loan. The Troika offered Greece a second bailout loan worth €130 billion in October 2011, but with activation conditional on implementation of further austerity measures and a debt restructuring agreement. The implemented measures helped Greece bring down its primary deficit but contributed to a worsening of its recession. Greek GDP had its worst decline in 2011, when 111,000 Greek companies went bankrupt (27% higher than in 2010). As a result, Greeks lost about 40% of their purchasing power since the start of the crisis, spent 40% less on goods and services, and saw the seasonally adjusted unemployment rate grow from 7.5% in September 2008 to a record high of 27.9% in June 2013. The youth unemployment rate rose from 22% to as high as 62%.
In February 2012, an IMF official negotiating Greek austerity measures admitted that excessive spending cuts were harming Greece. The IMF had predicted that the Greek economy would contract by 5.5% by 2014; under the harsh austerity measures, the actual contraction after six years of recession amounted to 17%.
The Irish sovereign debt crisis arose not from government over-spending, but from the state guaranteeing the six main Irish-based banks that had financed a property bubble. Irish banks had lost an estimated €100 billion, much of it related to defaulted loans to property developers and homeowners made in the midst of the bubble, which burst around 2007. The economy subsequently collapsed in 2008.
Ireland was one country that initially appeared to benefit from austerity measures, but subsequent research demonstrated that its economy suffered from them. Unemployment rose from 4% in 2006 to 14% by 2010, while the national budget went from a surplus in 2007 to a deficit of 32% of GDP in 2010, the highest in the history of the eurozone.
In 2009, the Portuguese deficit was 9.4% of GDP, one of the highest in the eurozone.
In 2010, the Portuguese government announced a fresh austerity package consisting of a series of tax hikes and salary cuts for public servants. Also in 2010, the country reached a record-high unemployment rate of nearly 11%, a figure not seen for over two decades, while the number of public servants remained very high. In the first half of 2011, Portugal requested a €78 billion IMF-EU bailout package in a bid to stabilize its public finances, affected greatly by decades of governmental overspending and an over-bureaucratized civil service. After the bailout was announced, the government managed to implement measures to improve the state's financial situation and seemed to be on the right track. These measures, however, drove the unemployment rate to over 15% in 2012. The bailout conditions of austerity also created a political crisis in the country, resolved in 2015 when an anti-austerity left-wing coalition took power.
Spain entered the crisis period with a relatively modest public debt of 36.2% of GDP. This was largely due to ballooning tax revenue from the housing bubble, which helped accommodate a decade of increased government spending without debt accumulation. In response to the crisis, Spain initiated an austerity program consisting primarily of tax increases. Prime Minister Mariano Rajoy announced in 2012 €65 billion of austerity, including cuts in wages and benefits and a VAT increase from 18% to 21%. The government eventually reduced its budget deficit from 11.2% of GDP in 2009 to 8.5% in 2011.
A larger economy than other countries that received bailout packages, Spain had considerable bargaining power regarding the terms of a bailout. Due to reforms already instituted by Spain’s conservative government, less stringent austerity requirements were included than in earlier bailout packages for Ireland, Portugal, and Greece.
There has been substantial criticism over the austerity measures implemented by most European nations to counter this debt crisis. U.S. economist and Nobel laureate Paul Krugman argued that the deflationary policies imposed on countries such as Greece and Spain would prolong and deepen their recessions. Together with over 9,000 signatories of A Manifesto for Economic Sense, Krugman also dismissed the belief of austerity-focusing policy makers that “budget consolidation” revives confidence in financial markets over the longer haul.
According to some economists, “growth-friendly austerity” relies on the false argument that public cuts would be compensated for by more spending from consumers and businesses, a theoretical claim that has not materialized. The case of Greece shows that excessive levels of private indebtedness and a collapse of public confidence (over 90% of Greeks fear unemployment, poverty, and the closure of businesses) led the private sector to decrease spending in an attempt to save up for rainy days ahead. This led to even lower demand for both products and labor, which further deepened the recession and made it even more difficult to generate tax revenues and fight public indebtedness.
Some economists also criticized the timing and amount of austerity measures in the bailout programs, arguing that such extensive measures should not be implemented during crisis years with an ongoing recession but delayed until positive real GDP growth returns. In 2012, a report published by the IMF found that tax hikes and spending cuts during the most recent decade had damaged GDP growth more severely than forecast.
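The disagreement largely turns on the size of the fiscal multiplier, the amount by which output falls for each unit of fiscal consolidation. A minimal sketch (the multiplier values here are illustrative assumptions, not figures taken from the IMF report): for a consolidation equal to 1% of GDP,

\[
\Delta Y \approx -m \cdot \Delta G, \qquad m = 0.5 \Rightarrow \Delta Y \approx -0.5\%\ \text{of GDP}, \qquad m = 1.5 \Rightarrow \Delta Y \approx -1.5\%\ \text{of GDP}.
\]

The larger the multiplier, the more a given round of cuts shrinks the tax base, and the smaller the actual improvement in the deficit and the debt ratio.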
Opponents of austerity measures argue that they depress economic growth and ultimately cause reduced tax revenues that outweigh the benefits of reduced public spending. Moreover, in countries with already anemic economic growth, austerity can engender deflation, which increases the real burden of existing debt. Such austerity packages can also cause the country to fall into a liquidity trap, causing credit markets to freeze up and unemployment to increase.
Supporting the conclusions of these macroeconomic models, austerity measures applied during the European debt crisis negatively affected ordinary citizens. The outcomes of introducing harsh austerity measures included the rapid increase of unemployment as government spending fell, reducing jobs in the public and/or private sector; the reduction of household disposable income through tax increases, which in turn reduced spending and consumption; and the bankruptcy of many small businesses, which contributed to even more unemployment and lowered already low productivity.
Apart from arguments over whether or not austerity, rather than increased or frozen spending, is a macroeconomic solution, union leaders argued that the working population was unjustly held responsible for the economic mismanagement of economists, investors, and bankers. Over 23 million EU workers became unemployed as a consequence of the global economic crisis of 2007-2010, leading many to call for additional regulation of the banking sector across not only Europe, but the entire world.
Following the announcement of plans to introduce austerity measures in Greece, massive demonstrations occurred throughout the country aimed at pressing parliamentarians to vote against the austerity package. In Athens alone, 19 arrests were made, while 46 civilians and 38 policemen were injured by the end of June 2011. The third round of austerity was approved by the Greek parliament in 2012 and met strong opposition, especially in Athens and Thessaloniki, where police clashed with demonstrators. Similar protests took place in Spain and Ireland, led by student communities.
In recent years, the ethics of austerity have been questioned even outside the European states hit by the harshest measures imposed as bailout conditions. For example, the Royal Society of Medicine revealed that the United Kingdom's austerity measures in healthcare may have resulted in 30,000 deaths in England and Wales in 2015.
Movement across Europe is largely regulated by the Schengen Agreement, by which 26 European countries (22 of the 28 European Union member states, plus
Iceland, Liechtenstein, Norway, and Switzerland) joined to form an area where border checks between the 26 member states (internal Schengen borders) are abolished and checks are restricted to the external Schengen borders. Countries with external borders are obligated to enforce border control regulations. Countries may reinstate internal border controls for a maximum of two months for “public policy or national security” reasons.
Article 26 of the Schengen Convention states that carriers that transport people into the Schengen area shall, if they transport people who are refused entry into the Schengen area, be responsible for paying for the return of the refused people and for additional penalties. This means that migrants without a visa are not allowed on aircraft, boats, or trains going into the Schengen area. After being refused passage, many migrants attempt to travel illegally, relying on migrant smugglers. Those who have a basis to seek asylum in the EU (asylum seekers) face the rules of the Dublin Regulation, which determines the EU member state responsible for examining an asylum application. This prevents asylum applicants in the EU from applying for asylum in numerous member states and prevents situations in which no member state takes responsibility for an asylum seeker. By default (when no family reasons or humanitarian grounds are present), the first member state that an asylum seeker entered and in which they have been fingerprinted is responsible. If the asylum seeker then moves to another member state, he or she can be transferred back to the member state they first entered. Many criticize the Dublin rules for placing too much responsibility for asylum seekers on member states on the EU's external borders (e.g., Italy, Greece, and Hungary), instead of devising a burden-sharing system among EU states.
Developing countries hosted the largest share of refugees (86% by the end of 2014). Although most Syrian refugees were hosted by neighboring countries such as Turkey, Lebanon, and Jordan, the number of asylum applications lodged by Syrian refugees in Europe steadily increased between 2011 and 2015, totaling 813,599 in 37 European countries as of November 2015. Fifty-seven percent of them applied for asylum in Germany or Serbia.
According to the UNHCR, most people arriving in Europe in 2015 were refugees, fleeing war and persecution in countries such as Syria, Afghanistan, Iraq, and Eritrea. Eighty-four percent of Mediterranean Sea arrivals in 2015 came from the world’s top ten refugee-producing countries: Syria (49%), Afghanistan (21%), Iraq (8%), Eritrea (4%), Pakistan (2%), Nigeria (2%), Somalia (2%), Sudan (1%), the Gambia (1%), and Mali (1%). Asylum seekers of seven nationalities,
Syrians, Eritreans, Iraqis, Afghans, Iranians, Somalis, and Sudanese, had an asylum recognition rate of over 50% in EU states in the first quarter of 2015, meaning that they obtained protection over half the time they applied. Wars fueling the crisis are the Syrian Civil War, the Iraq War, the War in Afghanistan, the War in Somalia, and the War in Darfur. Refugees from Eritrea, one of the most repressive states in the world, flee from indefinite military conscription and forced labor. Some ethnicities or religions from an originating country are more represented among the migrants than others; for instance, Kurds make up a substantial number of refugees from Turkey and Iraq. Fifty-eight percent of the refugees and migrants arriving in Europe by sea in 2015 were men, 17% were women, and 25% were children.
Amid an upsurge in the number of sea arrivals in Italy from Libya in 2014, several European Union governments refused to fund the Italian-run rescue operation Mare Nostrum, which was replaced by Frontex's Operation Triton. The latter involves voluntary contributions from 15 other European nations (both EU member states and non-members). Current voluntary contributors are Croatia, Iceland, Finland, Norway, Sweden, Germany, the Netherlands, France, Spain, Ireland, Portugal, Austria, Switzerland, Romania, Poland, Lithuania, and Malta. The operation was undertaken after Italy ended Mare Nostrum, which had become too costly for a single country to fund. The Italian government requested additional funds from the other EU member states, but they refused. In the first six months of 2015, Greece overtook Italy as the first EU country of arrival, becoming, in the summer of 2015, the starting point of a flow of refugees and migrants moving through Balkan countries to northern European countries, mainly Germany and Sweden.
Since April 2015, the European Union has struggled to cope with the crisis, increasing funding for border patrol operations in the Mediterranean, devising plans to fight migrant smuggling, launching Operation Sophia
with the aim of neutralizing established refugee smuggling routes in the Mediterranean, and proposing a new quota system both to relocate asylum seekers among EU states for processing of refugee claims, alleviating the burden on countries at the Union's outer borders, and to resettle asylum seekers who have been determined to be refugees. Individual countries have at times reintroduced border controls within the Schengen area, and rifts have emerged between countries willing to allow entry of asylum seekers for processing of refugee claims and those trying to discourage their entry. According to Eurostat, EU member states received over 1.2 million first-time asylum applications in 2015, more than double that of the previous year. Four states (Germany, Hungary, Sweden, and Austria) received around two-thirds of the EU's asylum applications in 2015, with Hungary, Sweden, and Austria the top recipients per capita.
Germany has been the most sought-after final destination in the EU migrant and refugee crisis.
The escalation of shipwrecks of migrant boats in the Mediterranean in 2015 led European Union leaders to reconsider their policies on border control and processing of migrants. The European Commission proposed a plan that included deploying teams in Italy and Greece for joint processing of asylum applications, and German chancellor Angela Merkel proposed a new system of quotas to distribute non-EU asylum seekers among the EU member states. As thousands of migrants started to move from Budapest to Vienna, Germany, Italy, and France demanded that asylum seekers be shared more evenly between EU states. European Commission President Jean-Claude Juncker proposed distributing 160,000 asylum seekers among EU states under a new migrant quota system. Leaders of the Visegrad Group (the Czech Republic, Hungary, Poland, and Slovakia) declared that they would not accept any compulsory long-term quota on the redistribution of migrants. France announced that it would accept 24,000 asylum seekers over two years. Britain announced that it would take in up to 20,000 refugees, primarily vulnerable children and orphans, and Germany pledged US$6.7 billion to deal with the migrant crisis. However, also in 2015, both Austria and Germany warned that they would not be able to keep up with the current pace of the influx and that it would need to slow down. By September 2016, the quota system proposed by the EU had been abandoned after staunch resistance by the Visegrad Group countries. The refugee crisis also fueled nationalist sentiments across Europe and boosted the appeal of politicians who oppose the idea of European integration entirely or in its current form, often advocating anti-immigrant and anti-refugee slogans.
Greece remains the only country where a debate over the question of leaving the eurozone has gained serious political traction; its potential exit is referred to as Grexit. Proponents of the proposal argue that leaving the euro and reintroducing the drachma would dramatically boost exports and tourism while discouraging expensive imports, thereby giving the Greek economy a chance to recover and stand on its own feet. Opponents argue that the proposal would impose excessive hardship on the Greek people, as the short-term effects would be a significant reduction in consumption and wealth for the Greek population. This could cause civil unrest in Greece and harm the reputation of the eurozone. Additionally, it could cause Greece to align more with non-EU states. The debate continues, although as of 2017 it remains mostly political. So far, no practical steps to arrange Greece's exit from the eurozone have been taken either by Greece or by the EU.
Under pressure from many of his MPs and the rise of the euroskeptic United Kingdom Independence Party (UKIP), British Prime Minister David Cameron announced in 2013 that a Conservative government would hold an in-out referendum on EU membership before the end of 2017, on a renegotiated package, if elected in 2015. The Conservative Party unexpectedly won the 2015 general election with a majority. Soon after, the European Union Referendum Act 2015 was introduced into Parliament to enable the referendum. Cameron favored remaining in a reformed EU and sought to renegotiate on four key points: protection of the single market for non-eurozone countries, reduction of red tape, exempting Britain from certain policies that would strengthen integration, and restricting EU migration.
The referendum took place in the United Kingdom and Gibraltar on June 23, 2016; 51.9% voted in favor of leaving the EU and 48.1% voted in favor of remaining a member of the EU.
The process of withdrawal from the EU is governed by Article 50 of the Treaty on European Union. No member state has ever left the EU, and it remains unclear how the process will unfold. Article 50 provides an invocation procedure whereby a member can notify the European Council, followed by a negotiation period of up to two years, after which the treaties cease to apply; however, a leaving agreement may be reached by qualified majority voting. Unless the Council of the European Union unanimously agrees to extensions, the timing for the UK leaving under the article is two years from when the country gives official notice to the EU.
Before the referendum, leading figures with a range of opinions regarding Scottish independence suggested that in the event the UK as a whole voted to leave the EU but Scotland as a whole voted to remain (as happened), a second Scottish independence referendum might be precipitated (the first took place in 2014).
On March 13, 2017, Scottish First Minister Nicola Sturgeon announced she would seek Scottish Parliament approval to negotiate with the UK Government for a Section 30 order enabling a second independence referendum, which would take place between the fall of 2018 and the spring of 2019.
Currently, accession negotiations are underway with several states. The process of enlargement is sometimes referred to as European integration. This term is also used to refer to intensified cooperation between EU member states as national governments allow for the gradual harmonization of national laws.
The Western Balkans have been prioritized for membership since emerging from war during the breakup of Yugoslavia. Albania, Macedonia, Montenegro, Serbia, and Turkey (which applied for membership in 1987, making it the longest-waiting candidate) are all recognized as official candidates, and the latter three are undergoing membership talks. Bosnia and Herzegovina and Kosovo are recognized as potential candidates for membership by the EU. In 2014, President of the European Commission Jean-Claude Juncker announced that the EU had no plans to expand in the following five years. Montenegro and Serbia have set a goal to finish accession talks by 2019.
The three major western European countries that are not EU members, Iceland, Norway, and Switzerland, have all submitted membership applications in the past. Iceland's application has since been withdrawn by its government, and Switzerland's is frozen; Norway rejected membership in two referendums. All of them, however, along with Liechtenstein, participate in the EU Single Market as well as the Schengen Area, which closely aligns them with the EU. In 2017, Iceland's newly elected government announced that it may seek to begin talks with the EU on possible future membership once again.
Moldova, Ukraine, and Georgia signed Association Agreements with the EU in 2014, which deepened their trade and political links with the EU. The European Parliament also passed a resolution recognizing the "European perspective" of all three post-Soviet countries. Ukrainian president Petro Poroshenko announced 2020 as a target for an EU membership application, but in 2016 Juncker stated that it would take at least 20–25 years for Ukraine to join the EU and NATO. The potential EU membership of Ukraine remains a critical source of tensions between the EU and Russia.
In 2002, the European Parliament noted that Armenia and Georgia may enter the EU in the future. However, in 2015, to the east of the EU, Belarus, Kazakhstan, and Russia launched the Eurasian Union, which was subsequently joined by Armenia and Kyrgyzstan. In 2017, Tigran Sargsyan, the Chairman of the Eurasian Economic Commission, stated that Armenia's stance was to cooperate and work with both the European Union and the Eurasian Union. Sargsyan added that although Armenia is part of the Eurasian Union, a new European Union Association Agreement between Armenia and the EU would be finalized shortly. Both Armenia and Georgia are members of the Council of Europe and the Euronest Parliamentary Assembly, which seeks to foster greater cooperation between the EU and Eastern European states.
Currently, Georgia is the only country in the Caucasus actively seeking EU membership.
38.2: The Middle East and North Africa in the 21st Century
38.2.1: Democracy and Authoritarianism in the Middle East
Only two Middle Eastern countries are considered democratic and five others are considered partial democracies, while the rest are categorized as authoritarian regimes. All states in the region face serious human rights challenges.
Learning Objective
Compare democratic and authoritarian countries in the Middle East
Key Points
- According to Freedom House, Israel and Tunisia are the only “free” countries of the region; Lebanon, Turkey, Kuwait, Morocco, and Jordan are rated “partly free,” and the remaining states are categorized as authoritarian regimes.
- Some authoritarian states, such as Syria, Egypt, and Iran, hold regular elections, but critics argue that concentrated presidential or clerical power prevents these systems from functioning as genuine multi-party democracies.
- Theories on the persistence of authoritarianism in the region point to the legacy of imperial rule and outside intervention, weak civil society, the absence of market-driven economies, and poverty, inequality, and low literacy.
- Nearly all states of the region, including those categorized as democratic, violate some internationally recognized human rights, with women, children, and minorities particularly affected.
Key Terms
- Freedom of the Press report
-
A yearly report by U.S.-based non-governmental organization Freedom House, measuring the level of freedom and editorial independence enjoyed by the press in nations and significant disputed territories around the world.
- confessionalism
-
A system of government where high-ranking offices are reserved for members of specific religious groups. It is usually applied to prevent sectarian conflicts.
- Freedom House
-
A U.S.-based and U.S.-government funded non-governmental organization that conducts research and advocacy on democracy, political freedom, and human rights. It was founded in October 1941. Wendell Willkie and Eleanor Roosevelt served as its first honorary chairpersons. It describes itself as a “clear voice for democracy and freedom around the world.”
Democratic Status of Middle Eastern Nations
According to the measure of the level of democracy in nations throughout the world published by Freedom House,
a U.S. Government funded non-governmental organization that conducts research and advocacy on democracy, political freedom, and human rights, the Middle Eastern countries with the highest scores are Israel, Tunisia, Turkey, Lebanon, Morocco, Kuwait, and Jordan. The remaining countries of the Middle East are categorized as authoritarian regimes, though some have certain democratic aspects. The lowest scores are held by Saudi Arabia and Yemen.
Freedom House (data from the 2017 report) categorizes Israel and Tunisia as the only “free” countries of the region.
Tunisia is a representative democracy and a republic with a president serving as head of state, prime minister as head of government, a unicameral parliament, and a civil law court system. The number of legalized political parties in Tunisia has grown considerably since the beginning of the democratic reforms. Rare for the Arab world, women hold a significant share of seats in the constituent assembly (between 24% and 31%).
Israel operates under a parliamentary system as a democratic republic with universal suffrage.
It has no official religion, but the definition of the state as “Jewish and democratic” creates a strong connection with Judaism as well as a conflict between state law and religious law. Interaction between the political parties keeps the balance between state and religion. Some organizations and states, however, see the Israeli treatment of Palestinians as a serious blemish on Israel’s democratic system.
Lebanon, Turkey, Kuwait, Morocco, and Jordan are all categorized as “partly free.”
Until 1975, Freedom House considered Lebanon to be one of only two (together with Israel) politically free countries in the Middle East and North Africa region. The country lost this status with the outbreak of the Civil War and has not regained it since. Even though Lebanon, a parliamentary democracy that includes confessionalism (high-ranking offices are reserved for members of specific religious groups to prevent sectarian conflicts), is now rated “partly free,” the United States still considers Lebanon one of the most democratic nations in the Arab world.
Turkey is a parliamentary representative democracy, but recent developments, particularly the efforts to expand the prerogatives of the president, the crackdown on the opposition, and the silencing of media and individuals who criticize the government, have caused serious concerns that it is taking an anti-democratic turn.
Kuwait is a constitutional emirate with a semi-democratic political system. The emir is the head of state and the hybrid political system is divided between an elected parliament and appointed government. Kuwait is among the Middle East’s freest countries in terms of civil liberties and political rights. Morocco is a parliamentary constitutional monarchy. The Prime Minister is the head of government and a multi-party system is growing.
Jordan, which in 2017 was upgraded from “not free” to “partly free,” is a constitutional monarchy where the King holds wide executive and legislative powers.
None of the Middle Eastern countries received the “free” status from Freedom House in its 2016 Freedom of the Press report, which measures specifically the level of freedom and editorial independence enjoyed by the press. Israel, Lebanon, Turkey, and Kuwait were determined to be “partly free” while all the other countries in the region received the “not free” status.
Authoritarianism
Apart from the seven states discussed above, all the remaining Middle Eastern states are currently rated “not free” (including Western Sahara, which is controlled by Morocco). In some cases, what may seem a democratic model does not stand the test of scrutiny. For example, a number of presidential republics embracing Arab socialism, such as Syria and Egypt, regularly hold elections. However, critics assert that these are not full multi-party systems since they do not allow citizens to choose among many different candidates for the presidency. Moreover, the constitution of modern Egypt gives the president a virtual monopoly over the decision-making process, devoting 30 articles (15 percent of the whole constitution) to presidential prerogatives. Another example is Iran, where the Iranian Revolution of 1979 resulted in an electoral system (an Islamic Republic with a constitution), but the system functions as a limited democracy in practice. One of the main problems of Iran’s system is the concentration of power in the hands of the Supreme Leader, who is elected by the Assembly of Experts for life (unless the Assembly decides to remove him, which has never happened). Another is the closed loop in the electoral system: the elected Assembly of Experts elects the Supreme Leader, who appoints the members of the Guardian Council, which in turn vets the candidates for all elections, including those for the Assembly of Experts. However, some elections in Iran, such as those for city councils, satisfy free and democratic election criteria to some extent.
Absolute monarchy is common in the Middle East: Oman, Qatar, Saudi Arabia, and the United Arab Emirates are all absolute monarchies. Authoritarian regimes (not necessarily monarchies) revolving around a powerful individual have long been a defining feature of Middle Eastern politics. For example, in the past, Saddam Hussein of Iraq and Muammar Gaddafi of Libya were among the most influential figures of the region. Today, Bashar al-Assad, whose refusal to resign from the presidency of Syria is one cause of the brutal civil war waged since 2011, serves as a symbol of the authoritarian rejection of democratic change in the region.
Theoretical Considerations
The endurance of authoritarian regimes in the Middle East is notable in comparison to the rest of the world. While such regimes have fallen throughout Eastern Europe or sub-Saharan Africa, for example, they have persisted in the Middle East. At the same time, Middle Eastern history includes significant episodes of conflict between rulers and proponents of change.
Theories on why the Middle East remains essentially undemocratic are diverse. Revisionist theories argue that democracy is incompatible with Middle Eastern values. On the other hand, post-colonial theories propose a number of explanations for the relative absence of liberal democracy in the Middle East, including the long history of imperial rule by the Ottoman Empire, Britain, and France and the contemporary political and military intervention by the United States, all of which have been blamed for preferring authoritarian regimes because such regimes simplify the business environment while enriching the governing elite and the companies of the imperial countries.
Albrecht Schnabel argues that a strong civil society is required to produce leaders and mobilize the public around democratic duties, but for such a civil society to flourish, a democratic environment and process allowing freedom of expression and order is required in the first place. This theory therefore supports the intervention of outside countries, such as the United States, in establishing democracy. Other analysts, however, disagree. Some researchers suggest that independent nongovernmental associations help foster a participatory form of governance. They cite the lack of voluntary associations as a reason for the persistence of authoritarianism in the region. Others believe that the lack of a market-driven economy in many Middle Eastern countries undermines the capacity to build the kind of individual autonomy and power that helps promote democracy. Therefore, the relationship of the state to civil society is one of the most important indicators of the chances of democracy evolving in a particular country. Poverty, inequality, and low literacy rates also compromise people’s commitment to democratic reforms since survival becomes a higher priority.
Human Rights Violations
Nearly all the Middle Eastern states, including those categorized as democratic, violate at least some of what international legal standards define as human rights.
In regard to capital punishment, the countries of the region can be separated into two categories. Tunisia, Algeria, Morocco, and Israel are considered abolitionist in practice. Aside from Israel, all of the above countries maintain the death penalty for serious crimes although no executions have been carried out in a long time. All other countries in the Middle East execute prisoners for crimes. In the de facto autonomous Rojava federation in Syria, formed during the Syrian Civil War, capital punishment has been abolished.
No country in the region (with the sole exception of the Rojava federation) offers specific protections against spousal rape or domestic violence. There is a lack of official protection of rights within the home and a lack of government accountability. Domestic violence is typically covered up and kept within the family as many women in the region feel they cannot discuss their abuse without damaging their own and their family’s reputation and honor.
Women have varying degrees of difficulty moving freely in Middle Eastern countries. Some nations prohibit women from ever traveling alone, while in others women can travel freely but experience a greater risk of sexual harassment or assault than they would in Western countries. Women have the right to drive in all Middle Eastern countries except Saudi Arabia.
All the states in the Middle East have ratified the United Nations Convention on the Rights of the Child (CRC). Following the ratification of the CRC, Middle Eastern countries have enacted or proposed laws to protect children from violence, abuse, neglect, or exploitation. A number of countries have comprehensive laws that bring together legal provisions for protection of the child. However, child labor, violence against girls and women, gender gaps within education, and socioeconomic conditions continue to be identified areas of concern. Both external and internal conflict, ongoing political instability, and the Syrian refugee crisis remain grave dangers for children. The escalating armed conflict in Iraq has placed more children in peril. Human rights organizations document grave violations against children, particularly in conflict-ridden and politically unstable areas, focusing specifically on discrimination issues, sectarian violence, and abuse of women and girls.
Israel, the most democratic state in the Middle East,
faces significant human rights problems regarding institutional discrimination against Arab citizens of Israel (many of whom self-identify as Palestinian), Ethiopian Israelis, and women, and the treatment of refugees and irregular migrants. Other human rights problems include institutional discrimination against non-Orthodox Jews and intermarried families and labor rights abuses against foreign workers. In the last several years, Tunisia, the second most democratic state of the region, has made significant progress by enacting sweeping legislation to protect the rights of many previously vulnerable groups, including women, children, and the disabled. Human rights organizations note that the country is currently at a stage of transition and continue to observe whether the legislation is put into practice.
One group that has not benefited noticeably from the Tunisian turn to democratic reforms is the LGBTQ community.
38.2.2: The Rise of Islamism
The rise of radical Islamism is a result of many complex factors, including Western colonialism in Muslim-dominated regions, state-sponsored aggressive popularization of ultra-orthodox interpretations of Islam, Western and pro-Western Muslim support for Islamist groups during the Cold War, and victories of Islamist groups over pro-Western politicians and factions in the Middle East.
Learning Objective
Connect the rise of Islamism with outside intervention in the Middle East
Key Points
- The concept of Islamism has been debated in both
public and academic contexts. The term can refer to diverse forms of social and
political activism advocating that public and political life should be guided
by Islamic principles, or more specifically to movements that call for full
implementation of sharia. In Western media, the term tends to refer to groups
that aim to establish a sharia-based Islamic state, often with connotations of political extremism and implications of
violent tactics and human rights violations.
-
Islamism is not a united movement. Rather, it takes different forms and spans a wide range of
strategies and tactics. Moderate and reformist Islamists accept and work within the
democratic process. Islamist groups like Hezbollah and Hamas participate in
the democratic and political process and carry out armed attacks. Radical Islamist
groups entirely reject democracy and call for violent/offensive jihad or urge and conduct attacks on a religious basis.
-
Western colonialism of the Muslim world, beginning in the 19th
century, greatly contributed to equating the secular West with the enemy of Islam, thus fueling the development of increasingly radical Islamism. Beginning in
the 1970s, Western and pro-Western governments often supported fledgling
Islamists and Islamist groups that later came to be seen as dangerous enemies.
For Islamists, the primary threat of the West is cultural rather than political
or economic.
-
In the late 20th century, an Islamic revival developed in the
Muslim world. It was manifested in greater religious piety and a growing
adoption of Islamic culture. Two of the most important events that fueled the
resurgence were the Arab oil embargo and subsequent quadrupling of the price of
oil in the mid-1970s and the 1979 Iranian Revolution, which established an
Islamic republic in Iran under Ayatollah Khomeini. Although religious extremism and attacks on civilians and military targets represent
only a small part of the movement, the revival has seen a proliferation of
Islamic extremist groups.
-
The number of militant Islamic movements calling for “an
Islamic state and the end of Western influence” is relatively small.
According to polls taken in 2008 and 2010 by Pew and Gallup, pluralities of the
population in Muslim-majority countries are undecided as to what extent
religion should influence public life,
politics, and the legal system.
-
Saudi Arabia and Qatar have devoted considerable energies to
spreading Salafism and to gaining influence in the countries that benefited from
their financial support. Such developments as the Iranian Revolution and the
Soviet-Afghan War convinced many that the Westernization of the Muslim world
was avoidable and fueled radical Islamism. As a result, groups like
al-Qaeda, the Taliban, and the Islamic State gained popularity and tangible military
and political power across the Middle East and other regions of the world.
Key Terms
- Muslim Brotherhood
-
A transnational Sunni Islamist organization founded in Egypt by Islamic scholar and schoolteacher Hassan al-Banna in 1928. The organization has combined political activism with charity work as its model of functioning, gaining supporters throughout the Arab world and influencing other Islamist groups. As of 2015, it is considered a terrorist organization by the governments of five Arab countries and Russia, but claims to be a peaceful, democratic organization that condemns violence.
- Islamism
-
A term that can refer to diverse forms of social and political activism advocating that public and political life should be guided by Islamic principles, or more specifically to movements that call for full implementation of sharia. It is commonly used interchangeably with the terms political Islam or Islamic fundamentalism.
Its meaning has been debated in both public and academic contexts.
- jihad
-
An Arabic word that literally means striving or struggling, especially with a praiseworthy aim. It can have many shades of meaning in an Islamic context, such as struggle against one’s evil inclinations or efforts toward the moral betterment of society. In classical Islamic law, the term refers to armed struggle against unbelievers, while modernist Islamic scholars generally equate it with defensive warfare. The term has gained additional attention in recent decades through its use by terrorist groups.
- sharia
-
The religious law forming part of the Islamic tradition. It is derived from the religious precepts of Islam, particularly the Quran and the Hadith. In Arabic, the term refers to God’s divine law and is contrasted with fiqh, which refers to its scholarly interpretations. The manner of its application in modern times has been a subject of dispute between Muslim traditionalists and reformists.
- Hamas
-
A Palestinian Sunni-Islamic fundamentalist organization that has been the governing authority of the Gaza Strip since 2007. Whether or not to classify it as a terrorist group is a point of debate in political and academic circles.
- Hezbollah
-
A Shia Islamist militant group and political party based in Lebanon. Its status as a legitimate political party, terrorist group, resistance movement, or some combination thereof is a contentious issue.
- Islamic State
-
A Salafi jihadist extremist militant group led by and mainly composed of Sunni Arabs from Syria and Iraq. In 2014, the group proclaimed itself a caliphate, with religious, political, and military authority over all Muslims worldwide. As of March 2015, it had control over territory occupied by ten million people in Syria and Iraq, and has nominal control over small areas of Libya, Nigeria, and Afghanistan. It also operates or has affiliates in other parts of the world, including North Africa and South Asia.
- Taliban
-
A Sunni Islamic fundamentalist political movement in Afghanistan currently waging war (an insurgency, or jihad) within that country. The group has used terrorism as a specific tactic to further their ideological and political goals.
- Salafism
-
An ultra-conservative reform branch or movement within Sunni Islam that developed in Arabia in the first half of the 18th century against a background of European colonialism. It advocated a return to the traditions of the “devout ancestors” (the salaf).
- al-Qaeda
-
A militant Sunni Islamist multi-national organization founded in 1988 by Osama bin Laden, Abdullah Azzam, and several other Arab volunteers who fought against the Soviet invasion of Afghanistan in the 1980s. It has been widely designated as a terrorist group.
What Is Islamism?
Islamism is a concept whose meaning has been debated in both public and academic contexts. The term can refer to diverse forms of social and political activism advocating that public and political life should be guided by Islamic principles, or more specifically to movements that call for full implementation of sharia. Sharia is the religious law forming part of the Islamic tradition, derived from the religious precepts of Islam, particularly the Quran and the hadith (various reports describing the words, actions, or habits of the Islamic prophet Muhammad). Islamism is commonly used interchangeably with the terms political Islam or Islamic fundamentalism. In Western media, the term tends to refer to groups who aim to establish a sharia-based Islamic state, often with connotations of political extremism and implications of violent tactics and human rights violations.
Different currents of Islamist thought have advocated a revolutionary strategy of Islamizing society through exercise of state power or a reformist strategy of re-Islamizing society through grassroots social and political activism. Islamists may emphasize the implementation of sharia (Islamic law), pan-Islamic political unity and an Islamic state, or selective removal of non-Muslim influences, particularly Western military, economic, political, social, or cultural influences, from the Muslim world.
Islamism is not a united movement, but takes different forms and spans a wide range of strategies and tactics. Moderate and reformist Islamists who accept and work within the democratic process include parties like the Tunisian Ennahda Movement. Jamaat-e-Islami of Pakistan is basically a sociopolitical and democratic vanguard party, but has also gained political influence through military coup d’états. Islamist groups like Hezbollah and Hamas participate in the democratic and political process as well as in armed attacks.
Hezbollah is a Shia Islamist militant group and political party based in Lebanon.
Hezbollah’s status as a legitimate political party, terrorist group, resistance movement, or some combination thereof is a contentious issue. Similarly, Hamas is a Palestinian Sunni-Islamic fundamentalist organization that has been the governing authority of the Gaza Strip since 2007. Whether or not to classify Hamas as a terrorist group is a point of debate in political and academic circles. Radical Islamist groups like al-Qaeda or the Taliban entirely reject democracy and call for violent/offensive jihad or urge and conduct attacks on a religious basis.
Islamism and the West
In the 19th century, European encroachment on the Muslim world came with the retreat of the Ottoman Empire, the French conquest of Algeria (1830), the disappearance of the Moghul Empire in India (1857), and the Russian incursions into the Caucasus and Central Asia. The first Muslim reactions to European encroachment were of rural and working-class, not urban, origin. Charismatic leaders launched the call for jihad and formed tribal coalitions. Sharia was imposed in defiance of local common law to unify tribes. All these movements eventually failed, despite some successes against the colonizing armies.
Under later Western colonialism, nostalgia for the days of successful Islamic empire simmered. This played a major role in the Islamist political ideal of an Islamic state, a state in which Islamic law is preeminent. The Islamist political program is generally accomplished by reshaping the governments of existing Muslim nation-states. Today, however, the means of doing this varies greatly across movements and circumstances. Many Islamist movements, such as the Jamaat-e-Islami and the Muslim Brotherhood, have used the democratic process and focus on votes and coalition-building with other political parties. Radical movements such as the Taliban and al-Qaeda embrace militant Islamist ideology.
Beginning in the 1970s, Western and pro-Western governments often supported fledgling Islamists and Islamist groups that later came to be seen as dangerous enemies. Western governments considered Islamists bulwarks against leftist, communist, and nationalist insurgents thought to be more dangerous, whom the Islamists were correctly seen as opposing. The U.S. spent billions of dollars to aid the
Muslim Afghan enemies of the Soviet Union during the Soviet-Afghan War. Similarly, although Hamas is a strong opponent of Israel’s existence, it traces its origins to institutions and clerics supported by Israel in the 1970s and 1980s. Israel tolerated and supported Islamist movements in Gaza as it perceived them as preferable to the secular and then more powerful al-Fatah. Egyptian pro-Western, anti-Soviet, and pro-Israeli President Anwar Sadat released Islamists from prison and welcomed home exiles in tacit exchange for political support in his struggle against leftists. Sadat was later assassinated, and a formidable insurgency formed in Egypt in the 1990s.
For Islamists, the primary threat of the West is cultural rather than political or economic. Islamists assume that cultural dependency robs one of faith and identity and thus destroys Islam and the Islamic community far more effectively than political rule. Furthermore, the end of the Cold War and of the Soviet occupation of Afghanistan eliminated the common atheist Communist enemy that had united some religious Muslims and the capitalist West.
Islamic Revivalism
In the late 20th century an Islamic revival or Islamic awakening developed in the Muslim world, manifested in greater religious piety and growing adoption of Islamic culture. Two of the most important events that fueled or inspired the resurgence were the Arab oil embargo and subsequent quadrupling of the price of oil in the mid-1970s and the 1979 Iranian Revolution, which established an Islamic republic in Iran under Ayatollah Khomeini. The first created a flow of many billions of dollars from Saudi Arabia to fund Islamic books, scholarships, fellowships, and mosques around the world. The second undermined the assumption that Westernization strengthened Muslim countries and was the irreversible trend of the future.
The revival is a reversal of the Westernization approach common among Arab and Asian governments earlier in the 20th century. Although religious extremism and attacks on civilians and military targets represent only a small part of the movement, the revival has seen a proliferation of Islamic extremist groups in the Middle East and elsewhere in the Muslim world. They have voiced their anger at perceived exploitation as well as materialism, Westernization, democracy, and modernity, which are most commonly associated with accepting Western secular beliefs and values.
Rise of Radical Islamism
The number of militant Islamic movements calling for “an Islamic state and the end of Western influence” is relatively small. According to polls taken in 2008 and 2010 by Pew and Gallup, pluralities of the population in Muslim-majority countries are undecided as to what extent religion (and certain interpretations of it) should influence public life, politics, and the legal system.
Starting in the mid-1970s, the Islamic resurgence was funded by an abundance of money from Saudi Arabian oil exports. The tens of billions of dollars obtained from the recently heightened price of oil funded most of the expenses associated with the resurgence. Throughout the Muslim world, religious institutions for people both young and old received Saudi funding along with training for the preachers and teachers who went on to teach and work at the emerging universities, schools, and mosques. The funding was also used to reward journalists and academics who followed the Saudis’ strict interpretation of Islam known as Salafism (sometimes referred to as Wahhabism, but Salafists consider the term derogatory). In its harshest form, it preaches that Muslims should not only “always oppose” infidels “in every way,” but “hate them for their religion … for Allah’s sake,” that democracy “is responsible for all the horrible wars of the 20th century,” and that Muslims not subscribing to this strict interpretation are infidels. While this effort has by no means converted all or even most Muslims, it has done much to undermine more moderate local interpretations.
The strength of the Islamist movement was manifest in an event that might have seemed sure to turn Muslim public opinion against fundamentalism, but did just the opposite. In 1979, the Grand Mosque in Mecca, Saudi Arabia, was seized by an armed fundamentalist group and held for over a week. Scores were killed, including many pilgrim bystanders in a gross violation of one of the most holy sites in Islam, where arms and violence are strictly forbidden. Instead of prompting a backlash, Saudi Arabia, already very conservative, responded by shoring up its fundamentalist credentials with even more Islamic restrictions. Crackdowns followed on everything, including shopkeepers who did not close for prayer and newspapers that published pictures of women. In other Muslim countries, blame for and wrath against the seizure was directed not against fundamentalists, but against Islamic fundamentalism’s foremost geopolitical enemy – the United States.
Just like Saudi Arabia, Qatar has devoted considerable energies to spreading Salafism and gaining influence in the countries that benefited from its support. Over the past two decades, the country has exerted semi-formal patronage over the international movement of the Muslim Brotherhood. Qatar is known to have backed Islamist factions in Egypt, Libya, Syria, and Yemen. Hamas has also been among the primary beneficiaries of Qatar’s financial support.
The first modern Islamist state was established among the Shia of Iran. In a major shock to the rest of the world, Ayatollah Ruhollah Khomeini led the Iranian Revolution of 1979 in order to overthrow the oil-rich, well-armed, Westernized, and pro-American secular monarchy ruled by Shah Muhammad Reza Pahlavi. Khomeini believed that complete imitation of the Prophet Mohammad and his successors was essential to Islam, that many secular, Westernizing Muslims were actually agents of the West, and that acts such as the plundering of Muslim lands were part of a long-term conspiracy against Islam by Western governments. The Islamic Republic has also maintained its hold on power in Iran in spite of U.S. economic sanctions and has created or assisted like-minded Shia terrorist groups in Iraq, Egypt, Syria, Jordan, and Lebanon (Hezbollah).
In 1979, the Soviet Union deployed its army into Afghanistan, attempting to suppress an Islamic rebellion against an allied Marxist regime in the Afghan Civil War. The conflict, pitting indigenous impoverished Muslims against an anti-religious superpower, galvanized thousands of Muslims around the world to send aid and sometimes to go themselves to fight for their faith. When the Soviet Union abandoned the Marxist Najibullah regime and withdrew from Afghanistan in 1989 (the regime finally fell in 1992), many Muslims saw the victory as the triumph of Islamic faith over superior military power and technology that could be duplicated elsewhere. The veterans of the war returning home to Algeria, Egypt, and other countries were often eager to continue armed jihad.
Another factor in the early 1990s that worked to radicalize the Islamist movement was the Gulf War, which brought several hundred thousand U.S. and allied non-Muslim military personnel to Saudi Arabian soil to put an end to Saddam Hussein’s occupation of Kuwait. Prior to 1990, Saudi Arabia played an important role in restraining the many Islamist groups that received its aid. But when Saddam, the secularist and Ba’athist dictator of neighboring Iraq, attacked Saudi Arabia (his enemy in the war), Western troops came to protect the Saudi monarchy. Islamists accused the Saudi regime of being a puppet of the West.
These accusations resonated with conservative Muslims, and the problem did not go away with Saddam’s defeat, since American troops remained stationed in the kingdom. Saudi Arabia attempted to compensate for its loss of prestige among the conservative groups by repressing those domestic Islamists who attacked it (bin Laden being a prime example) and increasing aid to Islamic groups that did not (including some violent groups), but its pre-war influence on behalf of moderation was greatly reduced. One result was a campaign of attacks on government officials and tourists in Egypt, a bloody civil war in Algeria, and Osama bin Laden’s terror campaign climaxing in the 9/11 attacks.
In 1992, the Democratic Republic of Afghanistan, ruled by communist forces, collapsed, and democratic Islamist elements founded the Islamic State of Afghanistan. In 1996, a more conservative and anti-democratic Islamist movement known as the Taliban rose to power, defeated most of the warlords, and took over roughly 80% of Afghanistan. The Taliban differed from other Islamist movements to the point where they might be more properly described as Islamic fundamentalist or neofundamentalist, interested in spreading “an idealized and systematized version of conservative tribal village customs” under the label of sharia to an entire country. Their ideology was also described as influenced by Wahhabism and the extremist jihadism of their guest Osama bin Laden. The Taliban considered politics to be against sharia and thus did not hold elections. Their hosting of Osama bin Laden led to an American-organized invasion that drove them from power after the 9/11 attacks. The Taliban are still very much alive, fighting a vigorous insurgency with suicide bombings and armed attacks launched against NATO and Afghan government targets.
The Islamic State, formerly known as the Islamic State of Iraq and the Levant, is a Salafi jihadist extremist militant group led by and mainly composed of Sunni Arabs from Syria and Iraq. In 2014, the group proclaimed itself a caliphate, with religious, political, and military authority over all Muslims worldwide. As of March 2015, it had control over territory occupied by ten million people in Syria and Iraq, and nominal control over small areas of Libya, Nigeria, and Afghanistan. ISIL (commonly referred to as ISIS) also operates or has affiliates in other parts of the world, including North Africa and South Asia.
Originating in 1999, ISIL pledged allegiance to al-Qaeda in 2004, participated in the Iraqi insurgency that followed the invasion of Iraq by Western coalition forces in 2003, joined the fight in the Syrian Civil War beginning in 2011, and was expelled from al-Qaeda in early 2014. It gained prominence after it drove Iraqi government forces out of key cities in western Iraq in June 2014. The group is adept at social media, posting Internet videos of beheadings of soldiers, civilians, journalists, and aid workers and is known for its destruction of cultural heritage sites. The United Nations (UN) has held ISIL responsible for human rights abuses and war crimes and Amnesty International has reported ethnic cleansing on a “historic scale” by the group. The group has been designated a terrorist organization by the UN, the European Union (EU) and member states, the United States, India, Indonesia, Turkey, Saudi Arabia, Syria, and other countries.
38.2.3: The Wars in Iraq and Afghanistan
The wars in Afghanistan and Iraq failed to stabilize the political situation in the Middle East and contributed to ongoing civil conflicts, with counterterrorism experts arguing that they created circumstances beneficial to the escalation of radical Islamism.
Learning Objective
Evaluate the consequences of American military efforts in Iraq and Afghanistan
Key Points
-
The U.S. invasion
of Afghanistan occurred after the September 11 attacks in late 2001. U.S.
President George W. Bush demanded that the Taliban hand over Osama bin Laden
and expel al-Qaeda from Afghanistan. The Taliban government refused to extradite him unless the United States provided evidence of his involvement in the 9/11 attacks. The request was dismissed by the United States as a meaningless
delaying tactic and on October 7, 2001, it launched Operation Enduring Freedom
with the United Kingdom. The two were later joined by other forces.
-
Although outgunned and
outnumbered, insurgents from the Taliban and other radical groups have waged
asymmetric warfare with guerrilla raids and ambushes in the countryside, suicide
attacks against urban targets, and turncoat killings against coalition forces.
From
2006, the Taliban made significant gains and showed an increased willingness to
commit atrocities against civilians. Violence sharply escalated from 2007 to
2009.
-
On May 2, 2011, U.S. Navy SEALs killed Osama bin Laden in Abbottabad, Pakistan. A year later,
NATO leaders endorsed an exit strategy for withdrawing their forces. UN-backed peace
talks have since taken place between the Afghan government and the Taliban.
Although there was a
formal end to combat operations, as of 2017 American forces continue to conduct airstrikes
and special operations raids, while Afghan forces are losing ground to Taliban
forces in some regions.
War crimes have been committed by both sides.
-
The Iraq War began on
March 20, 2003, with the United States, joined by the United Kingdom and
several coalition allies, launching a “shock and awe” bombing
campaign. The invasion led to the collapse of the Ba’athist government. President Saddam Hussein
was captured in 2003 and executed by a
military court three years later. However, the power vacuum following Saddam’s
demise and the mismanagement of the occupation led to widespread sectarian
violence and insurgency against U.S.
and coalition forces.
-
The Bush administration
based its rationale for the war principally on the assertion that Iraq
possessed weapons of mass destruction, but no substantial evidence for this claim was found.
President
Barack Obama formally withdrew all combat troops from Iraq by
December 2011, but
the Iraqi insurgency surged in
the aftermath of the U.S. withdrawal. In 2014, ISIS took over the cities of Mosul and
Tikrit and stated it was ready to march on Baghdad. In the summer of 2014, President Obama announced the return of U.S. forces to Iraq in an effort to halt the advance of ISIS forces, render
humanitarian aid to stranded refugees, and stabilize the political situation.
-
The war resulted in
a humanitarian crisis, including child malnutrition, the psychological scarring of Iraqi children, a scarcity of safe drinking water (resulting in a cholera
outbreak), the outflow of half of Iraqi doctors, birth defects caused by the
use of depleted uranium and white phosphorus by the U.S. military, 4.4 million internally displaced persons, and the dramatic decline of the population
of Iraqi Christians. Throughout the entire
war, there have been human rights abuses on all sides of the conflict. Arguably
the most controversial incident was a series of human rights violations against
detainees in the Abu Ghraib prison in Iraq.
Key Terms
- Iraq War
-
A protracted armed conflict that began in 2003 with the invasion of Iraq by a United States-led coalition that toppled the government of Saddam Hussein. The conflict continued for much of the next decade as an insurgency emerged to oppose the occupying forces and the post-invasion Iraqi government.
- War in Afghanistan
-
A war that followed the 2001 United States invasion of Afghanistan, supported initially by the United Kingdom and joined by the rest of NATO in 2003. Its public aims were to dismantle al-Qaeda and deny it a safe base of operations in Afghanistan by removing the Taliban from power.
- al-Qaeda
-
A militant Sunni Islamist multi-national organization founded
in 1988 by Osama bin Laden, Abdullah Azzam, and several other
Arab volunteers who fought against the Soviet invasion of
Afghanistan in the 1980s. It has been widely designated as a terrorist group.
- Islamic State
-
A Salafi jihadist extremist militant group led by and
mainly composed of Sunni Arabs from Syria and Iraq. In 2014, the group
proclaimed itself a caliphate, with religious, political, and military
authority over all Muslims worldwide. As of March 2015, it had control over
territory occupied by ten million people in Syria and Iraq and nominal
control over small areas of Libya, Nigeria, and Afghanistan. It also operates
or has affiliates in other parts of the world, including North Africa and South
Asia.
- Operation Enduring Freedom
-
A code name used to officially describe the War in Afghanistan during the period between October 2001 and December 2014. Continued operations in Afghanistan by the United States’ military forces, both non-combat and combat, now occur under the name Operation Freedom’s Sentinel.
- Taliban
-
A Sunni Islamic fundamentalist political movement in
Afghanistan currently waging war (an insurgency, or jihad) within that
country. The group has used terrorism as a specific tactic to further their
ideological and political goals.
War in Afghanistan
The United States invasion of Afghanistan occurred after the September 11 attacks in late 2001. U.S. President George W. Bush demanded that the Taliban hand over Osama bin Laden and expel al-Qaeda from Afghanistan. The Taliban government refused to extradite him (or others sought by the U.S.) without evidence of his involvement in the 9/11 attacks. The request was dismissed by the U.S. as a meaningless delaying tactic and on October 7, 2001, it launched Operation Enduring Freedom with the United Kingdom. The two were later joined by other forces, including the Afghan Northern Alliance that had been fighting the Taliban in the ongoing civil war since 1996. In December 2001, the United Nations Security Council established the International Security Assistance Force (ISAF) to assist the Afghan interim authorities with securing Kabul. At the Bonn Conference the same month, Hamid Karzai was selected to head the Afghan interim administration, which after a 2002 loya jirga (Pashto for “grand assembly”)
in Kabul became the Afghan transitional administration. In the popular elections of 2004, Karzai was elected president of the country, now named the Islamic Republic of Afghanistan.
NATO became involved in ISAF in 2003 and later that year assumed leadership of its troops from 43 countries. NATO members provided the core of the force. One portion of U.S. forces in Afghanistan operated under NATO command. The rest remained under direct U.S. command. The Taliban was reorganized by its leader Mullah Omar and in 2003, launched an insurgency against the government and ISAF. Although outgunned and outnumbered, insurgents from the Taliban and other radical groups have waged asymmetric warfare with guerrilla raids and ambushes in the countryside, suicide attacks against urban targets, and turncoat killings against coalition forces. The Taliban exploited weaknesses in the Afghan government, among the most corrupt in the world, to reassert influence across rural areas of southern and eastern Afghanistan. In the initial years, there was little fighting but from 2006 the Taliban made significant gains and showed an increased willingness to commit atrocities against civilians. Violence sharply escalated from 2007 to 2009. While ISAF continued to battle the Taliban insurgency, fighting crossed into neighboring northwestern Pakistan.
On May 2, 2011, United States Navy SEALs killed Osama bin Laden in Abbottabad, Pakistan. A year later, NATO leaders endorsed an exit strategy for withdrawing their forces. UN-backed peace talks have since taken place between the Afghan government and the Taliban. In May 2014, the United States announced that its major combat operations would end in December and that it would leave a residual force in the country. In October 2014, British forces handed over the last bases in Helmand to the Afghan military, officially ending their combat operations in the war. In December 2014, NATO formally ended combat operations in Afghanistan and transferred full security responsibility to the Afghan government.
Aftermath and Consequences
Although there was a formal end to combat operations, partially because of improved relations between the United States and the new President
Ashraf Ghani, American forces increased raids against Islamic militants and terrorists, justified by a broad interpretation of protecting American forces. In March 2015, it was announced that the United States would maintain almost ten thousand service members in Afghanistan until at least the end of 2015, a change from planned reductions. In October 2015, the Obama administration announced that U.S. troops would remain in Afghanistan past the original planned withdrawal date of December 31, 2016. As of 2017, American forces continue to conduct airstrikes and special operations raids, while Afghan forces are losing ground to Taliban forces in some regions.
War casualty estimates vary significantly. According to a UN report, the Taliban were responsible for 76% of civilian casualties in Afghanistan in 2009. In 2011, a record of more than three thousand civilians were killed, the fifth successive annual rise. According to a UN report, in 2013 there were nearly three thousand civilian deaths, with 74% blamed on anti-government forces. A report titled Body Count put together by Physicians for Social Responsibility, Physicians for Global Survival, and the Nobel Peace Prize-winning International Physicians for the Prevention of Nuclear War (IPPNW) concluded that 106,000–170,000 civilians have been killed as a result of the fighting in Afghanistan at the hands of all parties to the conflict. According to the Watson Institute for International Studies Costs of War Project, 21,000 civilians have been killed as a result of the war.
An estimated 96% of Afghans have been affected either personally or by the wider consequences of the war. Since 2001, more than 5.7 million former refugees have returned to Afghanistan, but 2.2 million others remained refugees in 2013. In 2013, the UN estimated that 547,550 people were internally displaced, a 25% increase over the 2012 estimates.
From 1996 to 1999, the Taliban controlled 96% of Afghanistan’s poppy fields and made opium its largest source of revenue. Taxes on opium exports became one of the mainstays of Taliban income. By 2000, Afghanistan accounted for an estimated 75% of the world’s opium supply. The Taliban leader Mullah Omar then banned opium cultivation and production dropped. Some observers argue that the ban was issued only to raise opium prices and increase profit from the sale of large existing stockpiles. The trafficking of accumulated stocks continued in 2000 and 2001. Soon after the invasion, opium production increased markedly. By 2005, Afghanistan was producing 90% of the world’s opium, most of which was processed into heroin and sold in Europe and Russia. In 2009, the BBC reported that “UN findings say an opium market worth $65bn funds global terrorism, caters to 15 million addicts, and kills 100,000 people every year.”
War crimes have been committed by both sides and include civilian massacres, bombings of civilian targets, terrorism, use of torture, and the murder of prisoners of war. Additional common crimes include theft, arson, and the destruction of property not warranted by military necessity. The Afghanistan Independent Human Rights Commission (AIHRC) called the Taliban’s terrorism against the Afghan civilian population a war crime. According to Amnesty International, the Taliban commit war crimes by targeting civilians, including killing teachers, abducting aid workers, and burning school buildings. The organization reported that up to 756 civilians were killed in 2006 by bombs, mostly on roads or carried by suicide attackers belonging to the Taliban. NATO has also alleged that the Taliban has used civilians as human shields.
In 2009, the U.S. confirmed that Western military forces in Afghanistan use white phosphorus, condemned by human rights organizations as cruel and inhumane because it causes severe burns, to illuminate targets or as an incendiary to destroy bunkers and enemy equipment. U.S. forces used white phosphorus to screen a retreat in the Battle of Ganjgal when regular smoke munitions were not available. White phosphorus burns on the bodies of civilians wounded in clashes near Bagram were confirmed. The U.S. claims at least 44 instances in which militants have used white phosphorus in weapons or attacks.
Iraq War
The Iraq War began on March 20, 2003, with the United States, joined by the United Kingdom and several coalition allies, launching a “shock and awe” bombing campaign. Iraqi forces were quickly overwhelmed as U.S. forces swept through the country. The invasion led to the collapse of the Ba’athist government (under the rule of the Arab Socialist Ba’ath Party). President Saddam Hussein was captured during Operation Red Dawn in December 2003 and executed by a military court three years later. However, the power vacuum following Saddam’s demise and the mismanagement of the occupation led to widespread sectarian violence between Shias and Sunnis as well as a lengthy insurgency against U.S. and coalition forces. The United States responded with a troop surge in 2007. The winding down of U.S. involvement in Iraq accelerated under President Barack Obama, and the U.S. formally withdrew all combat troops from Iraq by December 2011, but left private security contractors in place to continue the war.
The Bush administration based its rationale for the war principally on the assertion that Iraq possessed weapons of mass destruction (WMDs) and that the Iraqi government posed an immediate threat to the United States and its coalition allies. Select U.S. officials accused Hussein of harboring and supporting al-Qaeda, while others cited the desire to end the repressive dictatorship and bring democracy to the people of Iraq. After the invasion, no substantial evidence was found to verify the initial claims about WMDs. The rationale and misrepresentation of pre-war intelligence faced heavy criticism within the U.S. and internationally.
In the aftermath of the invasion, Iraq held multi-party elections in 2005. Nouri al-Maliki became Prime Minister in 2006 and remained in office until 2014. The al-Maliki government enacted policies that were widely seen as having the effect of alienating the country’s Sunni minority and worsening sectarian tensions.
Aftermath of 2011 Withdrawal
The invasion and occupation led to sectarian violence, which caused widespread displacement among Iraqi civilians. The Iraqi Red Crescent organization estimated the total internal displacement at around 2.3 million in 2008, and as many as 2 million Iraqis left the country. The invasion preserved the autonomy of the Kurdish region, and because the Kurdish region is historically the most democratic area of Iraq, many Iraqi refugees from other territories fled there.
Poverty led many Iraqi women to turn to prostitution to support themselves and their families, attracting sex tourists from across the region.
The Iraqi insurgency surged in the aftermath of the U.S. withdrawal. Terror campaigns involving Iraqi (primarily radical Sunni) anti-government insurgent groups and various factions within Iraq escalated. Events following the U.S. withdrawal showed patterns that raised concerns the surging violence might slide into another civil war. By mid-2014, the country was in chaos, with a new government yet to be formed following national elections and the insurgency reaching new heights. In early June 2014, ISIL (ISIS) took over the cities of Mosul and Tikrit and stated it was ready to march on Baghdad, while Iraqi Kurdish forces took control of key military installations in the major oil city of Kirkuk. Prime Minister Nouri al-Maliki asked his parliament to declare a state of emergency that would give him increased powers, but the lawmakers refused.
In the summer of 2014 President Obama announced the return of U.S. forces to Iraq, but only in the form of aerial support, in an effort to halt the advance of ISIS forces, render humanitarian aid to stranded refugees, and stabilize the political situation. In August 2014, Prime Minister Nouri al-Maliki succumbed to pressure at home and abroad to step down. This paved the way for Haidar al-Abadi to take over. In what was claimed to be revenge for the aerial bombing ordered by President Obama, ISIS, which by this time had changed its name to the Islamic State, beheaded an American journalist, James Foley, who had been kidnapped two years earlier. Despite U.S. bombings and breakthroughs on the political front, Iraq remained in chaos with the Islamic State consolidating its gains and sectarian violence continuing unabated.
Consequences
Various scientific surveys of Iraqi deaths resulting from the first four years of the Iraq War estimated that between 151,000 and over one million Iraqis died as a result of the conflict during this time. A later study, published in 2011, estimated that approximately 500,000 Iraqis had died as a result of the conflict since the invasion. For troops in the U.S.-led multinational coalition, the death toll is carefully tracked and updated daily. A total of 4,491 U.S. service members were killed in Iraq between 2003 and 2014. Regarding the Iraqis, however, information on both military and civilian casualties is both less precise and less consistent.
The war also resulted in a humanitarian crisis. The child malnutrition rate rose to 28%. Some 60–70% of Iraqi children were reported to be suffering from psychological problems in 2007. Most Iraqis had no access to safe drinking water. A cholera outbreak in northern Iraq was thought to be the result of poor water quality. As many as half of Iraqi doctors left the country between 2003 and 2006. The use of depleted uranium and white phosphorus by the U.S. military has been blamed for birth defects and cancers in the Iraqi city of Fallujah. By the end of 2015, according to the Office of the United Nations High Commissioner for Refugees, 4.4 million Iraqis had been internally displaced. The population of Iraqi Christians dropped dramatically during the war, from 1.5 million in 2003 to perhaps only 275,000 in 2016. The Foreign Policy Association reported that “the most perplexing component of the Iraq refugee crisis” was that the U.S. has accepted only around 84,000 Iraqi refugees.
Throughout the entire Iraq war, there have been human rights abuses on all sides of the conflict. Arguably the most controversial incident
was a series of human rights violations against detainees in the Abu Ghraib prison in Iraq. These violations included physical and sexual abuse, torture, rape, sodomy, and murder. The abuses came to widespread public attention with the publication of photographs of the abuse by CBS News in April 2004. The incidents received widespread condemnation both within the United States and abroad, although the soldiers received support from some conservative media within the United States. The administration of George W. Bush attempted to portray the abuses as isolated incidents, not indicative of general U.S. policy. This was contradicted by humanitarian organizations such as the Red Cross, Amnesty International, and Human Rights Watch. After multiple investigations, these organizations stated that the abuses at Abu Ghraib were not isolated incidents, but were part of a wider pattern of torture and brutal treatment at American overseas detention centers, including those in Iraq, Afghanistan, and Guantanamo Bay. Several scholars stated that the abuses constituted state-sanctioned crimes.
Iraq War and Terrorism
Although explicitly stating that Iraq had “nothing” to do with 9/11, President George W. Bush consistently referred to the Iraq war as “the central front in the war on terror” and argued that if the United States pulled out of Iraq, “terrorists will follow us here.” While other proponents of the war regularly echoed this assertion, as the conflict dragged on, members of the U.S. Congress, the U.S. public, and even U.S. troops questioned the connection between Iraq and the fight against anti-U.S. terrorism. In particular, a consensus developed among intelligence experts that the Iraq war actually increased terrorism. Counterterrorism expert Rohan Gunaratna frequently referred to the invasion of Iraq as a “fatal mistake.”
London’s International Institute for Strategic Studies concluded in 2004 that the occupation of Iraq had become “a potent global recruitment pretext” for radical Muslim fighters and that the invasion “galvanized” al-Qaeda and “perversely inspired insurgent violence.” The U.S. National Intelligence Council concluded in a 2005 report that the war in Iraq had become a breeding ground for a new generation of terrorists. David Low, the national intelligence officer for transnational threats, indicated that the report concluded that the war in Iraq provided terrorists with “a training ground, a recruitment ground, the opportunity for enhancing technical skills … There is even, under the best scenario, over time, the likelihood that some of the jihadists who are not killed there will, in a sense, go home, wherever home is, and will therefore disperse to various other countries.” The Council’s chairman Robert Hutchings noted, “At the moment, Iraq is a magnet for international terrorist activity.” The 2006 National Intelligence Estimate, which outlined the considered judgment of all 16 U.S. intelligence agencies, concluded that “the Iraq conflict has become the ’cause célèbre’ for jihadists, breeding a deep resentment of U.S. involvement in the Muslim world and cultivating supporters for the global jihadist movement.”
38.2.4: The Arab Spring
The Arab Spring was a revolutionary wave of both violent and non-violent protests in North Africa and the Middle East that began in 2010, triggered by authoritarianism, human rights violations, political corruption, economic decline, unemployment, extreme poverty, and some demographic structural factors. This resulted in limited pro-democratic changes, with Tunisia emerging as the only democratic country in the Arab world.
Learning Objective
Discuss whether the Arab Spring was a success
Key Points
- The Arab Spring was a revolutionary wave of both violent and non-violent demonstrations, protests, riots, coups, and civil wars in North Africa and the Middle East that began in 2010 with the Tunisian Revolution. Analysts have pointed to a number of complex factors behind the movement, including authoritarianism, human rights violations, political corruption, economic decline, unemployment, extreme poverty, and demographic structural factors such as a large percentage of educated but dissatisfied youth.
- In the wake of the Arab Spring protests, a considerable amount of attention has
been focused on the role of social media and digital technologies in allowing
citizens to circumvent state-operated media channels. The influence of social
media on political activism during the Arab Spring has been much
debated. While social networks were a critical instrument
for rebels in the countries with high Internet usage rates, mainstream electronic media devices and word of mouth remained important means of communication.
- Prior to the Arab Spring, social unrest had been escalating in the Arab world. Tunisia experienced a series of conflicts. In Egypt, the labor movement had been strong for years and provided an important venue for
organizing protests and collective action. In
Algeria, discontent had been building for years over a number of social issues. In Western
Sahara, a group of young Sahrawis demonstrated against labor discrimination, unemployment, looting of resources, and human
rights abuses.
- The catalyst for the escalation of protests was the self-immolation of Tunisian
Mohamed Bouazizi. Unable to find work and selling fruit at a roadside stand,
Bouazizi had his wares confiscated by a municipal inspector in December 2010.
An hour later he doused himself with gasoline and set himself afire. His death
on January 4, 2011, brought together various groups dissatisfied with the
existing system, including many unemployed individuals, political and human rights
activists, labor, trade unionists, students, professors, lawyers, and others, to
begin the Tunisian Revolution.
- The demonstrations, triggered directly by
Bouazizi’s death, brought to the forefront such issues as high unemployment,
food inflation, corruption, lack of political freedoms, and poor living
conditions. With the success of the protests in Tunisia, a wave of unrest was sparked in Algeria, Jordan, Egypt, and Yemen and then spread to other countries. By the end of
February 2012, rulers had been forced from power and protests occurred across the region.
Several leaders announced their intentions to
step down at the end of their current terms.
- In the aftermath of the Arab Spring in various countries, there was a wave of
violence and instability known as the Arab Winter. It was characterized by extensive civil wars, general regional
instability, economic and demographic decline, and religious wars between Sunni
and Shia Muslims. Although the long-term effects of the Arab Spring have yet to be seen, its short-term consequences varied greatly across the Middle East and North Africa. As of
2017, Tunisia is considered the only full democracy in the Arab World.
Key Terms
- Arab Winter
-
A term for the rise of authoritarianism and Islamic extremism evolving in the aftermath of the Arab Spring protests in Arab, Kurdish, and Berber countries. The process is characterized by the emergence of multiple regional civil wars, mounting regional instability, economic and demographic decline of Arab countries, and ethno-religious sectarian strife. According to a study by the American University of Beirut, as of summer 2014, it resulted in nearly a quarter of a million deaths and millions of refugees.
- Egyptian Revolution
-
Social unrest that began in January 2011 and took place across all of Egypt. It consisted of demonstrations, marches, occupations of plazas, non-violent civil resistance, acts of civil disobedience, and strikes. Millions of protesters from a range of socioeconomic and religious backgrounds demanded the overthrow of Egyptian President Hosni Mubarak.
- Arab Spring
-
A revolutionary wave of both violent and non-violent demonstrations, protests, riots, coups, and civil wars in North Africa and the Middle East that began in December 2010 in Tunisia with the Tunisian Revolution.
- Tunisian Revolution
-
An intensive campaign of civil resistance that took place in Tunisia and led to the ousting of longtime president Zine El Abidine Ben Ali in January 2011. It eventually led to a thorough democratization of the country and to free and democratic elections.
The Arab Spring was a revolutionary wave of both violent and non-violent demonstrations, protests, riots, coups, and civil wars in North Africa and the Middle East that began in 2010 with the Tunisian Revolution. The effects of the Tunisian Revolution spread strongly to five other countries: Libya, Egypt, Yemen, Syria, and Iraq, where either the regime was toppled or major uprisings and social violence occurred, including civil wars or insurgencies. Sustained street demonstrations took place in Morocco, Bahrain, Algeria, Iran, Lebanon, Jordan, Kuwait, Oman, and Sudan. Minor protests occurred in Djibouti, Mauritania, the Palestinian territories, Saudi Arabia, Somalia, and the Moroccan-controlled Western Sahara. A major slogan of the demonstrators in the Arab world was “the people want to bring down the regime.”
Analysts have pointed to a number of complex factors behind the movement, including issues such as authoritarianism, human rights violations, political corruption (at the time, explicitly revealed to the public by Wikileaks diplomatic cables), economic decline, unemployment, extreme poverty, and a number of demographic structural factors, such as a large percentage of educated but dissatisfied youth. Catalysts for the revolts in all Northern African and Persian Gulf countries included the concentration of wealth in the hands of autocrats in power for decades, insufficient transparency of its redistribution, corruption, and especially the refusal of the youth to accept the status quo. Some protesters looked to the Turkish model, with contested but peaceful elections, fast-growing but liberal economy, and secular constitution but Islamist government, as an ideal.
Role of Media
In the wake of the Arab Spring protests, a considerable amount of attention has been focused on the role of social media and digital technologies in allowing citizens to circumvent state-operated media channels. The influence of social media on political activism during the Arab Spring has been much debated. Protests took place both in states with a very high level of Internet usage (such as Bahrain with 88% of its population online in 2011) and in states with one of the lowest Internet use rates (Yemen and Libya).
Facebook, Twitter, and other major social media played a key role in the movement of Egyptian and Tunisian activists in particular. Nine out of ten Egyptians and Tunisians responded in a poll that they used Facebook to organize protests and spread awareness. In Egypt, young men referred to themselves as “the Facebook generation.” Furthermore, 28% of Egyptians and 29% of Tunisians from the same poll said that blocking Facebook greatly hindered and/or disrupted communication.
During the protests, people created pages on Facebook to raise awareness about alleged crimes against humanity, such as police brutality in the Egyptian Revolution.
The use of social media platforms more than doubled in Arab countries during the protests, with the exception of Libya.
Social networks were not the only instrument for rebels to coordinate their efforts and communicate. In the countries with the lowest Internet penetration and a limited role for social networks, such as Yemen and Libya, mainstream electronic media devices such as cell phones, emails, and video clips were very important in casting light on the situation in each country and spreading word of the protests to the outside world. In Egypt, and in Cairo particularly, mosques were one of the main platforms used to coordinate protest actions and raise awareness among the masses. Jared Keller, a journalist for The Atlantic, noted differences between the Arab countries where protests emerged. For example, in Egypt, most activists and protesters used Facebook (among other social media) to organize, while in Iran, “good old-fashioned word of mouth” was the main means of communication.
Social Unrest in the Arab World
Tunisia experienced a series of conflicts during the three years leading up to the Arab Spring, most notably in the mining area of Gafsa in 2008 where protests continued for many months. These included rallies, sit-ins, and strikes. In Egypt, the labor movement had been strong for years, with more than 3,000 labor actions since 2004, and provided an important venue for organizing protests and collective action. One important demonstration was an attempted workers’ strike in 2008 at the state-run textile factories of al-Mahalla al-Kubra, outside Cairo. The idea for this type of demonstration spread throughout the country, promoted by computer-literate working class youths and their supporters among middle-class college students. A Facebook page to promote the strike attracted tens of thousands of followers and provided the platform for sustained political action in pursuit of the “long revolution.” The government mobilized to break the strike through infiltration and riot police, and while the regime was somewhat successful in forestalling a strike, dissidents formed a committee of youths and labor activists that became one of the major forces calling for the anti-Mubarak demonstration.
In Algeria, discontent had been building for years over a number of issues. Some estimates suggest that during 2010 there were as many as 9,700 protests throughout the country. Many events focused on issues such as education and health care, while others cited rampant corruption. In Western Sahara, the Gdeim Izik protest camp was erected 12 kilometres (7.5 mi) south-east of El Aaiún by a group of young Sahrawis (an ethnic group living in the western part of the Sahara desert)
in 2010. Their intention was to demonstrate against labor discrimination, unemployment, looting of resources, and human rights abuses. The camp contained between 12,000 and 20,000 inhabitants, but it was destroyed and its inhabitants evicted by Moroccan security forces. The security forces faced strong opposition from some young Sahrawi civilians and rioting soon spread to El Aaiún and other towns within the territory, resulting in an unknown number of injuries and deaths. Violence against Sahrawis in the aftermath of the protests was cited as a reason for renewed protests months later, after the start of the Arab Spring.
Catalyst of Arab Spring
The catalyst for the escalation of protests was the self-immolation of Tunisian Mohamed Bouazizi. Unable to find work and selling fruit at a roadside stand, Bouazizi had his wares confiscated by a municipal inspector in December 2010. An hour later he doused himself with gasoline and set himself afire. His death on January 4, 2011, brought together various groups dissatisfied with the existing system, including many unemployed individuals, political and human rights activists, labor, trade unionists, students, professors, lawyers, and others, to begin the Tunisian Revolution.
The demonstrations, triggered directly by Bouazizi’s death, brought to the forefront such issues as high unemployment, food inflation, corruption, lack of political freedoms, and poor living conditions.
With the success of the protests in Tunisia, a wave of unrest was sparked in Algeria, Jordan, Egypt, and Yemen and then spread to other countries. By the end of February 2012, rulers had been forced from power in Tunisia, Egypt, Libya, and Yemen. Civil uprisings had erupted in Bahrain and Syria. Major protests had broken out in Algeria, Iraq, Jordan, Kuwait, Morocco, and Sudan. Minor protests had occurred in Mauritania, Oman, Saudi Arabia, Djibouti, Western Sahara, and Palestine. Tunisian President Zine El Abidine Ben Ali fled to Saudi Arabia in January 2011. In Egypt, President Hosni Mubarak resigned in February 2011 after 18 days of massive protests, ending his 30-year presidency. The Libyan leader Muammar Gaddafi was overthrown in August 2011 and killed in October 2011. Yemeni President Ali Abdullah Saleh signed a power-transfer deal under which a presidential election was held, resulting in his successor Abd al-Rab Mansur al-Hadi formally replacing him as the president of Yemen in February 2012, in exchange for immunity from prosecution. Weapons and Tuareg (a large Berber ethnic confederation)
fighters returning from the Libyan Civil War stoked a simmering conflict in Mali, which has been described as a fallout from the Arab Spring in North Africa.
During this period of regional unrest, several leaders announced their intentions to step down at the end of their current terms. Sudanese President Omar al-Bashir announced that he would not seek re-election in 2015, as did Iraqi Prime Minister Nouri al-Maliki, whose term was ending in 2014, although there were violent demonstrations demanding his immediate resignation in 2011. Protests in Jordan also caused the sacking of four successive governments by King Abdullah. The popular unrest in Kuwait resulted in the resignation of the cabinet of Prime Minister Nasser Mohammed Al-Ahmed Al-Sabah.
Aftermath: Arab Winter
In the aftermath of the Arab Spring in various countries, there was a wave of violence and instability commonly known as the Arab Winter or Islamist Winter. The Arab Winter was characterized by extensive civil wars, general regional instability, economic and demographic decline, and religious wars between Sunni and Shia Muslims. According to a study by the American University of Beirut, as of summer 2014, the Arab Winter resulted in nearly a quarter of a million deaths and millions of refugees.
Although the long-term effects of the Arab Spring are not yet evident, its short-term consequences varied greatly across the Middle East and North Africa. In Tunisia and Egypt, where the existing regimes were ousted and replaced through a process of free and fair elections, the revolutions were considered short-term successes. This interpretation is, however, undermined by the subsequent political turmoil that emerged, particularly in Egypt. Elsewhere, most notably in the monarchies of Morocco and the Persian Gulf, existing regimes co-opted the Arab Spring movement and managed to maintain order without significant social change. In other countries, particularly Syria and Libya, the apparent result of Arab Spring protests was a complete collapse of social order. As of 2017,
Tunisia is considered the only full democracy in the Arab World, despite many challenges the country still faces. Since the end of the revolution, Egypt has gone through political turmoil, with
democratically elected President Mohamed Morsi attempting to pass an extremist Islamist constitution that would grant him unparalleled powers, only to be ousted in 2013 by a military coup. Despite some democratic gestures (e.g., a secular constitution and elections), international organizations currently consider Egypt to be an authoritarian regime.
Social scientists have endeavored to understand the circumstances that led to this variation in outcome. A variety of causal factors have been highlighted, most of which hinge on the relationship between the strength of the state and the strength of civil society. Countries with stronger civil society networks in various forms saw more successful reforms during the Arab Spring. One of the primary influences highlighted in the analysis of the Arab Spring is the relative strength or weakness of a society’s formal and informal institutions prior to the revolts. When the Arab Spring began, Tunisia had an established infrastructure and a lower level of petty corruption than did other states such as Libya. This meant that following the overthrow of the existing regime, there was less work to be done in reforming Tunisian institutions than elsewhere and consequently it was less difficult to transition to and consolidate a democratic system of government.
Also crucial was the degree of state censorship over print, broadcast, and social media in different countries. Television coverage by channels like Al Jazeera and BBC News provided worldwide exposure and prevented mass violence by the Egyptian government in Tahrir Square. In other countries, such as Libya, Bahrain, and Syria, such international press coverage was not present to the same degree and the governments were able to act more freely in suppressing the protests. Strong authoritarian regimes with high degrees of censorship in their national broadcast media were able to block communication and prevent the domestic spread of information necessary for successful protests. Morocco is a case in point, as its broadcast media at the time of the revolts was owned and operated almost exclusively by political elites with ties to the monarchy. Countries with greater access to social media, such as Tunisia and Egypt, proved more effective in mobilizing large groups of people and appear to have been more successful overall than those with greater state control over media.
The support, even if tacit, of national military forces during protests has also been correlated to the success of the Arab Spring movement in different countries. In Egypt and Tunisia, the military actively participated in ousting the incumbent regime and in facilitating the transition to democratic elections. Countries like Saudi Arabia, on the other hand, exhibited a strong mobilization of military force against protesters, effectively ending the revolts in their territories. Others, including Libya and Syria, failed to stop the protests entirely and instead ended up in civil war. The support of the military in Arab Spring protests has also been linked to the degree of ethnic homogeneity in different societies. In Saudi Arabia and Syria, where the ruling elite was closely linked with ethnic or religious subdivisions of society, the military sided with the existing regime and took on the ostensible role of protector to minority populations.
Scholars Quinn Mecham and Tarek Osman have identified some trends in political Islam resulting from the Arab Spring. These include the repression of the Muslim Brotherhood (a transnational organization that claims to be pro-democratic, although many Middle Eastern commentators question its commitment to democracy); the rise of Islamist state-building, most prominently in Syria, Iraq, Libya, and Yemen, as Islamists have found it easier than competing non-Islamists to fill the void of state failure by securing external funding, weaponry, and fighters; increasing sectarianism (primarily Sunni-Shia); increased caution and political learning in countries such as Algeria and Jordan, where Islamists have chosen not to lead a major challenge against their governments; and, in countries where Islamists did choose to lead a major challenge and did not succeed in transforming society (particularly Egypt), a turn away from asking what went wrong in favor of antagonism, anger, and a thirst for revenge.
38.2.5: The Syrian Civil War
The Syrian Civil War is an ongoing armed conflict
that grew out of discontent with the authoritarian government of President Bashar al-Assad and escalated into a brutal war fought by a complex network of factions, including the Syrian government and its allies, many fractured anti-government rebel groups, and radical Islamist organizations that aim to establish an Islamic state.
Learning Objective
Outline the events that led to the Syrian Civil War
Key Points
- Since 1949, Syria has been under authoritarian rule, with
numerous coups shifting the center of power. In 1971, Hafez al-Assad
declared himself President. Immediately after his death in 2000, the Parliament
amended the constitution, reducing the mandatory minimum age of the President
from 40 to 34, which allowed his son, Bashar al-Assad, to become legally
eligible for nomination by the ruling Ba’ath party. Bashar inspired hopes for
reform and a Damascus Spring of intense political and social debate took place
from mid-2000 to mid-2001. However, the movement was suppressed.
- Following the Arab Spring trends across the Arab world, in March
2011, protesters marched in the capital of Damascus, demanding democratic
reforms and the release of political prisoners. Security forces retaliated by
opening fire on the protesters. Initially, the protesters demanded mostly
democratic reforms, but by April, the emphasis in demonstration slogans
began shifting toward a call to overthrow the Assad regime. Protests spread
widely to other cities.
- In July 2011, seven defecting Syrian Armed Forces officers formed
the Free Syrian Army (FSA), aiming to overthrow the Assad government with
united opposition forces. In August, a coalition of anti-government groups
called the Syrian National Council was formed. The council, based in Turkey,
attempted to organize the opposition. The opposition, however, including the
FSA, remained a fractious collection of different groups. By September 2011,
Syrian rebels were engaged in an active insurgency campaign in many parts of
Syria.
- The war is currently being fought by a complex network of
factions: the Syrian government and its allies, a loose alliance of Sunni
Arab rebel groups (including the Free Syrian Army), the majority-Kurdish
Syrian Democratic Forces, Salafi jihadist groups (including al-Nusra
Front) who sometimes cooperate with the Sunni rebel groups, and the Islamic
State of Iraq and the Levant (ISIL). Hezbollah, Iran, Afghanistan,
Pakistan, and Russia support the pro-Assad forces while a number of countries,
including many NATO members, participate in the Combined Joint Task Force,
chiefly to fight ISIL and support rebel groups perceived as moderate and
friendly to Western nations.
- Estimates of deaths in the Syrian Civil War, per opposition
activist groups, vary between 321,358 and 470,000. The use of chemical weapons attacks
has been confirmed by UN investigations. Formerly rare infectious diseases have
spread in rebel-held areas, brought on by poor sanitation and deteriorating
living conditions. The violence has caused millions to flee their homes. As of
March 2017, the UNHCR reports 6.3 million Syrians are internally displaced
and nearly five million registered as Syrian refugees (outside of
Syria).
- According to various human rights organizations and the United
Nations, human rights violations have been committed by both the government and
the rebels, with the “vast majority of the abuses having been committed by
the Syrian government.” The war has also led to the massive destruction of
Syrian heritage sites.
Key Terms
- Damascus Spring
-
A period of intense political and social debate in Syria, which started after the death of President Hafiz al-Assad in June 2000 and continued to some degree until fall 2001, when most of its activities were suppressed by the government.
- Free Syrian Army
-
A faction in the Syrian Civil War founded in July 2011 by officers who defected from the Syrian Armed Forces, with the stated goal to bring down the government of Bashar al-Assad.
- shabiha
-
Mostly Alawite groups of armed militia in support of the Ba’ath Party government of Syria, led by the Al-Assad family. However, in the Aleppo Governorate, they were composed entirely of the local pro-Assad Sunni tribes. The Syrian opposition stated that they are a tool of the government for cracking down on dissent. Syrian Observatory for Human Rights has stated that some of the groups are mercenaries.
- Syrian Civil War
-
An armed conflict taking place in Syria. The unrest in Syria, part of a wider wave of 2011 Arab Spring protests, grew out of discontent with the authoritarian government of President Bashar al-Assad and escalated to an armed conflict after protests calling for his removal turned violent in response to the crackdown on dissent. The war is being fought by several factions: the Syrian government and its allies, a loose alliance of Sunni Arab rebel groups (including the Free Syrian Army), the majority-Kurdish Syrian Democratic Forces, Salafi jihadist groups (including al-Nusra Front) who sometimes cooperate with the Sunni rebel groups, and the Islamic State of Iraq and the Levant (ISIL).
- al-Nusra Front
-
A Sunni Islamist terrorist organization fighting against Syrian Government forces in the Syrian Civil War with the aim of establishing an Islamist state in the country. It was the official Syrian branch of al-Qaeda until July 2016, when it ostensibly split, now also operating in neighboring Lebanon. In early 2015, the group became one of the major components of the powerful jihadist joint operations room named the Army of Conquest, which took over large territories in Northwestern Syria.
- Arab Spring
-
A revolutionary
wave of both violent and non-violent demonstrations, protests, riots,
coups, and civil wars in North Africa and the Middle
East that began in December 2010 in Tunisia with the Tunisian
Revolution.
Assad Regime
Syria became an independent republic in 1946, although democratic rule ended with a coup in 1949, followed by two more coups the same year. A popular uprising against military rule in 1954 saw the army transfer power to civilians. The secular Ba’ath Syrian Regional Branch government came to power through a successful coup d’état in 1963. For the next several years, Syria went through additional coups and changes in leadership. In 1971, Hafez al-Assad declared himself President, a position that he held until his death in 2000. Immediately following al-Assad’s death, the Parliament amended the constitution, reducing the mandatory minimum age of the President from 40 to 34, which allowed his son, Bashar al-Assad, to become legally eligible for nomination by the ruling Ba’ath party. In 2000, Bashar al-Assad was elected President by referendum, in which he ran unopposed, garnering an alleged 97.29% of the vote according to Syrian government statistics.
Bashar, who speaks French and English and has a British-born wife, inspired hopes for reform, and a Damascus Spring of intense political and social debate took place from mid-2000 to mid-2001. The period was characterized by the emergence of numerous political forums or salons where groups of like-minded people met in private houses to debate political and social issues. The phenomenon of salons spread rapidly in Damascus and to a lesser extent in other cities. The movement ended with the arrest and imprisonment of ten leading activists who had called for democratic elections and a campaign of civil disobedience.
Syria’s Social Profile
Syrian Arabs, together with some 600,000 Palestinian Arabs, make up roughly 74 percent of the population. Religiously, 74 percent of Syrians are Sunni Muslims (including Sufis), 13 percent are Shia (including 8-12 percent Alawites), 3 percent are Druze, and the remaining 10 percent are Christians. Not all of the Sunnis are Arabs. The Assad family is mixed. Bashar is married to a Sunni with whom he has several children, but is affiliated with the minority Alawite sect. The majority of Syria’s Christians belong to the Eastern Christian churches. Syrian Kurds, an ethnic minority making up approximately 9 percent of the population, have been angered by ethnic discrimination and the denial of their cultural and linguistic rights as well as the frequent denial of citizenship rights.
Socioeconomic inequality increased significantly after free market policies were initiated by Hafez al-Assad in his later years, and it accelerated after Bashar al-Assad came to power. With an emphasis on the service sector, these policies benefited a minority of the nation’s population, mostly people who had connections with the government and members of the Sunni merchant class of Damascus and Aleppo. This coincided with the most intense drought ever recorded in Syria, which lasted from 2007 to 2010 and resulted in widespread crop failure, an increase in food prices, and a mass migration of farming families to urban centers. The country also faced particularly high youth unemployment rates.
The human rights situation in Syria has long been the subject of harsh critique from global organizations. The rights of free expression, association, and assembly were strictly controlled. The country was under emergency rule from 1963 until 2011 and public gatherings of more than five people were banned. Security forces had sweeping powers of arrest and detention. Authorities have harassed and imprisoned human rights activists and other critics of the government, who were often detained indefinitely and tortured while under prison-like conditions. Women and ethnic minorities faced discrimination in the public sector. Thousands of Syrian Kurds were denied citizenship in 1962 and their descendants were labeled “foreigners.”
Breakout of Civil War
Following the Arab Spring trends across the Arab world, in March 2011 protesters marched in the capital of Damascus, demanding democratic reforms and the release of political prisoners. Security forces retaliated by opening fire on the protesters. The protest was triggered by the arrest of a boy and his friends for writing in graffiti “The people want the fall of the government” in the city of Daraa. The protesters burned down a Ba’ath Party headquarters and other buildings. The ensuing clashes claimed the lives of seven police officers and 15 protesters. Several days later in a speech, President Bashar al-Assad blamed “foreign conspirators” pushing “Israeli propaganda” for the protests.
Initially, the protesters demanded mostly democratic reforms, release of political prisoners, an increase in freedoms, abolition of the emergency law, and an end to corruption. Already by April, however, the emphasis in demonstration slogans shifted slowly towards a call to overthrow the Assad regime. Protests spread widely to other cities. By the end of May, 1,000 civilians and 150 soldiers and policemen had been killed and thousands detained. Among the arrested were many students, liberal activists, and human rights advocates.
In July 2011, seven defecting Syrian Armed Forces officers formed the Free Syrian Army (FSA), originally composed of defected Syrian military officers and soldiers aiming to overthrow the Assad government with united opposition forces. In August, a coalition of anti-government groups called the Syrian National Council was formed. The council, based in Turkey, attempted to organize the opposition. The opposition, however, including the FSA, remained a fractious collection of political groups, longtime exiles, grassroots organizers, and armed militants divided along ideological, ethnic and/or sectarian lines. Throughout August, government forces stormed major urban centers and outlying regions, and continued to attack protests. By September 2011, Syrian rebels were engaged in an active insurgency campaign in many parts of Syria. By October, the FSA started to receive active support from the Turkish government, which allowed the rebel army to operate its command and headquarters from the country’s southern Hatay Province close to the Syrian border and its field command from inside Syria.
Fighting Factions
The war is currently being fought by a complex network of factions. A number of sources have emphasized that as of at least late 2015/early 2016, the Syrian government was dependent on a mix of volunteers and militias rather than the Syrian Armed Forces. The Syrian National Defense Force was formed out of pro-government militias. They act in an infantry role, directly fighting against rebels on the ground and running counter-insurgency operations in coordination with the army, who provides them with logistical and artillery support. The shabiha are unofficial pro-government militias drawn largely from Syria’s Alawite minority group. Since the uprising, the Syrian government has been accused of using shabiha to break up protests and enforce laws in restive neighborhoods.
The Christian militias in Syria and northern Iraq are largely made up of ethnic Assyrians, Syriac-Arameans, and Armenians. Sensing that they depend on the largely secular government, the militias of Syrian Christians fight both on the Syrian government’s side and with Kurdish forces. The Eastern Aramaic-speaking Assyrians in north eastern Syria and northern Iraq have formed various militias (including the Assyrian Defense Force, Dwekh Nawsha, and Sootoro) to defend their ancient towns, villages, and farmsteads from ISIS. They often but not always fight in conjunction with Kurdish and Armenian groups.
In February 2013, former secretary general of Hezbollah Sheikh Subhi al-Tufayli confirmed that Hezbollah was fighting for the Syrian Army. Iran, on the other hand, continues to officially deny the presence of its combat troops in Syria, maintaining that it provides military advice to Assad’s forces in their fight against terrorist groups. Since the civil uprising phase of the Syrian Civil War, Iran has provided the Syrian government with financial, technical, and military support, including training and some combat troops. The number of Afghans fighting in Syria on behalf of the Syrian government has been estimated at 10,000-12,000 while the number of Pakistanis is not known. In September 2015, Russia’s Federation Council unanimously granted the request by President of Russia Vladimir Putin to permit the use of the Russian Armed Forces in Syria.
The armed opposition consists of various groups that were either formed during the course of the conflict or joined from abroad. In the northwest of the country, the main opposition faction is the al-Qaeda-affiliated al-Nusra Front allied with numerous other smaller Islamist groups, some of which operate under the umbrella of the Free Syrian Army (FSA). The designation of the FSA by the West as a moderate opposition faction has allowed it to receive sophisticated weaponry and other military support from the U.S., Turkey, and some Gulf countries that effectively increases the total fighting capacity of the Islamist rebels. In the east, the Islamic State of Iraq and the Levant (ISIL, known more commonly as ISIS), a jihadist militant group originating from Iraq, made rapid military gains in both Syria and Iraq. ISIL eventually came into conflict with other rebels, especially with al-Nusra, leaders of which did not want to pledge allegiance to ISIL. As of 2015, Qatar, Saudi Arabia, and Turkey were openly backing the Army of Conquest, an umbrella rebel group that reportedly includes an al-Qaeda linked al-Nusra Front and another Salafi coalition known as Ahrar ash-Sham and Faylaq Al-Sham, a coalition of Muslim Brotherhood-linked rebel groups. Also, in the northeast local Kurdish militias have taken up arms and fought with both rebel Islamist factions and government loyalists.
The Syrian Democratic Forces (SDF) are an alliance of Arab, Assyrian, Armenian, Kurdish, and Turkmen militias fighting for a democratic and federalist Syria. They are opposed to the Assad government, but have directed most of their efforts against the al-Nusra Front and ISIL.
A number of countries, including many NATO members, participate in the Combined Joint Task Force, chiefly to fight ISIL and support rebel groups perceived as moderate and friendly to Western nations such as the Free Syrian Army. Those who have conducted airstrikes in Syria include the United States, Australia, Bahrain, Canada, France, Jordan, The Netherlands, Saudi Arabia, Turkey, the United Arab Emirates, and the United Kingdom. Some members are involved in the conflict beyond combating ISIL. Turkey has been accused of fighting against Kurdish forces in Syria and Iraq, including intelligence collaborations with ISIL in some cases.
Consequences
Estimates of deaths in the Syrian Civil War, per opposition activist groups, vary between 321,358 and 470,000. In April 2016, the United Nations and Arab League Envoy to Syria put out an estimate of 400,000 deaths.
A UN fact-finding mission was requested by member states to investigate 16 alleged chemical weapons attacks. Seven of them have been investigated (nine were dropped for lack of “sufficient or credible information”) and in four cases the UN inspectors confirmed the use of sarin gas. The reports, however, did not blame any party for using chemical weapons. Many, including the United States and the European Union, have accused the Syrian government of conducting several chemical attacks, the most serious being the 2013 Ghouta attacks. Before this incident, UN human rights investigator Carla del Ponte, who had been investigating sarin gas use in Syria, accused the government’s opposition of using sarin gas in 2013.
Formerly rare infectious diseases have spread in rebel-held areas, brought on by poor sanitation and deteriorating living conditions. The diseases have primarily affected children and include measles, typhoid, hepatitis, dysentery, tuberculosis, diphtheria, whooping cough, and the disfiguring skin disease leishmaniasis. Of particular concern is the contagious and crippling poliomyelitis.
The violence in Syria caused millions to flee their homes. In March 2015, Al-Jazeera estimated 10.9 million Syrians, or almost half the population, were displaced. As of March 2017, the UNHCR reports 6.3 million Syrians are internally displaced and nearly five million registered as Syrian refugees (outside of Syria). Most Syrian refugees have sought safety in Lebanon, Jordan, Turkey, and Iraq.
In 2017, the United Nations (UN) identified 13.5 million Syrians requiring humanitarian assistance (in 2014, the population of Syria was about 18 million).
According to various human rights organizations and the United Nations, human rights violations have been committed by both the government and the rebels, with the “vast majority of the abuses having been committed by the Syrian government.” The UN commission investigating human rights abuses in Syria confirms at least nine intentional mass killings in the period 2012 to mid-July 2013, identifying the perpetrator as the Syrian government and its supporters in eight cases and the opposition in one. By late 2013, the Euro-Mediterranean Human Rights Network reported that approximately 6,000 women had been raped since the start of the conflict, with figures likely to be much higher given that most cases go unreported. According to some international lawyers, Syrian government officials could face war crimes charges in the light of a huge cache of evidence smuggled out of the country showing the systematic killing of about 11,000 detainees. Most of the victims were young men and many corpses were emaciated, bloodstained, and bore signs of torture. Experts note this evidence is more detailed and on a far larger scale than anything else that has yet emerged from the crisis. In 2014, Human Rights Watch released a report detailing government forces razing to the ground seven anti-government districts in the cities of Damascus and Hama. Witnesses spoke of explosives and bulldozers used to knock down buildings. Satellite imagery was provided as part of the report and the destruction was characterized as collective punishment against residents of rebel-held areas. The UN also reported that armed forces on both sides of the conflict blocked access for humanitarian convoys, confiscated food, cut off water supplies, and targeted farmers working their fields. The UN has also accused ISIS forces of using public executions, amputations, and lashings in a campaign to instill fear. Enforced disappearances and arbitrary detentions have also been a feature since the Syrian uprising began. In February 2017, Amnesty International published a report which accused the Syrian government of murdering an estimated 13,000 persons, mostly civilians, at the Saydnaya military prison.
As the conflict has expanded across Syria, many cities have been engulfed in a wave of crime as fighting caused the disintegration of much of the civilian state and many police stations stopped functioning. Rates of theft increased, with criminals looting houses and stores. Criminal networks have been used by both the government and the opposition during the conflict. Facing international sanctions, the Syrian government relied on criminal organizations to smuggle goods and money in and out of the country. The economic downturn caused by the conflict and sanctions also led to lower wages for shabiha members. In response, some shabiha members began stealing civilian properties and engaging in kidnappings. Rebel forces sometimes rely on criminal networks to obtain weapons and supplies. Black market weapon prices in Syria’s neighboring countries have significantly increased since the start of the conflict. To generate funds to purchase arms, some rebel groups have turned towards extortion, theft, and kidnapping.
As of March 2015, the war had affected 290 heritage sites, severely damaged 104, and completely destroyed 24. All six UNESCO World Heritage Sites in Syria have been damaged. Destruction of antiquities has been caused by shelling, army entrenchment, and looting at various tells, museums, and monuments. A group called Syrian Archaeological Heritage Under Threat is monitoring and recording the destruction in an attempt to create a list of heritage sites damaged during the war and to gain global support for the protection and preservation of Syrian archaeology and architecture. In 2014 and 2015, following the rise of ISIL, several sites in Syria were destroyed by the group as part of a deliberate destruction of cultural heritage sites.
38.2.6: The Iranian Nuclear Deal
The Iran nuclear deal is an international agreement on the limits and international control imposed on the nuclear program of Iran. It was reached in 2015 after years of negotiations between Iran, the P5+1, and the European Union.
Learning Objective
Explain the arguments for and against the nuclear deal between the U.S. and Iran
Key Points
- The nuclear program of Iran has included several research sites, two uranium mines,
a research reactor, and uranium processing facilities that include three known
uranium enrichment plants. In 1970, Iran ratified the Nuclear Non-Proliferation
Treaty (NPT), making its nuclear program subject to International Atomic Energy
Agency (IAEA) verification. The program was launched in the 1950s with the help
of the United States as part of the Atoms for Peace program.
- The participation
of the United States and Western European governments in Iran’s nuclear program
continued until the 1979 Iranian Revolution that toppled the Shah of Iran.
Following the 1979 Revolution, most of the international nuclear cooperation
with Iran was cut off.
In the 2000s, the revelation of Iran’s clandestine uranium enrichment program
raised concerns that it might be intended for non-peaceful uses. While since 2003 the United
States has alleged that Iran has a program to develop nuclear weapons, Iran has
maintained that its nuclear program is aimed only at generating electricity.
- Formal negotiations toward the Joint Comprehensive Plan of Action on Iran’s nuclear program began with the adoption of the Joint Plan of Action, an interim agreement signed between Iran and the P5+1 countries in November 2013. For the next twenty months, Iran and the P5+1 countries engaged in negotiations, and in April 2015 agreed on an Iran nuclear deal framework for the final agreement. In July 2015, Iran and the P5+1 agreed on the plan.
- Under the agreement, Iran agreed to eliminate its stockpile of medium-enriched uranium, cut its stockpile of low-enriched uranium, and reduce by about two-thirds the number of its gas centrifuges. For the next 15 years, Iran will only enrich uranium up to 3.67%. Iran also agreed not to build any new heavy-water facilities for the same period of time. Uranium-enrichment activities will be limited to a single facility. Other facilities will be converted to avoid proliferation risks. To monitor and verify Iran’s compliance with the agreement, the IAEA will have regular access to all Iranian nuclear facilities.
- More than 90 countries endorsed the agreement as did many
international organizations, including the UN and NATO. The most notable critic
of the agreement is the state of Israel. Nuclear experts and watchdogs agreed that the agreement was a positive development. An
intense public debate in the United States took place during the congressional
review period, with various groups lobbying both opposition and support for the
agreement.
- With the prospective lifting of some
sanctions, the agreement is expected to have a significant impact on both the
economy of Iran and global markets. The energy sector is particularly
important. The agreement will boost Iran’s scientific cooperation with Western powers and has already improved diplomatic relations in some cases. However, Iran and the U.S. have both been accused of violating the agreement, and its future under the Trump administration is uncertain.
Key Terms
- International Atomic Energy Agency
-
An international organization that seeks to promote the peaceful use of nuclear energy and inhibit its use for any military purpose, including nuclear weapons. It was established as an autonomous organization in 1957. Although established independently of the United Nations through its own international treaty, it reports to both the United Nations General Assembly and Security Council.
- Joint Plan of Action
-
A pact signed between Iran and the P5+1 countries in Geneva, Switzerland in 2013. It consisted of a short-term freeze of portions of Iran’s nuclear program in exchange for decreased economic sanctions on Iran as the countries worked towards a long-term agreement. It represented the first formal agreement between the United States and Iran in 34 years. Implementation of the agreement began January 20, 2014.
- Iran Sanctions Act
-
A 1996 act of Congress that imposed economic sanctions on firms doing business with Iran (and originally also with Libya, but the act does not apply to Libya since 2006). The act allows the president to waive sanctions on a case-by-case basis, although this waiver is subject to renewal every six months. Despite the restrictions on American investment in Iran, other provisions apply to all foreign investors, and many Iranian expatriates based in the U.S. continue to make substantial investments in Iran.
On December 1, 2016, the Senate voted 99-0 in favor of extending the sanctions a further ten years.
- Joint Comprehensive Plan of Action
-
An international agreement, known commonly as the Iran deal or Iran nuclear deal, on the nuclear program of Iran reached in Vienna in July 2015 between Iran, the P5+1 (the five permanent members of the United Nations Security Council—China, France, Russia, United Kingdom, United States—plus Germany), and the European Union.
- P5+1
-
A group of six world powers that joined together in diplomatic efforts with Iran with regard to its nuclear program. The group consists of the UN Security Council’s five permanent members and Germany.
- Nuclear Non-Proliferation Treaty
-
An international treaty whose objective is to prevent the spread of nuclear weapons and weapons technology, promote cooperation in the peaceful uses of nuclear energy, and further the goal of achieving nuclear disarmament and general and complete disarmament. Opened for signature in 1968, the treaty entered into force in 1970. As of August 2016, 191 states have adhered to the treaty, although North Korea announced its withdrawal in 2003.
Iran’s Nuclear Program
The
nuclear program of Iran has included several research sites, two uranium mines,
a research reactor, and uranium processing facilities that include three known
uranium enrichment plants. In 1970, Iran ratified the Nuclear Non-Proliferation
Treaty (NPT), making its nuclear program subject to International Atomic Energy
Agency (IAEA) verification. The program was launched in the 1950s with the help
of the United States as part of the Atoms for Peace program. The participation
of the United States and Western European governments in Iran’s nuclear program
continued until the 1979 Iranian Revolution that toppled the Shah of Iran.
Following the 1979 Revolution, most of the international nuclear cooperation
with Iran was cut off. In 1981, Iranian officials concluded that the country’s
nuclear development should continue. Negotiations took place with France in the
late 1980s and with Argentina in the early 1990s, and agreements were reached.
In the 1990s, Russia formed a joint research organization with Iran, providing
Iran with Russian nuclear experts and technical information.
In
the 2000s, the revelation of Iran’s clandestine uranium enrichment program
raised concerns that it might be intended for non-peaceful uses. The IAEA
launched an investigation in 2003 after an Iranian dissident group revealed
undeclared nuclear activities carried out by Iran. While since 2003 the United
States has alleged that Iran has a program to develop nuclear weapons, Iran has
maintained that its nuclear program is aimed only at generating electricity.
The United States’s position is that “a nuclear-armed Iran is not
acceptable,” and the United Kingdom, France, and Germany have also attempted
to negotiate a cessation of nuclear enrichment activities by Iran.
In
2006, American and European representatives noted that Iran has enough
unenriched uranium hexafluoride gas to make ten atomic bombs, adding that it
was “time for the Security Council to act.” In 2006, because of
Iran’s noncompliance with its NPT obligations, the United Nations Security
Council demanded that Iran suspend its enrichment programs. In 2007, the United
States National Intelligence Estimate (NIE) stated that Iran halted an alleged
active nuclear weapons program in fall 2003. In 2011, the IAEA reported
credible evidence that Iran had been conducting experiments aimed at designing
a nuclear bomb until 2003 and that research may have continued on a smaller
scale after that time.
Negotiations
In
March 2013, the United States began a series of secret bilateral talks with
Iranian officials in Oman and in June, Hassan Rouhani was elected president of
Iran. Rouhani has been described as “more moderate, pragmatic and willing
to negotiate” than his predecessor, the anti-Western hardliner Mahmoud
Ahmadinejad. However, in a 2006 nuclear negotiation with European powers,
Rouhani said that Iran had used the negotiations to dupe the Europeans, saying
that during the negotiations, Iran managed to master the conversion of uranium
yellowcake (the conversion of yellowcake is an important step in the nuclear
fuel process). In August 2013, three days after his inauguration, Rouhani
called for a resumption of serious negotiations with the P5+1 (the UN Security
Council’s five permanent members, China, France, Russia, the United
Kingdom, and the United States, plus Germany) on the Iranian nuclear program.
In September 2013, Obama and Rouhani had a telephone conversation, the first
high-level contact between U.S. and Iranian leaders since 1979. U.S. Secretary
of State John Kerry also had a meeting with Iranian foreign minister Mohammad
Javad Zarif, signaling that the two countries were open to cooperation.
After
several rounds of negotiations, in November 2013, the Joint Plan of Action
(JPA), an interim agreement on the Iranian nuclear program, was signed between
Iran and the P5+1 countries in Geneva, Switzerland. It consisted of a
short-term freeze of portions of Iran’s nuclear program in exchange for
decreased economic sanctions on Iran as the countries work towards a long-term
agreement. The IAEA began “more intrusive and frequent inspections”
under this interim agreement, formally activated in January 2014. The IAEA
issued a report stating that Iran was adhering to the terms of the interim
agreement, including stopping enrichment of uranium to 20 percent, beginning
the dilution process (to reduce half of the stockpile of 20 percent enriched
uranium to 3.5 percent), and halting work on the Arak heavy-water reactor. A major
focus of the negotiations was limitations on Iran’s key nuclear facilities.
Joint Comprehensive Plan of Action
The
final agreement between the P5+1+EU and Iran on the Joint Comprehensive Plan of
Action (JCPOA) is the culmination of 20 months of “arduous”
negotiations. It followed the JPA and an Iran nuclear deal framework was
reached in April 2015. Under this framework, Iran agreed tentatively to accept
restrictions on its nuclear program, all of which would last for at least a
decade and some longer, and to submit to an increased intensity of
international inspections. The negotiations were extended several times until
the final JCPOA was finally reached on July 14, 2015.
The
final agreement’s complexity shows the impact of a public letter written by a
bipartisan group of 19 U.S. diplomats, experts, and others in June 2015, when
negotiations were still ongoing. That letter outlined concerns about
several provisions in the then-unfinished agreement and called for a number of
improvements to strengthen the prospective agreement and win support.
Major provisions of the final accord include:
- Iran’s current stockpile of low-enriched uranium will be reduced
by 98 percent, from 10,000 kg to 300 kg. This reduction will be
maintained for 15 years. For the same 15-year period, Iran will be
limited to enriching uranium to 3.67%, a percentage sufficient for civilian
nuclear power and research, but not for building a nuclear weapon.
- For ten years, Iran will place over two-thirds of its centrifuges
in storage, with only 5,060 allowed to enrich uranium, an enrichment
capacity limited to the Natanz plant.
- Iran will not build any new uranium-enrichment facilities for
15 years.
- Iran may continue research and development work on enrichment, but
that work will take place only at the Natanz facility and include certain
limitations for the first eight years.
- Iran, with cooperation from the “Working Group” (the
P5+1 and possibly other countries), will modernize and rebuild the Arak heavy
water research reactor based on an agreed design to support its peaceful
nuclear research and production needs and purposes, but in such a way as to
minimize the production of plutonium and prevent production of weapons-grade
plutonium.
- Iran’s Fordow facility will stop enriching uranium and researching
uranium enrichment for at least 15 years and the facility will be
converted into a nuclear physics and technology center.
- Iran will implement an Additional Protocol agreement, which will
continue in perpetuity for as long as Iran remains a party to the Nuclear Non-Proliferation
Treaty (NPT). The signing of the Additional Protocol represents a continuation
of the monitoring and verification provisions “long after the
comprehensive agreement between the P5+1 and Iran is implemented.”
- A comprehensive inspections regime will be implemented to
monitor and confirm that Iran is complying with its obligations and is not
diverting any fissile material.
Following the issuance of an IAEA report verifying implementation by Iran of the
nuclear-related measures, the UN sanctions against Iran and some EU
sanctions will terminate and some will be suspended. Once sanctions are lifted,
Iran will recover approximately $100 billion of its assets (U.S. Treasury
Department estimate) frozen in overseas banks.
Response
More than 90 countries endorsed the agreement as did many
international organizations, including the UN and NATO. The most notable critic
of the agreement is the state of Israel. Prime Minister Benjamin Netanyahu
said, “Israel is not bound by this deal with Iran, because Iran continues
to seek our destruction, we will always defend ourselves.” Netanyahu
called the deal a “capitulation” and “a bad mistake of historic
proportions.” Most of Israel’s other political figures, including the
opposition, were similarly critical of the agreement. The two countries
maintain extremely hostile relations, with some Iranian leaders calling for the
destruction of Israel.
Following
the unveiling of the agreement, “a general consensus quickly emerged”
among nuclear experts and watchdogs that the agreement “is as close to a
best-case situation as reality would allow.” In August 2015, 75 arms
control and nuclear nonproliferation experts signed a statement endorsing the
deal as “a net-plus for international nuclear nonproliferation
efforts” that exceeds the historical standards for arms control
agreements.
An intense public debate in the United States took place during the congressional review period, with various groups lobbying for and against the agreement. Many Iranian Americans, even those who fled repression in Iran and oppose its government, welcomed the JCPOA as a step forward. U.S. pro-Israel groups were divided on the JCPOA. Various other groups ran ad campaigns for or
against the agreement. For example, the New York-based Iran Project, a
nonprofit led by former high-level U.S. diplomats and funded by the Rockefeller
Brothers Fund, along with the United Nations Association of the United States,
supports the agreement. In July 2015, a bipartisan open letter endorsing the
Iran agreement was signed by more than 100 former U.S. ambassadors and
high-ranking State Department officials. A separate public letter to Congress
in support of the agreement from five former U.S. ambassadors to Israel from
administrations of both parties and three former Under Secretaries of State was
also released in July 2015. Another public letter to Congress urging approval
of the agreement was signed by a bipartisan group of more than 60 “national-security leaders,” including politicians, retired military
officers, and diplomats. In August 2015, 29 prominent U.S. scientists, mostly
physicists, published an open letter endorsing the agreement. An open letter
endorsing the agreement was also signed by 36 retired military generals and
admirals. However, this letter was answered by a letter signed by more than 200
retired generals and admirals opposing the deal.
Republican leaders vowed to attempt to kill
the agreement as soon as it was released, even before classified sections were
made available to Congress. According to the Washington Post,
“most congressional Republicans remained deeply skeptical, some openly
scornful, of the prospect of relieving economic sanctions while leaving any
Iranian uranium-enrichment capability intact.” Senate Majority Leader
Mitch McConnell, Republican of Kentucky, said the deal “appears to fall
well short of the goal we all thought was trying to be achieved, which was that
Iran would not be a nuclear state.” A New York Times news
analysis stated that Republican opposition to the agreement “seems born of
genuine distaste for the deal’s details, inherent distrust of President Obama,
intense loyalty to Israel and an expansive view of the role that sanctions have
played beyond preventing Iran’s nuclear abilities.” The Washington
Post identified twelve issues related to the agreement on which the
two sides disagreed, including the efficacy of inspections at undeclared sites;
the effectiveness of the snapback sanctions; the significance of limits on
enrichment; the significance of IAEA side agreements; the effectiveness of
inspections of military sites; the consequences of walking away from an
agreement; and the effects of lifting sanctions.
One
area of disagreement between supporters and opponents of the JCPOA is the
consequences of walking away from an agreement and whether renegotiation of the
agreement is a realistic option. According to an Associated Press report, the
classified assessment of the United States Intelligence Community on the
agreement concludes that because Iran will be required by the agreement to
provide international inspectors with an “unprecedented volume of information
about nearly every aspect of its existing nuclear program,” Iran’s ability
to conceal a covert weapons program will be diminished.
Impact
With the prospective lifting of some
sanctions, the agreement is expected to have a significant impact on both the
economy of Iran and global markets. The energy sector is particularly
important, with Iran having nearly 10 percent of global oil reserves and 18
percent of natural gas reserves. Millions of barrels of Iranian oil may come
onto global markets, lowering the price of crude oil. The economic impact of a
partial lifting of sanctions extends beyond the energy sector. The New
York Times reported that “consumer-oriented companies, in
particular, could find opportunity in this country with 81 million
consumers,” many of whom are young and prefer Western products. Iran is
“considered a strong emerging market play” by investment and trading
firms.
In July 2015, Richard Stone wrote in the
journal Science that if the agreement is fully implemented,
“Iran can expect a rapid expansion of scientific cooperation with Western
powers. As its nuclear facilities are repurposed, scientists from Iran and
abroad will team up in areas such as nuclear fusion, astrophysics, and
radioisotopes for cancer therapy.”
In August 2015, the British embassy in Tehran reopened, almost four years after it was closed when protesters attacked it in 2011.
Hours before the official announcement of the activation of the JCPOA in January 2016, Iran released four imprisoned Iranian Americans. A fifth American left Iran in a separate arrangement.
After
the adoption of the JCPOA, the United States imposed several new non-nuclear
sanctions against Iran, some of which have been condemned by Iran as a possible
violation of the deal. According to Seyed Mohammad Marandi, professor at the
University of Tehran, the general consensus in Iran while the negotiations were
taking place was that the United States would move towards increasing sanctions
on non-nuclear areas. He said that these post-JCPOA sanctions could
“severely damage the chances for the Joint Comprehensive Plan of Action
bearing fruit.”
In
March 2016, the Islamic Revolutionary Guard Corps (IRGC), described by English-speaking media as a branch of Iran’s Armed Forces, conducted
ballistic missile tests as part of its military drills, with one of the missiles
carrying the inscription, “Israel should be wiped off the Earth.”
Israel called on Western powers to punish Iran for the tests, which U.S. officials said did not violate the nuclear deal but might violate a United Nations Security Council resolution. Iranian Foreign Minister Mohammad Javad
Zarif insisted that the tests were not in violation of the UNSC resolution. On
March 17, the U.S. Treasury Department sanctioned Iranian and British companies
for involvement in the Iranian ballistic missile program.
Future?
In
November 2016, Deutsche Welle, citing a source from the IAEA, reported that
“Iran has violated the terms of its nuclear deal.” In December 2016,
the U.S. Senate voted to renew the Iran Sanctions Act (ISA) for another decade.
The Obama Administration and outside experts said the extension would have no
practical effect and risked antagonizing Iran. Iran’s Supreme Leader Ayatollah
Khamenei, President Rouhani, and Iran’s Foreign Ministry spokesman said that
the extension of sanctions would be a breach of the nuclear deal. Some Iranian
officials said that Iran might ramp up uranium enrichment in response.
In January 2017, representatives from Iran, the P5+1, and the EU gathered at Vienna’s Palais Coburg hotel to address Iran’s complaint about the U.S. congressional bill. The future of the nuclear agreement with Iran is uncertain under the administration of President Trump.
38.3: East Asia in the 21st Century
38.3.1: The Rising Economies of East Asia
East Asia is home to some of the world’s most prosperous economies, while Southeast Asia has seen the growth of some of the world’s fastest-growing emerging economies; favorable political and legal environments for industry and commerce, abundant natural resources, and adaptable labor are considered the main factors of this success.
Learning Objective
Explain how East Asian economies have been increasing their share of the global economy.
Key Points
-
East Asian countries’ various reforms resulted in “economic
miracles,” making East Asia home to some of the world’s largest and most
prosperous economies, including Mainland China, Hong Kong, Macau, Taiwan,
Japan, and South Korea. Major growth factors have ranged from favorable
political and legal environments for industry and commerce, through abundant
natural resources, to plentiful supplies of relatively low-cost, skilled, and
adaptable labor. The region’s economic success has led the World Bank to dub it
an East Asian Renaissance.
-
The economy of Japan is the third-largest in the world by
nominal GDP, the fourth-largest by purchasing power parity (PPP), and the
world’s second largest developed economy. Japan is the world’s third largest
automobile manufacturing country, has the largest electronics goods industry,
and is often ranked among the world’s most innovative countries. The Japanese
economy faces considerable challenges posed by a dramatically declining
population.
-
China’s socialist market economy is the world’s second largest
economy by nominal GDP and the world’s largest economy by PPP according to the
IMF. China is a global hub for manufacturing and the largest manufacturing
economy in the world as well as the largest exporter of goods in the world.
China’s unequal transportation system—combined with important differences in
the availability of natural and human resources and in industrial
infrastructure—has produced significant variations in the regional economies.
More recently, the government has struggled to contain the social strife and environmental
damage related to the economy’s rapid transformation.
-
In accordance with the One Country, Two Systems policy, the
economies of the former British colony of Hong Kong and Portuguese colony of
Macau are separate from the rest of China and each other. Both Hong
Kong and Macau are free to conduct and engage in economic
negotiations with foreign countries as well as participate as full members in
various international economic organizations.
-
The Four Asian Tigers are the economies of Hong Kong, Singapore,
South Korea, and Taiwan, which underwent rapid industrialization and maintained
exceptionally high growth rates between the early 1960s (mid-1950s for Hong
Kong) and 1990s. By the 21st century, all four had developed into advanced and
high-income economies, specializing in areas of competitive advantage. Export
policies have been the de facto reason for the rise of the Four Asian Tiger
economies, although the approach taken has differed among the four nations.
-
The term Tiger Cub Economies collectively refers to the economies
of Indonesia, Malaysia, the Philippines, Thailand, and Vietnam. Four countries
are included in HSBC’s list of top 50 economies in 2050, while Vietnam,
Indonesia, and the Philippines are included in Goldman Sachs’s Next Eleven list
of economies because of their rapid growth and large population. Out of these,
Vietnam has been projected to become possibly the fastest-growing of the
world’s emerging economies by 2020. The so-called “bamboo network” –
a network of overseas Chinese businesses operating in these markets – has been
critical to the countries’ economic growth.
Key Terms
- G7
-
A group consisting of Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States. These countries are the seven major advanced economies as reported by the International Monetary Fund and represent more than 64% of the net global wealth ($263 trillion).
- socialist market economy
-
The economic model employed by the People’s Republic of China. It is based on the dominance of the state-owned sector and an open-market economy and has its origins in the Chinese economic reforms introduced under Deng Xiaoping. The ideological rationale is that China is in the primary stage of socialism, an early stage within the socialist mode of production and therefore has to adapt capitalist techniques to thrive. Despite this, the system has widely been cited as a form of state capitalism.
- Four Asian Tigers
-
A collective name used to refer to the economies of Hong Kong, Singapore, South Korea, and Taiwan, which underwent rapid industrialization and maintained exceptionally high growth rates (in excess of 7 percent a year) between the early 1960s (mid-1950s for Hong Kong) and 1990s.
- Tiger Cub Economies
-
A collective term used to
refer to the economies of Indonesia, Malaysia, the Philippines, Thailand and Vietnam, the five dominant countries in Southeast Asia. They are so named because they follow the same export-driven model of economic development pursued by the Four Asian Tigers.
- bamboo network
-
A term used to conceptualize connections between certain businesses operated by overseas Chinese in Southeast Asia. It links the overseas Chinese community of Southeast Asia (Malaysia, Indonesia, Thailand, Vietnam, the Philippines, and Singapore) with the economies of Greater China (mainland China, Hong Kong, Macau, and Taiwan). Overseas Chinese companies have a prominent role in the private sector of Southeast Asia and are usually managed as family businesses with a centralized bureaucracy.
- One Country, Two Systems
-
A constitutional principle formulated by Deng Xiaoping, the Paramount Leader of the People’s Republic of China (PRC), for the reunification of China during the early 1980s. He suggested that there would be only one China, but distinct Chinese regions such as Hong Kong and Macau could retain their own capitalist economic and political systems, while the rest of China uses the socialist system.
East Asian Renaissance
The economy of East Asia is one of the most successful regional economies of the world. The changes that turned the area into an economic power began with the Meiji Restoration in the late 19th century, when Japan rapidly transformed into the only industrial power outside Europe and the United States. Japan’s early industrial economy reached its height during World War II, and its eventual defeat in the war slowed down economic development for a relatively short period of time. Japan’s economy recovered by the 1950s, and by the 1980s the country was the world’s second largest economy.
Other East Asian countries followed with their own reforms and resulting “economic miracles” and today, East Asia is home of some of the world’s largest and most prosperous economies, including Mainland China, Hong Kong, Macau, Taiwan, Japan, and South Korea. Major growth factors have ranged from favorable political and legal environments for industry and commerce, through abundant natural resources, to plentiful supplies of relatively low-cost, skilled, and adaptable labor.
Local populations have rapidly adjusted to the requirements of new technologies and scientific discoveries while also demonstrating exceptional work ethics.
The region’s economic success has led the World Bank to dub it an East Asian Renaissance.
Although technically not seen as part of the East Asian Renaissance, India, associated more closely with the South Asian region, has become an equally thriving and critical Asian member of the global economy in the last several decades. For more information on India’s economic power, see the “India’s Growing Economy” module.
Japan
In the three decades of economic development following 1960, Japan kept defense spending low in favor of economic growth, allowing for a period of rapid expansion referred to as the Japanese post-war economic miracle. With average growth rates of 10% in the 1960s, 5% in the 1970s, and 4% in the 1980s, Japan was able to establish and maintain itself as the world’s second largest economy from 1978 until 2010, when it was surpassed by the People’s Republic of China.
The economy of Japan is the third-largest in the world by nominal GDP, the fourth-largest by purchasing power parity (PPP), and the world’s second largest developed economy. Japan is a member of the G7. Due to a volatile currency exchange rate, Japan’s GDP as measured in dollars fluctuates widely. Accounting for these fluctuations, Japan is estimated to have a GDP per capita of around $38,490.
Japan is the world’s third largest automobile manufacturing country, has the largest electronics goods industry, and is often ranked among the world’s most innovative countries leading several measures of global patent filings. Facing increasing competition from China and South Korea, manufacturing in Japan today focuses primarily on high-tech and precision goods, such as optical instruments, hybrid vehicles, and robotics. Japan is the world’s largest creditor nation. It generally runs an annual trade surplus and has a considerable net international investment surplus. In 2015, 54 of the Fortune Global 500 companies were based in Japan.
The Japanese economy faces considerable challenges posed by a dramatically declining population. Statistics showed an official decline for the first time in 2015, while projections suggest that it will continue to fall from 127 million down to below 100 million by the middle of the 21st century.
A mountainous, volcanic island country, Japan has inadequate natural resources to support its growing economy and large population, so it exports goods in which it has a comparative advantage, such as engineering-oriented, research-and-development-led industrial products, in exchange for imports of raw materials and petroleum. Japan is among the top three importers of agricultural products in the world, after the European Union and the United States, in terms of the total volume needed to cover its domestic agricultural consumption.
China
China’s socialist market economy is the world’s second largest economy by nominal GDP and the world’s largest economy by PPP according to the IMF, although China’s National Bureau of Statistics rejects this claim. Until 2015, China was the world’s fastest-growing major economy, with growth rates averaging 10% over 30 years. Owing to the historical and political circumstances of China’s developing economy, the public sector accounts for a bigger share of the national economy than the burgeoning private sector.
China is a global hub for manufacturing and is the largest manufacturing economy in the world as well as the largest exporter of goods in the world. It is also the world’s fastest growing consumer market and second largest importer of goods in the world. It is a net importer of services and the largest trading nation in the world, playing the most important role in international trade. However,
Western media have criticized China for unfair trade practices, including artificial currency devaluation, intellectual property theft, protectionism, and local favoritism due to one-party oligopoly by the Communist Party of China and its socialist market economy.
China’s unequal transportation system—combined with important differences in the availability of natural and human resources and in industrial infrastructure—has produced significant variations in the regional economies of China. Economic development has generally been more rapid in coastal provinces than in the interior, and there are large disparities in per capita income between regions. The three wealthiest regions are along the southeast coast. It is the rapid development of these areas that is expected to have the most significant effect on the Asian regional economy as a whole, and Chinese government policy is designed to remove the obstacles to accelerated growth in these wealthier regions.
More recently, the government has struggled to contain the social strife and environmental damage related to the economy’s rapid transformation. Battling corruption and other economic crimes, as well as sustaining adequate job growth for tens of millions of workers laid off from state-owned enterprises, migrants, and new entrants to the work force, have also been some of the major challenges. From 50 to 100 million rural workers were adrift between the villages and the cities, many subsisting through part-time, low-paying jobs. Although economic growth has created a strong middle class, hundreds of millions remain excluded from its benefits and inequalities persist. Large-scale underemployment in both urban and rural areas and changing price policies remain a source of concern for the government as potential causes of popular resistance. The prices of certain key commodities, especially industrial raw materials and major industrial products, were determined by the state, and large subsidies were built into the price structure. These subsidies began to be eliminated in the early 1990s, and China’s admission into the World Trade Organization (WTO) in 2001 carried with it requirements for further economic liberalization and deregulation. On a per capita income basis, China ranked 72nd by nominal GDP and 84th by GDP (PPP) in 2015, according to the IMF.
In accordance with the One Country, Two Systems policy, the economies of the former British colony of Hong Kong and Portuguese colony of Macau are separate from the rest of China and each other. Both Hong Kong and Macau are free to conduct and engage in economic negotiations with foreign countries as well as participate as full members in various international economic organizations, often under the names “Hong Kong, China” and “Macau, China.”
Both regions retain their own capitalist economic and political systems.
Four Asian Tigers
The Four Asian Tigers are the economies of Hong Kong, Singapore, South Korea, and Taiwan, which underwent rapid industrialization and maintained exceptionally high growth rates (in excess of 7 percent a year) between the early 1960s (mid-1950s for Hong Kong) and 1990s. By the 21st century, all four had developed into advanced and high-income economies, specializing in areas of competitive advantage. For example, Hong Kong and Singapore have become world-leading international financial centers, whereas South Korea and Taiwan are world leaders in information technology manufacturing. Their economic success stories have served as role models for many developing countries, especially the Tiger Cub Economies (see below).
Export policies have been the de facto reason for the rise of the Four Asian Tiger economies, although the approach taken differed among the four nations. Hong Kong and Singapore introduced trade regimes that were neoliberal in nature and encouraged free trade, while South Korea and Taiwan adopted mixed regimes that accommodated their own export industries. In Hong Kong and Singapore, due to small domestic markets, domestic prices were linked to international prices. South Korea and Taiwan introduced export incentives for the traded-goods sector. The governments of Singapore, South Korea, and Taiwan also worked to promote specific exporting industries in what was termed an export push strategy. All these policies helped the four nations achieve growth averaging 7.5% each year for three decades, and as a result they attained developed-country status.
A controversial World Bank report (see The East Asian Miracle, 1993) credited neoliberal policies with responsibility for the boom, including the maintenance of export-led regimes, low taxes, and minimal welfare states, while also acknowledging some state intervention as a factor. However, many have argued that industrial policy had a much greater influence than the World Bank report suggested. The report itself acknowledged benefits from policies of financial repression, such as state-imposed below-market interest rates for loans to specific exporting industries. Other important factors include major government investments in education, non-democratic and relatively authoritarian political systems during the early years of development, high levels of U.S. bond holdings, and high public and private savings rates.
Tiger Cub Economies
The term Tiger Cub Economies collectively refers to the economies of Indonesia, Malaysia, the Philippines, Thailand, and Vietnam, the five dominant countries in Southeast Asia. They are so named because they follow the same export-driven model of economic development pursued by the Four Asian Tigers. Four of the countries are included in HSBC’s list of top 50 economies in 2050, while Vietnam, Indonesia, and the Philippines are included in Goldman Sachs’s Next Eleven list of economies because of their rapid growth and large populations. Out of these, Vietnam has been projected to become possibly the fastest-growing of the world’s emerging economies by 2020. Like China, Vietnam has a socialist-oriented market economy, a developing economy that combines planning and market elements. In the 21st century, Vietnam has been integrating into the global economy, becoming a leading agricultural exporter and an attractive destination for foreign investment in Southeast Asia.
Overseas Chinese entrepreneurs played a prominent role in the development of the region’s private sectors. These businesses are part of the larger “bamboo network,” a network of overseas Chinese businesses operating in the markets of Malaysia, Indonesia, Thailand, Vietnam, and the Philippines that share common family and cultural ties. China’s transformation into a major economic power in the 21st century has led to increasing investments in Southeast Asian countries where the bamboo network is present.
38.3.2: Tensions in the South China Sea
Several countries have made competing territorial claims over the South China Sea; one-third of the world’s shipping sails through its waters and it is believed to hold huge oil and gas reserves beneath its seabed, making the territorial disputes Asia’s most potentially dangerous source of conflict.
Learning Objective
Identify the causes of territorial disputes in the South China Sea
Key Points
-
The
South China Sea is a marginal sea encompassing an area from the Karimata and Malacca Straits to the Strait of
Taiwan. The sea is
located south of China, east of Vietnam and Cambodia, northwest of the
Philippines, east of the Malay peninsula and Sumatra up to the Strait of
Malacca in the west, and north of the Bangka–Belitung Islands and Borneo. One-third of the world’s shipping
sails through its waters and it is believed the sea holds huge oil and gas
reserves beneath its seabed.
- Several countries have made competing territorial
claims over the South China Sea. These disputes have been seen as Asia’s most
potentially dangerous point of conflict.
Both China and Taiwan claim almost the entire body as their own, demarcating their claims
within what is known as the nine-dash line. Indonesia, the Philippines, Vietnam, Brunei, Malaysia, Cambodia, and Thailand also claim parts of the area.
-
The
area may be rich in oil and natural gas deposits, although estimates vary.
The once abundant fishing opportunities within the region are
another motivation for claims. According to studies by
the Department of Environment and Natural Resources (Philippines), this body of
water holds one-third of the entire world’s marine biodiversity, making
it a very important area for the ecosystem. Finally, the area is one of the busiest shipping
routes in the world.
-
China
and Vietnam have both been vigorous in prosecuting their claims. The
Association of Southeast Asian Nations (ASEAN) in general and Malaysia in
particular have been keen to ensure that the territorial disputes within the
South China Sea do not escalate into armed conflicts. Joint Development
Authorities have been set up in areas of overlapping claims to jointly develop
the area and divide the profits equally, without settling the issue of
sovereignty. Generally, China has preferred to resolve competing
claims bilaterally, while some ASEAN countries prefer multi-lateral talks.
-
In
2011, China attempted to keep India away from the South China Sea waters and protested Indian-Vietnamese cooperation in the oil sector. Vietnam
and Japan reached an agreement early in 1978 on the development of oil in the
South China Sea, which gradually turned Vietnam into a powerful oil producer. In
2012 and 2013, Vietnam and Taiwan clashed over what Vietnam considered
anti-Vietnamese military exercises by Taiwan. In 2014, Indonesia adopted a policy of destroying the vessels of any foreign fishermen caught illegally fishing in Indonesian waters. Since then, many neighboring countries’ fishing vessels have been blown up by Indonesian authorities. The
South China Sea had also become known for Indonesian and Filipino pirates.
-
The United States and China are currently in
disagreement over the South China Sea. The U.S. State Department voiced support for fair access by reiterating that freedom of
navigation and respect of international law are a matter of national interest to
the United States. China’s Foreign Ministry stated that this stand
was “in effect an attack on China.” China has repeatedly warned the U.S. to stay out of the issue, saying that its involvement may lead to a military conflict.
-
The
position of China on its maritime claims based on UNCLOS and history has been
ambiguous, particularly with the nine-dash line map.
China has also
repeatedly indicated that the Chinese claims are drawn on a historical
basis, but the
vast majority of international legal experts have concluded that China’s claims
based on historical claims are invalid.
Key Terms
- United Nations Convention on the Law of the Sea
-
The international agreement that defines the rights and responsibilities of nations with respect to their use of the world’s oceans, establishing guidelines for businesses, the environment, and the management of marine natural resources. It was concluded in 1982.
- Philippines v. China
-
An arbitration case brought by the Republic of the Philippines against the People’s Republic of China under Annex VII to the United Nations Convention on the Law of the Sea (UNCLOS) concerning certain issues in the South China Sea, including the legality of China’s “nine-dash line” claim. In 2013, China declared that it would not participate, but in 2015 the arbitral tribunal ruled that it had jurisdiction over the case. In 2016, the tribunal ruled in favor of the Philippines. China has rejected the ruling, as has Taiwan.
- exclusive economic zone
-
A sea zone prescribed by the United Nations Convention on the Law of the Sea, over which a state has special rights regarding the exploration and use of marine resources, including energy production from water and wind. It stretches from the baseline out to 200 nautical miles (nmi) from its coast. As opposed to the territorial sea, which confers full sovereignty over the waters, this type of a sea zone is merely a “sovereign right,” which refers to the coastal state’s rights below the surface of the sea. The surface waters are international waters.
- nine-dash line
-
A term that refers to the demarcation line used initially by the government of the Republic of China (ROC/Taiwan) and subsequently also by the government of the People’s Republic of China (PRC), for their claims of the major part of the South China Sea. The contested area in the South China Sea includes the Paracel Islands, the Spratly Islands, and various other areas including the Pratas Islands, the Macclesfield Bank, and the Scarborough Shoal. The claim encompasses the area of Chinese land reclamation known as the “great wall of sand.”
- Association of Southeast Asian Nations
-
A regional organization comprising ten Southeast Asian states, which promotes intergovernmental cooperation and facilitates economic integration amongst its members. Since its founding in 1967 by Indonesia, Malaysia, the Philippines, Singapore, and Thailand, the organization’s membership has expanded to include Brunei, Cambodia, Laos, Myanmar (Burma), and Vietnam. Its principal aims include accelerating economic growth, social progress, and sociocultural evolution among its members, alongside the protection of regional stability and the provision of a mechanism for member countries to resolve differences peacefully.
- South China Sea
-
A marginal sea that is part of the Pacific Ocean, encompassing an area from the Karimata and Malacca Straits to the Strait of Taiwan. One-third of the world’s shipping sails through its waters and it is believed the sea holds huge oil and gas reserves beneath its seabed. Several countries have made competing territorial claims over the area.
Territorial Disputes in the South China Sea
The South China Sea is a marginal sea that is part of the Pacific Ocean, encompassing an area from the Karimata and Malacca Straits to the Strait of Taiwan (around 3.5 million sq km or 1.4 million sq mi). The sea is located south of China, east of Vietnam and Cambodia, northwest of the Philippines, east of the Malay peninsula and Sumatra, up to the Strait of Malacca in the west, and north of the Bangka–Belitung Islands and Borneo.
The area’s importance results from the fact that one-third of the world’s shipping sails through its waters and it is believed to hold huge oil and gas reserves beneath its seabed.
Several countries have made competing territorial claims over the South China Sea. These disputes have been seen as Asia’s most potentially dangerous point of conflict. Both People’s Republic of China (PRC) and the Republic of China (ROC, commonly known as Taiwan) claim almost the entire body as their own, demarcating their claims within what is known as the nine-dash line. The area overlaps the exclusive economic zone (EEZ) claims of Brunei, Indonesia, Malaysia, the Philippines, Taiwan, and Vietnam.
Competing claims over parts of the area have also been made by Indonesia, the Philippines, Vietnam, Brunei, Malaysia, Cambodia, and Thailand.
Importance of the South China Sea
The area may be rich in oil and natural gas deposits although estimates vary from 7.5 billion to 125 billion barrels of oil and from 190 trillion cubic feet to 500 trillion cubic feet of gas. The once abundant fishing opportunities within the region are another motivation for claims. China believes that the value in fishing and oil from the sea may be as much as a trillion dollars.
According to studies made by the Department of Environment and Natural Resources (Philippines), this body of water holds one-third of the entire world’s marine biodiversity, making it a very important area for the ecosystem. However, the fish stocks in the area are depleted and countries are using fishing bans to assert their sovereignty claims. Finally, the area is one of the busiest shipping routes in the world. In the 1980s, at least 270 merchant ships used the route each day. Currently, more than half the tonnage of oil transported by sea passes through the South China Sea, a figure rising steadily with the growth of the Chinese consumption of oil. This traffic is three times greater than that passing through the Suez Canal and five times more than the Panama Canal.
Disputes
China and Vietnam have both been vigorous in prosecuting their claims. China (various governments) and South Vietnam each controlled part of the Paracel Islands before 1974. A brief conflict in 1974 resulted in 18 Chinese and 53 Vietnamese deaths and China has controlled the whole of Paracel since then. The Spratly Islands have been the site of a naval clash, in which over 70 Vietnamese sailors were killed in 1988. Disputing claimants regularly report clashes between naval vessels.
The Association of Southeast Asian Nations (ASEAN) in general and Malaysia in particular have been keen to ensure that the territorial disputes within the South China Sea do not escalate into armed conflicts. Joint Development Authorities have been set up in areas of overlapping claims to jointly develop the area and divide the profits equally, without settling the issue of sovereignty. Generally, China has preferred to resolve competing claims bilaterally, while some ASEAN countries prefer multi-lateral talks, believing that they are disadvantaged in bilateral negotiations with China and that because many countries claim the same territory, only multilateral talks could effectively resolve the competing claims. For example,
the International Court of Justice settled the overlapping claims over Pedra Branca/Pulau Batu Putih, including neighboring Middle Rocks, by Singapore and Malaysia in 2008, awarding Pedra Branca/Pulau Batu Puteh to Singapore and Middle Rocks to Malaysia.
In 2011, one of India’s amphibious assault vessels on a friendly visit to Vietnam was reportedly contacted at a distance of 45 nautical miles from the Vietnamese coast in the disputed South China Sea on an open radio channel by a vessel identifying itself as the Chinese Navy and stating that the ship was entering Chinese waters. The spokesperson for the Indian Navy clarified that, as no ship or aircraft was visible, the vessel proceeded on her onward journey as scheduled. The same year, shortly after China and Vietnam had signed an agreement seeking to contain a dispute over the South China Sea, India’s state-run explorer, Oil and Natural Gas Corporation (ONGC), said that its overseas investment arm, ONGC Videsh Limited, had signed a three-year deal with PetroVietnam for developing long-term cooperation in the oil sector and that it had accepted Vietnam’s offer of exploration in certain specified blocks in the South China Sea. In response, Chinese Foreign Ministry spokesperson Jiang Yu issued a protest.
Vietnam and Japan reached an agreement early in 1978 on the development of oil in the South China Sea. By 2012, Vietnam had concluded some 60 oil and gas exploration and production contracts with various foreign companies. In 2011, Vietnam was the sixth-largest oil producer in the Asia-Pacific region, although the country is a net oil importer.
China’s first independently designed and constructed oil drilling platform in the South China Sea is the Ocean Oil 981. It began operation in 2012, 320 kilometers (200 mi) southeast of Hong Kong, employing 160 people. In 2014, the platform was moved near to the Paracel Islands, which propelled Vietnam to state that the move violated their territorial claims. Chinese officials said it was legal, stating the area lies in waters surrounding the Paracel Islands, which China occupies and militarily controls.
In 2012 and 2013, Vietnam and Taiwan clashed over what Vietnam considered anti-Vietnamese military exercises by Taiwan.
Prior to the dispute around the sea areas, fishermen from the countries involved tended to enter islands and EEZs controlled by other countries, which led to conflicts with the authorities that controlled those areas, as the fishermen were unaware of the exact borders. Due to the depletion of fishing resources in their own maritime areas, fishermen felt compelled to fish in neighboring countries’ areas. After Joko Widodo became President of Indonesia in 2014, he imposed a policy of destroying the vessels of any foreign fishermen caught illegally fishing in Indonesian waters. Since then, many neighboring countries’ fishing vessels have been blown up by Indonesian authorities. On May 21, 2015, around 41 fishing vessels from China, Vietnam, Thailand, and the Philippines were blown up. On March 19, 2016, the China Coast Guard prevented its fishermen from being detained by Indonesian authorities when the Chinese fishermen were caught fishing near the waters around Natuna, leading to a protest by Indonesian authorities. Further Indonesian campaigns against foreign fishermen resulted in 23 fishing boats from Malaysia and Vietnam being blown up on April 5, 2016. The South China Sea had also become known for Indonesian pirates, with frequent attacks on Malaysian, Singaporean, and Vietnamese vessels, and for Filipino pirates attacking Vietnamese fishermen.
U.S. Position
The United States and China are currently in disagreement over the South China Sea, exacerbated by the fact that the U.S. is not a member of the United Nations Convention on the Law of the Sea (the United States recognizes the UNCLOS as a codification of customary international law but has not ratified it). Nevertheless, the U.S. has stood by its claim that “peaceful surveillance activities and other military activities without permission in a country’s exclusive economic zone” are allowed under the convention. In relation to the dispute, then-U.S. Secretary of State Hillary Clinton voiced her support for fair access by reiterating that freedom of navigation and respect of international law are a matter of national interest to the United States. China’s Foreign Minister Yang Jiechi stated that the stand was “in effect an attack on China” and warned the United States against making the South China Sea an international or multilateral issue. Clinton testified in support of congressional approval of the Law of the Sea Convention, which would strengthen U.S. ability to support countries that oppose Chinese claims to certain islands in the area. Clinton also called for China to resolve the territorial dispute, but China responded by demanding the U.S. stay out of the issue. This came at a time when both countries were engaging in naval exercises in a show of force to the opposing side, which increased tensions in the region. The U.S. Department of Defense released a statement in which it opposed the use of force to resolve the dispute and accused China of assertive behavior.
In 2014, the United States responded to China’s claims over the fishing grounds of other nations by stating that “China has not offered any explanation or basis under international law for these extensive maritime claims.” While the US pledged American support for the Philippines in its territorial conflicts with the PRC, the Chinese Foreign Ministry asked the United States to maintain a neutral position on the issue. In 2014 and 2015, the United States continued freedom of navigation operations, including in the South China Sea. In 2015, Secretary of Defense Ash Carter warned China to halt its rapid island-building. In November 2015, two US B-52 strategic bombers flew near artificial Chinese-built islands in the area of the Spratly Islands and were contacted by Chinese ground controllers but continued their mission undeterred.
In response to Rex Tillerson’s comments on blocking access to man-made islands in the South China Sea, in January 2017, the Communist Party-controlled Global Times warned of a “large-scale war” between the U.S. and China, noting, “Unless Washington plans to wage a large-scale war in the South China Sea, any other approaches to prevent Chinese access to the islands will be foolish.”
Independent Analysis
The position of China on its maritime claims based on UNCLOS and history has been ambiguous, particularly with the nine-dash line map. For example, in 2011, China stated that it has undisputed sovereignty over the islands and the adjacent waters, suggesting it is claiming sovereignty over its territorial waters, a position consistent with UNCLOS. However, it also stated that China enjoys sovereign rights and jurisdiction over the relevant waters along with the seabed and subsoil contained in this region, suggesting that China is claiming sovereignty over all of the maritime space (including all the geographic features and the waters within the nine-dash line). China has also repeatedly indicated that the Chinese claims are drawn on a historical basis.
The vast majority of international legal experts have concluded that China’s claims based on historical claims are invalid. For example, in 2013,
the Republic of the Philippines brought an arbitration case against the People’s Republic of China under Annex VII to UNCLOS, concerning certain issues in the South China Sea including the legality of China’s “nine-dash line” claim (Philippines v. China, known also as the South China Sea Arbitration). China declared that it would not participate in the arbitration but in 2015, the arbitral tribunal ruled that it had jurisdiction over the case, taking up seven of the 15 submissions made by the Philippines. In 2016, the tribunal ruled in favor of the Philippines. It clarified that it would not “…rule on any question of sovereignty over land territory and would not delimit any maritime boundary between the Parties.” The tribunal also confirmed that China has “no historical rights” based on the “nine-dash line” map. China has rejected the ruling, as has Taiwan.
38.3.3: The Koreas in the Modern Day
Tensions between South Korea and North Korea continue to escalate as the countries never signed a peace treaty after the Korean War and thus formally remain at war, with each incident potentially triggering a military conflict.
Learning Objective
Summarize the remaining tensions between North and South Korea and how the two countries have developed
Key Points
-
In 1998,
South Korean President Kim Dae-jung announced the so-called Sunshine Policy
towards North Korea. The main aim of the policy was to soften North Korea’s
attitudes towards the South by encouraging interaction and economic assistance.
In 2000, the first Inter-Korean Summit between Kim Dae-jung and
Kim Jong-il took place. As a result, Kim Dae-jung was awarded the Nobel Peace
Prize.
-
The June 15 North–South
Joint Declaration the two leaders signed during the first South-North
summit stated that they would hold the second summit at an appropriate time. It
was originally envisaged that the second summit would be held in South Korea,
but that did not materialize. In 2007, South Korean President
Roh Moo-hyun and North Korean leader Kim Jong-il signed the peace declaration.
The document called for international talks to replace the Armistice that ended the Korean War with a permanent peace treaty.
-
In 2008, the new
president of the South, Lee Myung-bak, and his Grand National Party took a different stance toward North Korea, and the South Korean government stated that any
expansion of the economic cooperation at the Kaesong Industrial Region would
only happen if the North resolved the international standoff over its nuclear
weapons. In 2010, the
South Korean Unification Ministry officially declared the Sunshine Policy a
failure, thus bringing it to an end.
-
In
2011, the supreme leader of North Korea Kim Jong-il died from a heart attack.
His youngest son Kim Jong-un was announced as his successor. Under Kim Jong-un, North
Korea has continued to develop nuclear weapons. In 2016,
Kim Jong-un stated that North Korea would “not use nuclear weapons first
unless aggressive hostile forces use nuclear weapons to invade on our sovereignty.” However, on other occasions, North Korea has threatened “preemptive” nuclear attacks against a U.S.-led attack. Under Kim Jong-un, extreme human rights abuses and food insecurity remain major issues in North Korea.
-
In recent years, several incidents have contributed to the growing tensions between South Korea and North Korea, including the sinking of a South Korean ship by a North Korean torpedo, North Korea’s launch of a scientific and technological satellite that reached orbit, and North Korea’s planting of a mine that went off at the Korean Demilitarized Zone, wounding two South Korean soldiers.
-
In 2016, North Korea
carried out its fifth nuclear test on the 68th anniversary of the state’s founding. South Korea responded with a plan to assassinate Kim
Jong-un. In February 2017, Kim
Jong-nam, the eldest son of Kim Jong-il and half-brother of Kim Jong-un who
from 1994 to 2001 was considered the heir apparent to his father, died after
being attacked with a chemical weapon at the Kuala Lumpur International Airport. Kim Myung-yeon, a spokesperson for
South Korea’s ruling party, described the killing as a “naked example of
Kim Jong-un’s reign of terror.”
Key Terms
- June 15th North–South Joint Declaration
-
An agreement adopted between leaders of North and South Korea in June 2000 after various diplomatic meetings between the North and the South. As a result of the talks, numerous separated families and relatives from the North and the South had meetings with their family members in Pyongyang and Seoul. Ministerial talks and North-South military working-level talks also followed in the second half of the year. North-South Red Cross talks and the working-level contacts for the North and South economic cooperation also took place.
- North Korean famine
-
A famine that killed somewhere between 240,000 and 3.5 million North Koreans between 1994 and 1998. It stemmed from a variety of factors. Economic mismanagement and the loss of Soviet support caused food production and imports to decline rapidly. A series of floods and droughts exacerbated the crisis. The North Korean government and its centrally planned system proved too inflexible to effectively curtail the disaster.
- Korean Demilitarized Zone
-
A highly militarized strip of land running across the Korean Peninsula. It was established at the end of the Korean War to serve as a buffer zone between the Democratic People’s Republic of Korea (North Korea) and the Republic of Korea (South Korea). It is a de facto border barrier that divides the Korean Peninsula roughly in half. It was created by agreement between North Korea, China, and the United Nations in 1953.
- Sunshine Policy
-
The foreign policy of South Korea towards North Korea from 1998 to 2008. Since its articulation by South Korean President Kim Dae-jung, the policy resulted in greater political contact between the two states and some historic moments in inter-Korean relations.
Sunshine Policy
In 1998, South Korean President Kim Dae-jung announced the so-called Sunshine Policy towards North Korea.
The main aim of the policy was to soften North Korea’s attitudes towards the South by encouraging interaction and economic assistance. The national security policy had three basic principles: no armed provocation by the North will be tolerated, the South will not attempt to absorb the North in any way, and the South actively seeks cooperation. Despite a naval clash in 1999, in 2000, the first Inter-Korean Summit between Kim Dae-jung and Kim Jong-il took place. As a result, Kim Dae-jung was awarded the Nobel Peace Prize. The summit was followed by the reunion of families divided by the Korean War. The same year, the North and South Korean teams marched together at the Sydney Olympics. Trade increased to the point where South Korea became North Korea’s largest trading partner. In 2003, the Kaesong Industrial Region was established to allow South Korean businesses to invest in the North. U.S. President George W. Bush, however, did not support the Sunshine Policy and in 2002 branded North Korea as a member of an Axis of Evil.
The June 15 North-South Joint Declaration
that the two leaders signed during the first South-North summit stated that they would hold the second summit at an appropriate time. It was originally envisaged that the second summit would be held in South Korea, but that did not materialize. South Korean President Roh Moo-hyun walked across the Korean Demilitarized Zone in 2007 and traveled on to Pyongyang for talks with Kim Jong-il. The two sides reaffirmed the spirit of the June 15 Joint Declaration and had discussions on various issues related to realizing the advancement of South-North relations, peace on the Korean Peninsula, common prosperity of the people, and the unification of Korea. South Korean President Roh Moo-hyun and North Korean leader Kim Jong-il signed the peace declaration. The document called for international talks to replace the Armistice which ended the Korean War with a permanent peace treaty.
In 2008, however, the new president of the South, Lee Myung-bak, and his Grand National Party took a different stance toward North Korea, and the South Korean government stated that any expansion of the economic cooperation at the Kaesong Industrial Region would only happen if the North resolved the international standoff over its nuclear weapons. Relations again chilled, with North Korea making military moves such as a series of short-range ship-to-ship missile tests. South Korea’s response to North Korea’s 2009 nuclear test included joining the Proliferation Security Initiative to prevent the shipment of nuclear materials to North Korea. In November 2010, the South Korean Unification Ministry officially declared the Sunshine Policy a failure, thus bringing the policy to an end.
Kim Jong-un’s Rule
In 2011, the supreme leader of North Korea Kim Jong-il died from a heart attack. His youngest son Kim Jong-un was announced as his successor.
In December 2011, the leading North Korean newspaper Rodong Sinmun announced that Kim Jong-un had been acting as chairman of the Central Military Commission and supreme leader of the country. In 2012, a large rally was held by the Korean People’s Army in front of Kumsusan Memorial Palace to honor Kim Jong-un and demonstrate loyalty.
North Korea’s cult of personality around Kim Jong-un was stepped up following his father’s death.
Under Kim Jong-un, North Korea has continued to develop nuclear weapons. In 2013, Kim Jong-un announced that North Korea will adopt “a new strategic line on carrying out economic construction and building nuclear armed forces simultaneously.” According to several analysts, North Korea sees the nuclear arsenal as vital to deter an attack, but it is unlikely that the country would launch a nuclear war. In 2016, Kim Jong-un stated that North Korea would “not use nuclear weapons first unless aggressive hostile forces use nuclear weapons to invade on our sovereignty.” However, on other occasions, North Korea has threatened “preemptive” nuclear attacks against a U.S.-led attack. As of 2016, the United Nations has enacted five cumulative rounds of sanctions against North Korea for its nuclear program and missile tests.
Human rights violations under the leadership of Kim Jong-il were condemned by the UN General Assembly. Press reports indicate that they are continuing under Kim Jong-un. The 2013 report on the situation of human rights in North Korea by United Nations Special Rapporteur Marzuki Darusman proposed a United Nations commission of inquiry to document the accountability of Kim Jong-un and other individuals in the North Korean government for alleged crimes against humanity. The report of the commission of inquiry was published in 2014 and recommends making him accountable for crimes against humanity at the International Criminal Court.
A 2013 study reported that communicable diseases and malnutrition are responsible for 29% of the total deaths in North Korea. This figure is higher than those of high-income countries and South Korea, but about half of the 57% average for other low-income countries. Infectious diseases like tuberculosis, malaria, and hepatitis B are considered endemic as a result of the North Korean famine (1994-1998).
The famine had a significant impact on the population growth rate, which declined to 0.9% annually in 2002 and 0.53% in 2014.
In 2006, the World Food Program (WFP) and the Food and Agriculture Organization estimated a requirement of 5.3 to 6.5 million tons of grain in aid when domestic production fulfilled only 3.8 million tons. The country also faces land degradation after forests stripped for agriculture resulted in soil erosion. In 2008, a decade after the worst years of the famine, total production was 3.3 million tons (grain equivalent) compared with a need of 6 million tons. 37 percent of the population was deemed to be insecure in food access. Weather continued to pose challenges every year, but overall food production has grown gradually. In 2014, North Korea had an exceptionally good harvest, 5.08 million tonnes of cereal equivalent, almost sufficient to feed the entire population. While food production has recovered significantly since the hardest years of 1996 and 1997, the recovery is fragile, subject to adverse weather and year-to-year economic shortages.
North Korea’s GDP per capita remained below $2,000 in the late 1990s and the early 21st century.
Inter-Korean Relations Today
In recent years, several incidents have contributed to the growing tensions between South Korea and North Korea. In 2010, a South Korean ship with a crew of 104 sank in the Yellow Sea. Forty-six individuals died and 58 were rescued. A team of international researchers investigating the incident concluded that the sinking was caused by a North Korean torpedo; North Korea rejected the findings. South Korea agreed with the findings, and President Lee Myung-bak declared that Seoul would cut all trade with North Korea as part of measures primarily aimed at striking back at North Korea diplomatically and financially. North Korea denied all such allegations, severed ties between the countries, and announced that it was abrogating the previous non-aggression agreement. The same year, North Korea’s artillery fired at South Korea’s Yeonpyeong Island in the Yellow Sea and South Korea returned fire. The town was evacuated and South Korea warned of stern retaliation, with President Lee Myung-bak ordering the destruction of a nearby North Korean missile base if further provocation should occur.
In December 2012, North Korea launched a scientific and technological satellite, which reached orbit, and the United States moved warships to the region. In 2013, tensions between North Korea and South Korea, the United States, and Japan escalated following United Nations Security Council Resolution 2087, which condemned North Korea for the satellite launch. The crisis was marked by extreme escalation of rhetoric by the new North Korean administration under Kim Jong-un and by actions suggesting imminent nuclear attacks against South Korea, Japan, and the United States.
In 2015, Kim Jong-un stated in his New Year's address that he was willing to resume higher-level talks with the South. However, in August 2015, a landmine exploded at the Korean Demilitarized Zone, wounding two South Korean soldiers. The South Korean government accused the North of planting the mine, which the North denied; South Korea then began propaganda broadcasts directed at the North. The same month, North Korea fired a shell at the South Korean town of Yeoncheon, and South Korea launched several artillery rounds in response. Although there were no casualties, the exchange led to the evacuation of an area on the west coast of South Korea and forced other residents into bunkers. The shelling put both countries on a pre-war footing, and high-level officials met at Panmunjom to defuse tensions. While the talks were under way, North Korea deployed over 70 percent of its submarines; the talks nevertheless concluded with an agreement that eased military tensions.
In 2016, North Korea carried out its fifth nuclear test, timed to mark the 68th anniversary of the state's founding. South Korea responded by announcing a plan to assassinate Kim Jong-un.
In February 2017, Kim Jong-nam, the eldest son of Kim Jong-il and half-brother of Kim Jong-un, who from 1994 to 2001 had been considered the heir apparent to his father, was attacked with VX nerve agent (a chemical weapon) by two women at Kuala Lumpur International Airport in Malaysia while returning to Macau, where he lived in exile, and died shortly afterward. Kim Myung-yeon, a spokesperson for South Korea's ruling party, described the killing as a "naked example of Kim Jong-un's reign of terror." The South Korean government accused the North Korean government of responsibility for Kim Jong-nam's assassination and drew a parallel with the execution of Kim Jong-un's own uncle and others. The government later held an emergency security council meeting at which it condemned the murder. The acting President of South Korea, Hwang Kyo-ahn, said that if the murder of Kim Jong-nam was confirmed to have been masterminded by North Korea, it would clearly depict the brutality and inhumanity of the Kim Jong-un regime.
38.3.4: India under Modi
India under Modi, its right-wing, nationalist Prime Minister, has undergone numerous neoliberal reforms that have contributed to impressive economic growth, pleasing businesspeople and industrialists but widening inequality between the wealthy and the poor and highlighting the ongoing challenges of poverty, corruption, and gender violence.
Learning Objective
Explain who Narendra Modi is and the status of India in the 21st century
Key Points
- Narendra Modi is the current Prime Minister of India (as of March 2017). He is a member of the Bharatiya Janata Party and of the Rashtriya Swayamsevak Sangh (RSS), a right-wing, Hindu nationalist, paramilitary volunteer organization. Modi was appointed chief minister of Gujarat in 2001. His administration has been considered complicit in the 2002 Gujarat riots. In 2012, Modi was cleared of complicity in the violence by a Special Investigation Team (SIT) appointed by the Supreme Court of India, but the question remains controversial.
- Modi led the BJP in the 2014 general election, which gave the party a majority in parliament. The economic policies of Modi's government focused on privatization and liberalization of the economy based on a neoliberal framework. Modi updated India's foreign direct investment policies to allow more foreign investment in several industries, including defense and the railways. Other reforms included the removal of many of the country's labor laws to make it harder for workers to form unions and easier for employers to hire and fire them. These reforms met with support from institutions such as the World Bank, but opposition from scholars and unions.
- In 2014, Modi introduced the Make in India initiative to encourage foreign companies to manufacture products in India, with the goal of turning the country into a global manufacturing hub. In 2015, he launched a program intended to develop 100 smart cities and the Housing for All By 2022 project, which intends to eliminate slums in India by building about 20 million affordable homes for India's urban poor.
- Modi's government reduced the amount of money spent by the government on healthcare and launched a New Health Policy, which emphasizes the role of private healthcare. He also launched the Clean India campaign (2014) to eliminate open defecation and manual scavenging. As part of the program, the Indian government began constructing millions of toilets in rural areas and encouraging people to use them.
- In naming his cabinet, Modi renamed the Ministry of Environment and Forests the Ministry of Environment, Forests, and Climate Change. In the first budget of the government, the money allotted to this ministry was reduced by more than 50%. The new ministry also removed or diluted a number of laws related to environmental protection.
- Massive corruption, widespread poverty, and violence against girls and women constitute some of the greatest challenges in 21st-century India. According to the 2014 revised World Bank methodology, India had 179.6 million people below the poverty line, which means that, with 17.5% of the world's population, India accounted for 20.6% of the world's poor. Findings from the World Economic Forum have repeatedly indicated that India is one of the worst countries in the world in terms of gender inequality.
Key Terms
- Rashtriya Swayamsevak Sangh
-
A right-wing, Hindu nationalist, paramilitary volunteer organization in India widely regarded as the parent organization of the ruling party of India, the Bharatiya Janata Party. Founded in 1925, it is the world’s largest non-governmental organization that claims commitment to selfless service to India.
- Bharatiya Janata Party
-
One of the two major political parties in India, along with the Indian National Congress. As of 2017, it is the country’s largest political party in terms of representation in the national parliament and state assemblies and the world’s largest party in terms of primary membership. It is a right-wing party with close ideological and organizational links to the Hindu nationalist Rashtriya Swayamsevak Sangh.
- 2002 Gujarat riots
-
A three-day period of inter-communal violence in the western Indian state of Gujarat in 2002. Following the initial incident, there were further outbreaks of violence in Ahmedabad for three weeks. Statewide, there were further outbreaks of communal riots against the minority Muslim population for three months.
The Chief Minister of Gujarat at that time, Narendra Modi, has been accused of initiating and condoning the violence as have police and government officials who allegedly directed the rioters and gave them lists of Muslim-owned properties.
Narendra Modi
Narendra Modi (b. 1950) is the current Prime Minister of India (as of March 2017), in office since May 2014. He was the Chief Minister of Gujarat from 2001 to 2014. He is the Member of Parliament for Varanasi (Uttar Pradesh), a member of the Bharatiya Janata Party (BJP; one of the two major political parties in India, along with the Indian National Congress), and a member of the Rashtriya Swayamsevak Sangh (RSS), a right-wing, Hindu nationalist, paramilitary volunteer organization in India widely regarded as the parent organization of the BJP.
Born to a Gujarati family in Vadnagar, Modi helped his father sell tea as a child and later ran his own stall. He was introduced to the RSS at age eight, beginning a long association with the organization. He left home after graduating from school, partly because of an arranged marriage, which he did not accept. Modi traveled around India for two years and visited a number of religious centers. In 1971 he became a full-time worker for the RSS. During the state of emergency imposed across the country in 1975, Modi was forced to go into hiding. The RSS assigned him to the BJP in 1985 and he held several positions within the party hierarchy until 2001, rising to the rank of general secretary.
Modi was appointed chief minister of Gujarat in 2001. His administration has been considered complicit in the 2002 Gujarat riots, a three-day period of inter-communal violence. Following the initial incident, there were further outbreaks of violence in Ahmedabad for three weeks. Statewide, communal riots against the minority Muslim population occurred for three months. According to official figures, the riots resulted in the deaths of 790 Muslims and 254 Hindus. 2,500 people were injured non-fatally and 223 more were reported missing. There were instances of rape, children being burned alive, and widespread looting and destruction of property. Modi has been accused of initiating and condoning the violence as have police and government officials who allegedly directed the rioters and gave them lists of Muslim-owned properties. In 2012, Modi was cleared of complicity in the violence by a Special Investigation Team (SIT) appointed by the Supreme Court of India. The SIT also rejected claims that the state government had not done enough to prevent the riots. The Muslim community reacted with anger and disbelief. In 2013, allegations were made that the SIT had suppressed evidence, but the Supreme Court expressed satisfaction over the SIT’s investigations. While officially classified as a communalist riot, the events have been described as a pogrom by many scholars. Other observers have stated that these events met the legal definition of genocide and called it an instance of state terrorism or ethnic cleansing.
India Under Modi
Modi led the BJP in the 2014 general election, which gave the party a majority in the parliament, the first time a single party had achieved this since 1984. Credited with engineering a political realignment towards right-wing politics, Modi remains a figure of controversy domestically and internationally over his Hindu nationalist beliefs and his role during the 2002 Gujarat riots, cited as evidence of an exclusionary social agenda.
The economic policies of Modi’s government focused on privatization and liberalization of the economy based on a neoliberal framework. Modi updated India’s foreign direct investment policies to allow more foreign investment in several industries, including defense and the railways. Other reforms included removing many of the country’s labor laws to make it harder for workers to form unions and easier for employers to hire and fire them. These reforms met with support from institutions such as the World Bank, but opposition from scholars within the country. The labor laws also drew strong opposition from unions. The funds dedicated to poverty reduction programs and social welfare measures were greatly decreased by the Modi administration. The government also lowered corporate taxes, abolished the wealth tax, reduced customs duties on gold and jewelry, and increased sales taxes.
In 2014, Modi introduced the Make in India initiative to encourage foreign companies to manufacture products in India, with the goal of turning the country into a global manufacturing hub. Supporters of economic liberalization supported the initiative, while critics argued it would allow foreign corporations to capture a greater share of the Indian market. To enable the construction of private industrial corridors, the Modi administration passed a land-reform bill that allowed it to acquire private agricultural land without conducting social impact assessment and without the consent of the farmers who owned it. The bill was passed via an executive order after it faced opposition in parliament, but was eventually allowed to lapse. In 2015, Modi launched a program intended to develop 100 smart cities, which is expected to bring information technology companies an extra benefit of ₹20 billion (US$300 million). Modi also launched the Housing for All By 2022 project, which intends to eliminate slums in India by building about 20 million affordable homes for India’s urban poor.
Modi’s government reduced the amount of money spent by the government on healthcare and launched a New Health Policy, which emphasizes the role of private healthcare. This represented a shift away from the policy of the previous Congress government, which had supported programs to assist public health goals, including reducing child and maternal mortality rates. Modi also launched the Clean India campaign (2014) to eliminate open defecation and manual scavenging. As part of the program, the Indian government began constructing millions of toilets in rural areas and encouraging people to use them. The government also announced plans to build new sewage treatment plants.
In naming his cabinet, Modi renamed the Ministry of Environment and Forests the Ministry of Environment, Forests, and Climate Change. In the first budget of the government, the money allotted to this ministry was reduced by more than 50%. The new ministry also removed or diluted a number of laws related to environmental protection. These included no longer requiring clearance from the National Board for Wildlife for projects close to protected areas and allowing certain projects to proceed before environmental clearance was received. Modi also relaxed or abolished a number of other environmental regulations, particularly those related to industrial activity. A government committee stated that the existing system only created corruption and that the government should instead rely on the owners of industries to voluntarily inform the government about the pollution they were creating. In addition, Modi lifted a moratorium on new industrial activity in the most polluted areas. The changes were welcomed by businesspeople, but criticized by environmentalists.
Challenges in 21st-Century India
Corruption has been one of the most pervasive problems affecting India.
In 2015, India was ranked 76th out of 168 countries in Transparency International's Corruption Perceptions Index. The largest contributors to corruption are the social welfare programs and social spending schemes enacted by the Indian government. The media have widely published allegations of corrupt Indian citizens stashing millions of rupees in Swiss banks; Swiss authorities, however, denied these allegations, which were subsequently proven in 2015-2016. The Indian media are largely owned by corrupt politicians and industrialists who also play a major role in many of these scams, misleading the public with false information and using media outlets to sling mud at their political and business opponents. The causes of corruption in India include excessive regulations; complicated tax and licensing systems; numerous government departments, each with opaque bureaucracy and discretionary powers; a monopoly of government-controlled institutions over certain goods and services; and a lack of transparent laws and processes.
Poverty in India continues to be a critical issue, despite the country having one of the fastest-growing economies in the world. According to the Global Wealth Report 2016 compiled by the Credit Suisse Research Institute, India is the second most unequal country in the world, with the top one percent of the population owning nearly 60% of the total wealth. Another urgent problem facing India's economy is the sharp and growing regional variation among its states and territories in terms of poverty, availability of infrastructure, and socio-economic development. Six low-income states – Assam, Chhattisgarh, Nagaland, Madhya Pradesh, Odisha, and Uttar Pradesh – are home to more than one-third of India's population. Severe disparities exist among states in terms of income, literacy rates, life expectancy, and living conditions. Following Modi's liberalization, the more advanced states have been better placed to benefit from reforms, with well-developed infrastructure and an educated and skilled workforce that attract the manufacturing and service sectors. There is a continuing debate on whether India's economic expansion has been pro-poor or anti-poor. Studies suggest that economic growth has been pro-poor and has reduced poverty in India, although the statistics continue to paint a dire picture.
According to the 2014 revised World Bank methodology, India had 179.6 million people below the poverty line, which means that, with 17.5% of the world's population, India accounted for 20.6% of the world's poor.
Women in India continue to face numerous problems, including violent victimization through rape, acid throwing, dowry killings, marital rape, and the forced prostitution of young girls. In 2012, the Thomson Reuters Foundation ranked India as the worst G20 country in which to be a woman.
Although this report has faced criticism for inaccuracy, findings from the World Economic Forum have repeatedly indicated that India is one of the worst countries in the world in terms of gender inequality.
38.4: Africa in the 21st Century
38.4.1: Sudan and the Conflict in Darfur
A major armed conflict in the Darfur region of Sudan began in 2003 when the Sudan Liberation Movement and the Justice and Equality Movement rebel groups accused the government of Sudan of oppressing Darfur's non-Arab population, leading to a massive humanitarian crisis in a country ravaged by civil wars for decades.
Learning Objective
Discuss the controversy over the events in Darfur
Key Points
- The War in Darfur is a major armed conflict in the Darfur region of Sudan that began in 2003 when the Sudan Liberation Movement and the Justice and Equality Movement rebel groups began fighting the government of Sudan, which they accused of oppressing Darfur's non-Arab population. Other factors at the root of the conflict included land disputes between semi-nomadic livestock herders and those who practice sedentary agriculture, competition over water access, and the legacy of the Second Sudanese Civil War.
- In response, the government mounted a campaign of aerial bombardment supporting ground attacks by an Arab militia, the Janjaweed. The government-supported Janjaweed were accused of committing major human rights violations, including mass killing, looting, and systematic rape of the non-Arab population of Darfur. They have frequently burned down whole villages, driving the surviving inhabitants to flee to refugee camps, mainly in Darfur and Chad.
- The Government of Sudan and the SLM of Minni Minnawi signed a Darfur Peace Agreement in 2006, but since only one rebel group subscribed to the agreement, the conflict continued. The 2011 Darfur Peace Agreement, also known as the Doha Agreement, was signed between the government of Sudan and the Liberation and Justice Movement. Although the conflict is considered resolved, civil conflicts in Sudan continue.
- Immediately after the Janjaweed entered the conflict, rapes of women and young girls were reported at a staggering rate. Multiple casualty estimates have been published since the war began, ranging from roughly 10,000 civilians (Sudan government) to hundreds of thousands. In 2004, United States Secretary of State Colin Powell declared the Darfur conflict to be genocide, although experts continue to disagree over whether the war crimes committed during the conflict fall into that category.
- International attention to the Darfur conflict largely began with reports of war crimes by Amnesty International and the International Crisis Group in 2003. However, widespread media coverage did not start until the outgoing United Nations Resident and Humanitarian Coordinator for Sudan, Mukesh Kapila, called Darfur the "world's greatest humanitarian crisis" in 2004. In 2008, the International Criminal Court filed ten charges of war crimes against Sudan's President Omar al-Bashir.
- In 2011, a referendum was held to determine whether South Sudan should become an independent country and separate from Sudan. South Sudan, where the majority of the population adheres either to indigenous religions or Christianity, formally became independent from predominantly Muslim Sudan. The country continues to be ravaged by civil wars, is the least developed country in the world, and faces a massive humanitarian crisis.
Key Terms
- War in Darfur
-
A major armed conflict in the Darfur region of Sudan that began in 2003 when the Sudan Liberation Movement and the Justice and Equality Movement rebel groups began fighting the government of Sudan, which they accused of oppressing Darfur’s non-Arab population. As of 2017, the war is nominally resolved.
- Second Sudanese Civil War
-
A conflict from 1983 to 2005 between the central Sudanese government and the Sudan People’s Liberation Army. Although it originated in southern Sudan, the civil war spread to the Nuba mountains and Blue Nile. It lasted for 22 years and is one of the longest civil wars on record. The war resulted in the independence of South Sudan six years after it ended.
- South Sudanese Civil War
-
A conflict in South Sudan between forces of the government and opposition forces. In 2013, President Kiir accused his former deputy Riek Machar and ten others of attempting a coup d'état. Machar denied trying to start a coup and fled. Fighting broke out, igniting the civil war. Ugandan troops were deployed to fight alongside South Sudanese government forces. The United Nations has peacekeepers in the country as part of the United Nations Mission in South Sudan.
- Janjaweed
-
A militia that operates in western Sudan and eastern Chad. Using the United Nations definition, it comprises Sudanese Arab tribes, the core of whom are from the Abbala (camel herder) background, with significant recruitment from the Baggara (cattle herder) people. This UN definition may not be entirely accurate, as members from other tribes have been noted.
War in Darfur
The War in Darfur is a major armed conflict in the Darfur region of Sudan that began in 2003 when the Sudan Liberation Movement (SLM or Sudan Liberation Army – SLA) and the Justice and Equality Movement (JEM) rebel groups began fighting the government of Sudan, which they accused of oppressing Darfur’s non-Arab population.
Several other factors have been identified at the roots of the present conflict. One involves land disputes between semi-nomadic livestock herders and those who practice sedentary agriculture. Water access has also been a major source of the conflict. The Darfur crisis is also related to the Second Sudanese Civil War, which raged in southern Sudan for decades between the northern, Arab-dominated government and Christian and animist black southerners.
The region became the scene of a rebellion in 2003 when the JEM and the SLM accused the government of oppressing non-Arabs in favor of Arabs. The government was also accused of neglecting the Darfur region. In response, it mounted a campaign of aerial bombardment supporting ground attacks by an Arab militia, the Janjaweed. The government-supported Janjaweed were accused of committing major human rights violations, including mass killing, looting, and systematic rape of the non-Arab population of Darfur. They have frequently burned down whole villages, driving the surviving inhabitants to flee to refugee camps, mainly in Darfur and Chad. By mid-2004, 50,000 to 80,000 people had been killed and at least a million driven from their homes, causing a major humanitarian crisis in the region.
The Government of Sudan and the SLM of Minni Minnawi signed a Darfur Peace Agreement in 2006. Only one rebel group, the SLM, subscribed to the agreement. The JEM rejected it, resulting in a continuation of the conflict. The agreement included provisions for wealth sharing and power sharing and established a Transitional Darfur Regional Authority to help administer Darfur until a referendum could take place on the future of the region. The leader of the SLM, Minni Minnawi, was appointed Senior Assistant to the President of Sudan and Chairman of the transitional authority in 2007.
In 2010, representatives of the Liberation and Justice Movement (LJM), an umbrella organization of ten rebel groups formed that year, started a fresh round of talks with the Sudanese Government in Doha, Qatar. A new rebel group, the Sudanese Alliance Resistance Forces in Darfur, was formed and the JEM planned further talks. These talks ended without a new peace agreement, but participants agreed on basic principles, including a regional authority and a referendum on autonomy for Darfur. In 2011, the leader of the LJM, Tijani Sese, stated that the movement had accepted the core proposals of the Darfur peace document proposed by the joint-mediators in Doha.
The 2011 Darfur Peace Agreement, also known as the Doha Agreement, was signed between the government of Sudan and the LJM. This agreement established a compensation fund for victims of the Darfur conflict, allowed the President of Sudan to appoint a vice president from Darfur, and established a new Darfur Regional Authority to oversee the region until a referendum can determine its permanent status within the Republic of Sudan. The agreement also provided for power sharing at the national level.
Social Impact of War
Immediately after the Janjaweed entered the conflict, the rape of women and young girls, often by multiple militiamen and throughout entire nights, was reported at a staggering rate. Children as young as 2 years old were victims, while mothers were assaulted in front of their children. Young women were attacked so violently that they were unable to walk following the attack. Non-Arab individuals were reportedly raped by Janjaweed militiamen as a result of the Sudanese government’s goal to completely eliminate black Africans and non-Arabs from Darfur.
Multiple casualty estimates have been published since the war began, ranging from roughly 10,000 civilians (Sudan government) to hundreds of thousands. In 2005, the UN’s Emergency Relief Coordinator Jan Egeland estimated that 10,000 were dying each month, excluding deaths due to ethnic violence. An estimated 2.7 million people had been displaced from their homes, mostly seeking refuge in camps in Darfur’s major towns. In 2010, the Center for Research on the Epidemiology of Disasters published an article in a special issue of The Lancet. The article, entitled Patterns of mortality rates in Darfur conflict, estimated with 95% confidence that the excess number of deaths is between 178,258 and 461,520 (with a mean of 298,271), with 80% of these due to disease.
In 2004, in testimony before the Senate Foreign Relations Committee, United States Secretary of State Colin Powell declared the Darfur conflict to be genocide. However, in 2005, an International Commission of Inquiry on Darfur, authorized by UN Security Council Resolution 1564 of 2004, issued a report stating that "the Government of the Sudan has not pursued a policy of genocide." Nevertheless, the Commission cautioned, "The conclusion that no genocidal policy has been pursued and implemented in Darfur by the Government authorities, directly or through the militias under their control, should not be taken in any way as detracting from the gravity of the crimes perpetrated in that region. International offences such as the crimes against humanity and war crimes that have been committed in Darfur may be no less serious and heinous than genocide." In 2007, the International Criminal Court (ICC) issued arrest warrants against the former Minister of State for the Interior, Ahmad Harun, and a Janjaweed militia leader, Ali Kushayb, for crimes against humanity and war crimes. In 2008, the ICC filed ten charges of war crimes against Sudan's President Omar al-Bashir: three counts of genocide, five of crimes against humanity, and two of murder. Prosecutors claimed that al-Bashir "masterminded and implemented a plan to destroy in substantial part" three tribal groups in Darfur because of their ethnicity. In 2009, the ICC issued a warrant for al-Bashir's arrest for crimes against humanity and war crimes, but not genocide. This was the first warrant issued by the ICC against a sitting head of state.
International Response
International attention to the Darfur conflict largely began with reports of war crimes by Amnesty International and the International Crisis Group in 2003. However, widespread media coverage did not start until the outgoing United Nations Resident and Humanitarian Coordinator for Sudan, Mukesh Kapila, called Darfur the “world’s greatest humanitarian crisis” in 2004. Organizations such as STAND: A Student Anti-Genocide Coalition, later under the umbrella of Genocide Intervention Network, and the Save Darfur Coalition emerged and became particularly active in the areas of engaging the United States Congress and President.
Al-Bashir is not expected to face trial until he is apprehended in a nation that accepts ICC jurisdiction, as Sudan is not a party to the Rome Statute, which it signed but did not ratify. The Sudanese government has announced that the presidential plane would be accompanied by jet fighters. The Arab League announced solidarity with al-Bashir, and since the warrant he has visited Qatar and Egypt. The African Union also condemned the charges. Some analysts argue that the ICC indictment is counterproductive and harms the peace process. Only days after the ICC indictment, al-Bashir expelled 13 international aid organizations from Darfur and disbanded three domestic aid organizations. In the aftermath of the expulsions, conditions in the displaced persons' camps deteriorated.
South Sudan
The Second Sudanese Civil War was a conflict from 1983 to 2005 between the central Sudanese government and the Sudan People’s Liberation Army. It was largely a continuation of the First Sudanese Civil War of 1955 to 1972. It is one of the longest civil wars on record (22 years). A peace agreement was signed in 2005 and one of its promises was the autonomy of the south within the next six years, followed by a referendum on independence.
In 2011, a referendum was held to determine whether South Sudan should become an independent country and separate from Sudan; 98.83% of voters chose independence. South Sudan, where the majority of the population adheres either to indigenous religions or Christianity, formally became independent from predominantly Muslim Sudan, although certain disputes remained, including the division of oil revenues, as 75% of the former Sudan's oil reserves lie in South Sudan. South Sudan continues to be ravaged by civil wars, with tens of thousands displaced. The fighters accuse the government of plotting to stay in power indefinitely, of not fairly representing and supporting all tribal groups, and of neglecting development in rural areas. Inter-ethnic warfare that in some cases predates the war of independence is widespread.
In 2013, a political power struggle broke out between President Kiir and his former deputy Riek Machar, as the president accused Machar and ten others of attempting a coup d’état. Fighting broke out, igniting the South Sudanese Civil War. Up to 300,000 people are estimated to have been killed in the war, including in massacres. Although both men have supporters from across South Sudan’s ethnic divides, subsequent fighting has been communal, with rebels targeting members of Kiir’s Dinka ethnic group and government soldiers attacking Nuers. About 3 million people have been displaced in a country of 12 million, with about 2 million internally displaced and about 1 million fleeing to neighboring countries, especially Kenya, Sudan, and Uganda.
Ravaged by conflicts, South Sudan has the least developed economy in the world and is acknowledged to have some of the worst health indicators. About half the population does not have access to an improved water source, defined as a protected well, standpipe, or handpump within 1 km. In 2017, South Sudan and the United Nations declared a famine in parts of the country, warning that it could spread rapidly without further action. The UN World Food Program notes that 40% of the population of South Sudan, 4.9 million people, urgently need food.
38.4.2: Nigeria and Boko Haram
Boko Haram is an Islamic extremist group based in northeastern Nigeria, which pledged its allegiance to the Islamic State of Iraq and the Levant. Since 2009 it has been trying to overthrow the Nigerian government to establish an Islamic state.
Learning Objective
Account for the rise of Boko Haram in Nigeria
Key Points
- Boko Haram is an Islamic extremist group based in northeastern Nigeria, also active in Chad, Niger, and northern Cameroon. Mohammed Yusuf founded it in 2002 when he established a religious complex and school that attracted poor Muslim families from across Nigeria and neighboring countries. The center had the political goal of creating an Islamic state and became a recruiting ground for jihadists. By denouncing the police and state corruption, Yusuf attracted followers from among unemployed youths.
- The government repeatedly ignored warnings about the increasingly militant character of the organization, and Yusuf's arrest elevated him to hero status. Stephen Davis, a former Anglican clergyman who has negotiated with Boko Haram many times, blames local Nigerian politicians who support local bandits to make life difficult for their political opponents. In particular, Davis has blamed the former governor of Borno State, Ali Modu Sheriff, who initially supported Boko Haram.
- Boko Haram seeks the establishment of an Islamic state in Nigeria. It opposes the Westernization of Nigerian society and the concentration of the country's wealth among members of a small political elite. The sharia law imposed by local authorities may have promoted links between Boko Haram and political leaders. The group had alleged links to al-Qaeda, but in March 2015, it announced its allegiance to the Islamic State of Iraq and the Levant.
- Boko Haram conducted its operations more or less peacefully during the first seven years of its existence. That changed in 2009, when the Nigerian government launched an investigation into the group's activities following reports that its members were arming themselves. Since then, Boko Haram has been attempting to overthrow the Nigerian government through various militant strategies, including terrorism.
- Boko Haram began to target schools in 2010, killing hundreds of students by 2014. A spokesperson for the group said such attacks would continue as long as the Nigerian government continued to interfere with traditional Islamic education. Boko Haram has also been known to kidnap girls, who it believes should not be educated, and use them as cooks or sex slaves. In 2014, Boko Haram kidnapped 276 female students from the Government Secondary School in the town of Chibok in Borno. As of January 2017, 195 of the 276 girls were still in captivity.
- The Nigerian government's response has revealed the political and military weaknesses of the state apparatus, and as of March 2017, Boko Haram continues its terrorist activities. While human rights abuses committed by Boko Haram are widely known, the conflict has also seen numerous human rights abuses conducted by the Nigerian security forces in their effort to control the violence, as well as their encouragement of the formation of numerous vigilante groups.
Key Terms
- Islamic State of Iraq and the Levant
-
A Salafi jihadist extremist militant group led by and mainly composed of Sunni Arabs from Syria and Iraq. In 2014, the group proclaimed itself a caliphate, with religious, political, and military authority over all Muslims worldwide. As of March 2015, it had control over territory occupied by ten million people in Syria and Iraq and nominal control over small areas of Libya, Nigeria, and Afghanistan. It also operates or has affiliates in other parts of the world, including North Africa and South Asia.
- al-Qaeda
-
A militant Sunni Islamist multi-national organization founded in 1988 by Osama bin Laden, Abdullah Azzam, and several other Arab volunteers who fought against the Soviet invasion of Afghanistan in the 1980s. It has been widely designated as a terrorist group.
- Boko Haram
-
An Islamic extremist group based in northeastern Nigeria, also active in Chad, Niger and northern Cameroon. The group had alleged links to al-Qaeda, but in March 2015, it announced its allegiance to the Islamic State of Iraq and the Levant (ISIL). It was ranked as the world’s deadliest terror group by the Global Terrorism Index in 2015.
- sharia
-
The religious law forming part of the Islamic tradition. It is derived from the religious precepts of Islam, particularly the Quran and the Hadith. In Arabic, the term refers to God's divine law and is contrasted with fiqh, which refers to its scholarly interpretations. Its application in modern times has been a subject of dispute between Muslim traditionalists and reformists.
Origins of Boko Haram
Boko Haram is an Islamic extremist group based in northeastern Nigeria, also active in Chad, Niger, and northern Cameroon.
Mohammed Yusuf founded the sect that became known as Boko Haram in 2002 in Maiduguri, the capital of the northeastern state of Borno. He established a religious complex and school that attracted poor Muslim families from across Nigeria and neighboring countries. The center had the political goal of creating an Islamic state and became a recruiting ground for jihadists. By denouncing the police and state corruption, Yusuf attracted followers from unemployed youths. It has been speculated that Yusuf founded Boko Haram because he saw an opportunity to exploit public outrage at government corruption by linking it to Western influence.
The government repeatedly ignored warnings about the increasingly militant character of the organization. The Council of Ulama advised the government and the Nigerian Television Authority not to broadcast Yusuf’s preaching, but their warnings were ignored. Yusuf’s arrest elevated him to hero status. Borno’s Deputy Governor Alhaji Dibal has reportedly claimed that al-Qaeda had ties with Boko Haram, but broke them when they decided that Yusuf was an unreliable person. Stephen Davis, a former Anglican clergyman who has negotiated with Boko Haram many times, blames local Nigerian politicians who support local bandits to make life difficult for their political opponents. In particular, Davis has blamed the former governor of Borno State, Ali Modu Sheriff, who initially supported Boko Haram but no longer needed them after the 2007 elections, when the group became much more powerful.
Boko Haram seeks the establishment of an Islamic state in Nigeria. It opposes the Westernization of Nigerian society and the concentration of the wealth of the country among members of a small political elite, mainly in the Christian south of the country. Nigeria is Africa’s biggest economy, but 60% of its population of 173 million (2013) live in dire poverty. The sharia law imposed by local authorities, beginning with Zamfara in 2000 and covering 12 northern states by late 2002, may have promoted links between Boko Haram and political leaders.
The group had alleged links to al-Qaeda, but in March 2015 it announced its allegiance to the Islamic State of Iraq and the Levant (ISIL or ISIS); since then it has publicly used the name "ISIL-West Africa Province" or its variants.
Boko Haram Insurgency
Boko Haram conducted its operations more or less peacefully during the first seven years of its existence. That changed in 2009 when the Nigerian government launched an investigation into the group's activities following reports that its members were arming themselves. Prior to that, the government reportedly repeatedly ignored warnings about the increasingly militant character of the organization, including those from a military officer. When the government finally took action, several members of the group were arrested in Bauchi, sparking deadly clashes with Nigerian security forces that led to the deaths of an estimated 700 people. During the conflict with the security forces, Boko Haram fighters reportedly "used fuel-laden motorcycles" and "bows with poison arrows" to attack a police station. The group's founder and then-leader Mohammed Yusuf was killed during this time while still in police custody. After Yusuf's killing, Abubakar Shekau became the leader and held this position until August 2016, when he was succeeded by Abu Musab al-Barnawi, the first surviving son of Mohammed Yusuf. The group suffered a split in 2016, and Shekau and his supporters continued to operate independently.
After the killing of Yusuf, the group carried out its first terrorist attack in Borno in 2010, which resulted in the killing of four people. Since then, the violence has only escalated in terms of both frequency and intensity. In 2010, a Bauchi prison break freed more than 700 Boko Haram militants, replenishing their force. In 2011, a few hours after Goodluck Jonathan was sworn in as President of Nigeria, several bombings purportedly by Boko Haram killed 15 and injured 55. The same year, Boko Haram claimed to have conducted the Abuja police headquarters bombing, the first known suicide attack in Nigeria. Two months later the United Nations building in Abuja was bombed, signifying the first time that Boko Haram attacked an international organization. By early 2012, the group was responsible for over 900 deaths. In 2013, Nigerian government forces launched an offensive in the Borno region in an attempt to dislodge Boko Haram fighters after a state of emergency was called. The offensive was initially successful but eventually failed.
Chibok Schoolgirls Kidnapping
Boko Haram began to target schools in 2010, killing hundreds of students by 2014. A spokesperson for the group said such attacks would continue as long as the Nigerian government continued to interfere with traditional Islamic education. Boko Haram has also been known to kidnap girls, who it believes should not be educated, and use them as cooks or sex slaves.
In 2014, Boko Haram kidnapped 276 female students from the Government Secondary School in the town of Chibok in Borno. Fifty-seven of the schoolgirls managed to escape over the next few months and some have described their capture in appearances at international human rights conferences. A child born to one of the girls and believed by medical personnel to be about 20 months old also was released, according to the Nigerian president's office. Newspaper reports have suggested that Boko Haram was hoping to use the girls as negotiating pawns to exchange for some of its jailed commanders. In 2016, one of the missing girls, Amina Ali, was found. She claimed that the remaining girls were still being held, but that six had died.
As of January 2017, 195 of the 276 girls were still in captivity. Furthermore, thousands of other children have disappeared in the nearby regions. Despite the high-profile campaign #BringBackOurGirls, international efforts to free the kidnapped girls have failed.
Government’s Response
The Nigerian military is, in the words of a former British military attaché speaking in 2014, "a shadow of what it's reputed to have once been." They are short of basic equipment, morale is said to be low, and senior officers are alleged to be skimming military procurement budget funds intended to pay for the standard issue equipment of soldiers. In 2013, the Nigerian military shut down mobile phone coverage in the three northeastern states to disrupt Boko Haram's communications and its ability to detonate improvised explosive devices (IEDs). The shutdown was successful from a military-tactical point of view, but it angered citizens in the region and engendered negative opinions toward the state and the new emergency policies. While citizens and organizations developed various coping and circumventing strategies, Boko Haram evolved from an open network model of insurgency to a closed centralized system, shifting the center of its operations to the Sambisa Forest. This fundamentally changed the dynamics of the conflict.
In mid-2014, Nigeria was estimated to have had the highest number of terrorist killings in the world over the previous year, with 3,477 killed in 146 attacks. The governor of Borno, Kashim Shettima, noted in 2014: "Boko Haram are better armed and are better motivated than our own troops. Given the present state of affairs, it is absolutely impossible for us to defeat Boko Haram."
In 2015, it was reported that Nigeria had employed hundreds of mercenaries from South Africa and the former Soviet Union to assist in making gains against Boko Haram. U.S. efforts to train and share intelligence with regional military forces are credited with helping to push back against Boko Haram, but officials warn that the group remains a grave threat. As of March 2017, Boko Haram continues suicide bombings and other terrorist strategies.
The conflict has also seen numerous human rights abuses conducted by the Nigerian security forces in their effort to control the violence, as well as their encouragement of the formation of numerous vigilante groups. Amnesty International accused the Nigerian government of human rights abuses after 950 suspected Boko Haram militants died in detention facilities run by Nigeria's military Joint Task Force in 2013. Furthermore, the Nigerian government has been accused of incompetence and of supplying misinformation about events in more remote areas.
Human Rights Watch has also reported that Boko Haram uses child soldiers, including 12-year-olds. According to an anonymous source working on peace talks with the group, up to 40 percent of the fighters in the group are underage. The group has forcibly converted non-Muslims to Islam.
38.4.3: Somalia’s Challenges
Somalia has been ravaged by an ongoing civil war, political instability, and recurring droughts and famines. This has made it one of the least developed and most fragile states in the world, where most residents, particularly girls and women, are at constant risk of losing their health or lives.
Learning Objective
Analyze why Somalia is often considered a failed state
Key Points
- Somalia is a country located in the Horn of Africa, with an estimated population of 12.3 million. The Supreme Revolutionary Council seized power in 1969 and established the Somali Democratic Republic. Led by Mohamed Siad Barre, the government collapsed in 1991 as the Somali Civil War broke out. During this period, due to the absence of a central government, Somalia was a failed state, a term that refers to a political body that has disintegrated to a point where basic conditions and responsibilities of a sovereign government no longer function properly.
- The early 2000s saw the creation of fledgling interim federal administrations. The Transitional National Government (TNG) was established in 2000, followed by the formation of the Transitional Federal Government (TFG) in 2004, which reestablished national institutions such as the military. In 2006, the TFG, assisted by Ethiopian troops, assumed control of most of the nation's southern conflict zones from the newly formed Islamic Courts Union (ICU), an Islamist organization that had assumed control of much of the southern part of the country and promptly imposed sharia law.
- Following this defeat, the ICU splintered into several different factions. Some of the more radical elements, including Al-Shabaab, regrouped to continue their insurgency against the TFG and oppose the Ethiopian military's presence in Somalia. By mid-2012, the insurgents had lost most of the territory that they had seized.
- In 2011–2012, a political process providing benchmarks for the establishment of permanent democratic institutions was launched. Within this administrative framework, a new provisional constitution was passed in 2012, reforming Somalia as a federation. Following the end of the TFG's interim mandate, the Federal Government of Somalia, the first permanent central government in the country since the start of the civil war, was formed and a period of reconstruction began.
- By 2014, international stakeholders and analysts had begun to describe Somalia as a fragile state making some progress towards stability. A fragile state is a low-income country characterized by weak state capacity and/or weak state legitimacy, leaving citizens vulnerable to a range of shocks. As the war continues, the country faces a plethora of challenges caused not only by decades of fighting but also by hostile environmental conditions.
- According to the Central Bank of Somalia, about 80% of the population are nomadic or semi-nomadic pastoralists. The UN notes that extreme "inequalities across different social groups" are widening and continue to be "a major driver of conflict." Droughts and resulting famines continue to ravage the country, and Somalia is consistently ranked as one of the worst places in the world for a woman to live.
Key Terms
- Transitional Federal Government
-
The internationally recognized government of the Republic of Somalia until August 2012, when its tenure officially ended and the Federal Government of Somalia was inaugurated. It was established as one of the Transitional Federal Institutions (TFIs) of government as defined in the Transitional Federal Charter (TFC) adopted in 2004 by the Transitional Federal Parliament (TFP).
- Al-Shabaab
-
A Salafist jihadist fundamentalist group based in East Africa. In 2012, it pledged allegiance to the militant Islamist organization Al-Qaeda; in February of that year, some of the group's leaders quarreled with Al-Qaeda over the union and the group quickly lost ground. The group describes itself as waging jihad against "enemies of Islam" and is engaged in combat against the Federal Government of Somalia and the African Union Mission to Somalia (AMISOM).
- Somali Civil War
-
An ongoing civil war taking place in Somalia that grew out of resistance to the Siad Barre regime during the 1980s. By 1988–90, the Somali Armed Forces began engaging various armed rebel groups, including the Somali Salvation Democratic Front in the northeast, the Somali National Movement in the northwest, and the United Somali Congress in the south. The clan-based armed opposition groups eventually managed to overthrow the Barre government in 1991, but the war continues.
- failed state
-
A political body that has disintegrated to a point where basic conditions and responsibilities of a sovereign government no longer function properly. The Fund for Peace notes the following characteristics: loss of control of its territory or of the monopoly on the legitimate use of physical force therein; erosion of legitimate authority to make collective decisions; inability to provide public services; and inability to interact with other states as a full member of the international community.
- Islamic Courts Union
-
A group of sharia courts that united themselves to form a rival administration to the Transitional Federal Government (TFG) of Somalia, with Sharif Sheikh Ahmed as their head. Western media often refer to the group as Somali Islamists.
- fragile state
-
A low-income country characterized by weak state capacity and/or weak state legitimacy, leaving citizens vulnerable to a range of shocks.
Somalia: Background
Somalia is a country located in the Horn of Africa, with an estimated population of 12.3 million. Around 85% of its residents are ethnic Somalis and the majority are Muslim. In antiquity, Somalia was an important commercial center and during the Middle Ages, several powerful Somali empires dominated the regional trade. In the late 19th century, the British and Italian empires gained control of parts of the coast and established the colonies of British Somaliland and Italian Somaliland. Italy acquired full control of the northeastern, central, and southern parts of the area. Italian occupation lasted until 1941, yielding to British military administration. British Somaliland would remain a protectorate, while Italian Somaliland became a United Nations Trusteeship under Italian administration in 1949. In 1960, the two regions united to form the independent Somali Republic under a civilian government.
The Supreme Revolutionary Council seized power in 1969 and established the Somali Democratic Republic. Led by Mohamed Siad Barre, the government collapsed in 1991 as the Somali Civil War broke out. Various armed factions began competing for influence in the power vacuum. During this period, due to the absence of a central government, Somalia was a failed state. The term refers to a political body that has disintegrated to a point where basic conditions and responsibilities of a sovereign government no longer function properly.
Common characteristics of a failed state include a central government so weak or ineffective that it is unable to raise taxes or other support and has little practical control over much of its territory (hence there is a non-provision of public services). As a result, widespread corruption and criminality, the intervention of non-state actors, the involuntary movement of populations, and sharp economic decline can occur. In the 1990s, Somalis returned to customary and religious law in most regions. Some autonomous regions, including the Somaliland and Puntland, emerged.
Path to Central Government
The early 2000s saw the creation of fledgling interim federal administrations. The Transitional National Government (TNG) was established in 2000, followed by the formation of the Transitional Federal Government (TFG) in 2004, which reestablished national institutions such as the military. In 2006, the TFG, assisted by Ethiopian troops, assumed control of most of the nation’s southern conflict zones from the newly formed Islamic Courts Union (ICU), which subsequently splintered into more radical groups.
The ICU was an Islamist organization that had assumed control of much of the southern part of the country and promptly imposed sharia law.
Following this defeat, the ICU splintered into several different factions. Some of the more radical elements, including Al-Shabaab, regrouped to continue their insurgency against the TFG and oppose the Ethiopian military’s presence in Somalia. Throughout 2007 and 2008, Al-Shabaab scored military victories, seizing control of key towns and ports in both central and southern Somalia. By 2009, Al-Shabaab and other militias managed to force the Ethiopian troops to retreat, leaving behind an under-equipped African Union peacekeeping force to assist the TFG’s troops.
Due to a lack of funding and human resources, an arms embargo that made it difficult to reestablish a national security force, and general indifference on the part of the international community, Somali President Abdullahi Yusuf Ahmed found himself obliged to deploy thousands of troops from Puntland to Mogadishu to sustain the battle against insurgent elements in the southern part of the country. In 2008, Ahmed announced his resignation, expressing regret at failing to end the country's 17-year conflict as his government had been mandated to do. He also blamed the international community for its failure to support the government.
By mid-2012, the insurgents lost most of the territory that they had seized. In 2011–2012, a political process providing benchmarks for the establishment of permanent democratic institutions was launched. Within this administrative framework a new provisional constitution was passed in 2012, which reformed Somalia as a federation. Following the end of the TFG’s interim mandate, the Federal Government of Somalia, the first permanent central government in the country since the start of the civil war, was formed and a period of reconstruction began.
Fragile State
By 2014, Somalia was no longer at the top of the fragile states index, dropping to second place behind South Sudan. International stakeholders and analysts have begun to describe Somalia as a fragile state making some progress towards stability.
A fragile state is a low-income country characterized by weak state capacity and/or weak state legitimacy leaving citizens vulnerable to a range of shocks. As the war continues as of March 2017, the country is facing a plethora of challenges caused not only by the decades of fighting, mismanagement, and political chaos, but also by hostile environmental conditions.
Despite the civil war, Somalia has maintained an informal economy based mainly on livestock, remittance/money transfers from abroad, and telecommunications. Due to a dearth of formal government statistics, it is difficult to determine the actual condition of the Somali economy. Unlike the pre-civil war period when most services and the industrial sector were government-run, there has been substantial, albeit unmeasured, private investment in commercial activities. This has been largely financed by the Somali diaspora and includes trade and marketing, money transfer services, transportation, communications, fishery equipment, airlines, telecommunications, education, health, construction, and hotels. Somalia has some of the lowest development indicators in the world. According to the Central Bank of Somalia, about 80% of the population are nomadic or semi-nomadic pastoralists who keep goats, sheep, camels, and cattle. The nomads also gather resins and gums to supplement their income. The UN notes that extreme “inequalities across different social groups” are widening and continue to be “a major driver of conflict.”
Droughts and resulting famines continue to ravage the country. Between mid-2011 and mid-2012, a severe drought affected the entire East Africa region, causing a severe food crisis across Somalia, Djibouti, Ethiopia, and Kenya that threatened the livelihood of 9.5 million people. Many refugees from southern Somalia fled to neighboring Kenya and Ethiopia, where crowded, unsanitary conditions together with severe malnutrition led to a large number of deaths. The food crisis in Somalia primarily affected farmers in the south rather than the northern pastoralists. Human Rights Watch (HRW) consequently noted that most of the displaced persons belonged to the agro-pastoral Rahanweyn clan and the agricultural Bantu ethnic minority group. The United Nations officially declared famine in two regions in the southern part of the country, the first time a famine had been declared in the region by the UN in nearly thirty years. Tens of thousands of people are believed to have died in southern Somalia before famine was declared. This was mainly a result of Western governments preventing aid from reaching affected areas to weaken the Al-Shabaab militant group against whom they were engaged. The food crisis in southern Somalia was no longer at emergency levels by the beginning of 2012.
In 2011, Maryan Qasim, a medical doctor, former minister for women’s development and family affairs, and an adviser in the TFG, wrote a column for The Guardian titled “The women of Somalia are living in hell.” In it, she professed to be “shocked” that Somalia was ranked only the 5th worst place in the world to be a woman, arguing that the country is “the worst [place] in the world” for women. She noted that it is not the war but pregnancy that constitutes the greatest risk to women’s lives: the lack of medical care and infrastructure puts pregnant women at risk of death, and only in Afghanistan is that rate higher. She concluded, “Add to this the constant risk of getting shot or raped, as well as the ubiquitous practice of female genital mutilation (FGM) – something 95% of girls aged 4 to 11 face – make women’s lives in Somalia almost unlivable.”
As of March 2017, a new wave of drought is ravaging Somalia. It has left more than 6 million people, or half the country’s population, facing food shortages, with several water supplies rendered undrinkable by the risk of infection. In February 2017, a senior United Nations humanitarian official in Somalia warned that without massive and urgent humanitarian assistance, famine would strike some of the worst drought-affected areas. He also stated that failure to mount such an immediate response “will cost lives, further destroy livelihoods, and could undermine the pursuit of key state-building and peacebuilding initiatives.” In March 2017, UN Secretary-General António Guterres urged a massive scale-up in international support to avert a famine.
38.4.4: South Africa’s Economic Growth
The South African economy has recorded impressive growth, which in 2011 enabled the country to join the prestigious BRIC group. However, the country continues to struggle with many challenges, including high unemployment, a public health crisis, and one of the highest rates of income inequality in the world.
Learning Objective
Explain why South Africa was added to the BRIC bloc of countries.
Key Points
- BRICS is the acronym for an association of five major emerging national economies: Brazil, Russia, India, China, and South Africa. Originally the first four were grouped as BRIC. The BRICS members are developing or newly industrialized countries, distinguished by their large, sometimes fast-growing economies and significant influence on regional affairs. In 2010, South Africa joined the BRIC grouping after being formally invited by the BRIC countries. The group was renamed BRICS – with the “S” standing for South Africa – to reflect the expanded membership.
- The economy of South Africa is the largest in Africa. South Africa accounts for 24 percent of Africa’s gross domestic product and is ranked as an upper-middle-income economy by the World Bank – one of only four such countries in Africa. Since 1996, at the end of over 12 years of international sanctions, South Africa’s GDP has almost tripled to $400 billion and foreign exchange reserves have increased from $3 billion to nearly $50 billion, creating a diversified economy with a growing and sizable middle class within two decades of establishing democracy and ending apartheid.
- After 1994, government policy brought down inflation, stabilized public finances, and attracted foreign capital. However, economic growth was still subpar until 2004, when it picked up significantly and both employment and capital formation increased. During the presidency of Jacob Zuma, the government has begun to increase the role of state-owned enterprises.
- Unlike most of the world’s formerly poor and now developing countries, South Africa does not have a thriving informal economy; only 15% of South African jobs are in the informal sector. Mining has been the main driving force behind the history and development of Africa’s most advanced economy, and South Africa is one of the world’s leading mining and mineral-processing countries. The agricultural industry provides around 10% of formal employment, relatively low compared to other parts of Africa, and contributes around 2.6% of GDP.
- The manufacturing industry’s contribution to the economy is relatively small, providing just 13.3% of jobs and 15% of GDP. Labor costs are low, but not nearly as low as in most other emerging markets, and the cost of transport, communications, and general living is much higher. Over the last few decades, South Africa and particularly the Cape Town region has established itself as a successful call center and business process outsourcing destination. Tourism also creates a substantial percentage of jobs in the country.
- High levels of unemployment, income inequality, growing public debt, political mismanagement, low levels of education, no reliable access to electricity, and crime are serious problems that have negatively impacted the South African economy. In 2016, the top five challenges to doing business in the country were inefficient government bureaucracy, restrictive labor regulations, a shortage of educated workers, political instability, and corruption. South Africa continues to have a relatively high rate of poverty and is ranked in the top 10 countries in the world for income inequality.
Key Terms
- apartheid
-
A system of institutionalized racial segregation and discrimination in South Africa between 1948 and 1991, when it was abolished. The country’s first multiracial elections under a universal franchise were held three years later in 1994. Broadly speaking, the system was delineated into petty, which entailed the segregation of public facilities and social events, and grand, which dictated housing and employment opportunities by race.
- G-20
-
An international forum for the governments and central bank governors from 20 major economies. It was founded in 1999 with the aim of studying, reviewing, and promoting high-level discussion of policy issues pertaining to the promotion of international financial stability. It seeks to address issues that go beyond the responsibilities of any one organization.
- BRICS
-
The acronym for an association of five major emerging national economies. Its members are leading developing or newly industrialized countries, distinguished by their large, sometimes fast-growing economies and significant influence on regional affairs. All five are G-20 members.
BRICS
BRICS is the acronym for an association of five major emerging national economies: Brazil, Russia, India, China, and South Africa. Originally the first four were grouped as BRIC, before the induction of South Africa in 2010. The BRICS members are leading developing or newly industrialized countries, distinguished by their large, sometimes fast-growing economies and significant influence on regional affairs. All five are G-20 members. Since 2009, the BRICS nations have met annually at formal summits. In 2015, the five BRICS countries represented over 3.6 billion people, or half of the world’s population. All five members are in the top 25 of the world by population and four are in the top 10. The World Bank expects BRICS growth to pick up to 5.3% in 2017. Bilateral relations among BRICS nations have mainly been conducted on the basis of non-interference, equality, and mutual benefit.
In 2010, South Africa began the formal process of admission to join the BRIC grouping, becoming a member at the end of that year and joining officially in 2011 after being formally invited by the BRIC countries. The group was renamed BRICS – with the “S” standing for South Africa – to reflect the expanded membership.
South African Economy in the 21st Century
The economy of South Africa is the largest in Africa. South Africa accounts for 24 percent of Africa’s gross domestic product and is ranked as an upper-middle-income economy by the World Bank – one of only four such countries in Africa. Since 1996, at the end of over 12 years of international sanctions, South Africa’s GDP has almost tripled to $400 billion and foreign exchange reserves have increased from $3 billion to nearly $50 billion, creating a diversified economy with a growing and sizable middle class within two decades of establishing democracy and ending apartheid. The nation is the only African member of the G-20.
After 1994, three years after apartheid was abolished and the year of the first multiracial elections, government policy brought down inflation, stabilized public finances, and attracted some foreign capital. However, economic growth remained subpar until 2004, when it picked up significantly and both employment and capital formation increased. During the presidency of Jacob Zuma (elected in 2009 and reelected in 2014), the government has begun to increase the role of state-owned enterprises. Some of the biggest state-owned companies are Eskom, the electric power monopoly, South African Airways (SAA), and Transnet, the railroad and ports monopoly. Some of these state-owned companies have not been profitable, which has required bailouts totaling 30 billion rand ($2.3 billion) over 20 years.
South Africa has a mixed economy (consisting of a mixture of markets and economic interventionism). Unlike most of the world’s formerly poor and now developing countries, South Africa does not have a thriving informal economy. Only 15% of South African jobs are in the informal sector, compared with around half in Brazil and India and nearly three-quarters in Indonesia. The OECD attributes this difference to South Africa’s widespread welfare system.
Mining has been the main driving force behind the history and development of Africa’s most advanced economy. Large-scale and profitable mining started with the discovery of a diamond in 1867 and in the 21st century, South Africa is one of the world’s leading mining and mineral-processing countries. Although mining’s contribution to the national GDP has fallen from 21% in 1970 to 6% in 2011, it still represents almost 60% of exports. The mining sector has a mix of privately owned and state-controlled mines.
The agricultural industry contributes around 10% of formal employment, relatively low compared to other parts of Africa, contributing around 2.6% of GDP. Due to the aridity of the land, only 13.5% can be used for crop production and only 3% is considered high potential land. The sector continues to face problems, with increased foreign competition and crime being two of the major challenges. The government has been accused of either putting in too much effort or not enough effort to tackle the problem of farm attacks as opposed to other forms of violent crime.
The manufacturing industry’s contribution to the economy is relatively small, providing just 13.3% of jobs and 15% of GDP. Labor costs are low, but not nearly as low as in most other emerging markets, and the cost of transport, communications, and general living is much higher. The South African automotive industry accounts for about 10% of South Africa’s manufacturing exports, contributing 7.5% to the country’s GDP. BMW, Ford, Volkswagen, Daimler-Chrysler, General Motors, Nissan, and Toyota all have production plants in South Africa. There are also about 200 automotive component manufacturers in South Africa and more than 150 others that supply the industry on a non-exclusive basis.
The domestic telecommunications infrastructure provides modern and efficient service to urban areas, including cellular and internet services. Over the last few decades, South Africa and particularly the Cape Town region has established itself as a successful call center and business process outsourcing destination. With a highly talented pool of productive labor and with Cape Town sharing cultural affinity with Britain, large overseas firms such as Lufthansa, Amazon.com, ASDA, the Carphone Warehouse, Delta Airlines, and many more have established inbound call centers within Cape Town.
South Africa is also a popular tourist destination. According to the World Travel & Tourism Council, travel and tourism support around 10% of jobs in the country.
Challenges
South Africa has an extremely high and persistent unemployment rate of over 25%, which interacts with other economic and social problems such as inadequate education, poor health outcomes, and crime. The poor have limited access to economic opportunities and basic services. The official unemployment rate, although very high by international standards, still understates the magnitude of the problem because it includes only adults who are actively looking for work, excluding those who have given up looking for jobs. Only 41% of the population of working age has any kind of job (formal or informal). This rate is 30% lower than that of China and about 25% lower than that of Brazil or Indonesia.
There has been substantial human capital flight from South Africa in recent years. South Africa’s Bureau of Statistics estimates that between 1 million and 1.6 million people in skilled, professional, and managerial occupations have emigrated since 1994 and that for every emigrant, 10 unskilled people lose their jobs. Among the reasons cited for wishing to leave the country were declining quality of life and high levels of crime. Furthermore, the government’s affirmative action policy was identified as a factor influencing the emigration of skilled white South Africans. The results of a 1998 survey indicate that skilled white South Africans are strongly opposed to this policy and the arguments advanced in support of it.
Refugees and immigrants from poorer neighboring countries, including the Democratic Republic of the Congo, Mozambique, Zimbabwe, Malawi, and others, represent a large portion of the informal sector. With high unemployment levels among poorer South Africans, xenophobia is prevalent and many South Africans feel resentful of immigrants who are seen as depriving the native population of jobs. Although many South African employers have employed migrants from other countries for lower pay than South African citizens, especially in the construction, tourism, agriculture, and domestic service industries, many immigrants continue to live in poor conditions.
According to a 2015 UNAIDS Report, South Africa has an estimated 7 million people living with HIV, more than any other country in the world. A 2008 study revealed that HIV/AIDS infection in South Africa is distinctly divided along racial lines: 13.6% of black South Africans are HIV-positive, whereas only 0.3% of white South Africans have the disease. Most casualties have been economically active individuals, resulting in AIDS orphans who in many cases depend on the state for care and financial support. It is estimated that there are 1,200,000 orphans in South Africa.
High levels of unemployment, income inequality, growing public debt, political mismanagement, low levels of education, no reliable access to electricity, and crime are serious problems that have negatively impacted the South African economy. In 2016, the top five challenges to doing business in the country were inefficient government bureaucracy, restrictive labor regulations, a shortage of educated workers, political instability, and corruption, while the country’s strong banking sector was rated as a strongly positive feature of the economy. South Africa continues to have a relatively high rate of poverty and is ranked in the top 10 countries in the world for income inequality.
38.4.5: Health Crises
Health crises in Africa have stemmed from outbreaks of deadly diseases such as HIV/AIDS, malaria, and Ebola, but have also been caused and intensified by poverty, malnutrition, ongoing civil wars, and environmental disasters linked to famines.
Learning Objective
Analyze some of the health crises that have ravaged Africa
Key Points
- A public health crisis is a difficult situation or complex health system that affects humans in one or more geographic areas, from a particular locality to the entire planet. Health crises generally have significant impacts on community health, loss of life, and the economy. They may result from disease, industrial processes, environmental disasters, or poor policy. Africa continues to be ravaged by multiple health crises.
- HIV/AIDS is a major public health concern in many parts of Africa. Although the continent is home to about 15% of the world’s population, over 67% of the infected in 2015, more than 25.5 million individuals, were Africans. Out of this number, nearly 19 million lived in eastern and southern Africa, while 6.5 million lived in western and central Africa. High-risk behavior patterns have been cited as being largely responsible for the significantly greater spread of HIV/AIDS in sub-Saharan Africa than in other parts of the world. In 2015, the UN reported that the leading cause of death among HIV-positive persons is tuberculosis.
- In 2015, there were 296 million cases of malaria worldwide resulting in an estimated 731,000 deaths. Approximately 90% of both cases and deaths occurred in Africa, where malaria is estimated to result in losses of US$12 billion a year due to increased healthcare costs, lost ability to work, and negative effects on tourism. Although malaria is presently endemic not only in sub-Saharan Africa but also in a broad band around the equator, which includes many parts of the Americas and Asia, 85–90% of malaria fatalities occur in sub-Saharan Africa.
- The West African Ebola virus epidemic (2013–2016) was the most widespread outbreak of Ebola virus disease (EVD) in history—causing major loss of life and socioeconomic disruption in the region, mainly in Guinea, Liberia, and Sierra Leone, with minor outbreaks occurring elsewhere. It caused significant mortality, with the fatality rate reported at slightly above 70%, although the rate among hospitalized patients was 57–59%.
- A number of regions in Africa have experienced environmental disasters that led to food crises and famines. A major culprit on the continent is drought, which in combination with ongoing civil wars may produce disastrous results. An acute shortage of food or famine affected Niger (2006), the countries in the Horn of Africa (Somalia, Djibouti, and Ethiopia) as well as northeastern Kenya (2006), Africa’s Sahel region and many parts of the neighboring Senegal River Area (2010), and the entire East Africa region (2011–2012). As of March 2017, Somalia and South Sudan are experiencing severe droughts and experts estimate famines will affect millions of people in both regions.
- In addition to recurrent food crises, malnutrition and poverty are endemic problems that affect the health of massive numbers of Africans across the continent, with particularly tragic impact on children and women. Children’s health and maternal health indicators are particularly alarming in sub-Saharan Africa.
Key Term
- health crisis
-
A difficult situation or complex health system that affects humans in one or more geographic areas, from a particular locality to the entire planet. It generally has significant impacts on community health, loss of life, and the economy. It may result from disease, industrial processes, environmental disasters, or poor policy.
Health Crises in Africa
A public health crisis is a difficult situation or complex health system that affects humans in one or more geographic areas, from a particular locality to the entire planet. Health crises generally have significant impacts on community health, loss of life, and the economy. They may result from disease, industrial processes, environmental disasters, or poor policy.
Africa continues to be ravaged by multiple health crises, some of which are systemic (e.g., consistently high rates of maternal and infant deaths caused by lack of proper health care or poor nutrition), recurrent (e.g., famines caused by droughts), or caused by an outbreak of a particular disease (e.g., malaria, Ebola, HIV/AIDS).
HIV/AIDS
HIV/AIDS is a major public health concern and cause of death in many parts of Africa. Although the continent is home to about 15% of the world’s population, over 67% of the infected, more than 25.5 million individuals, were Africans according to data collected by the World Health Organization (WHO) and UNAIDS in 2015. Out of this number, nearly 19 million lived in eastern and southern Africa, while 6.5 million lived in western and central Africa (North Africa, grouped with the Middle East, recorded 230,000 infected). In the most affected countries of sub-Saharan Africa, AIDS has raised death rates and lowered life expectancy among adults between the ages of 20 and 49 by about 20 years. In fact, the life expectancy in many parts of Africa is declining, largely as a result of the HIV/AIDS epidemic, with life expectancy in some countries as low as 34 years.
High-risk behavioral patterns have been cited as largely responsible for the significantly greater spread of HIV/AIDS in sub-Saharan Africa than in other parts of the world. Chief among these are traditionally liberal attitudes espoused by many communities toward multiple sexual partners and sexual activity before or outside marriage. HIV transmission is most likely in the first few weeks after infection and the risk is increased when people have more than one sexual partner in the same time period. Within the cultures of sub-Saharan Africa, it is relatively common for both men and women to have concurrent sexual relations with more than one person. In addition, lack of AIDS-awareness education provided to youth and no access to proper health care contribute to the high rates of infections and deaths. Even if medical facilities are available, patents on many drugs have hindered the ability to make low-cost alternatives.
In 2015, the UN reported that the leading cause of death among HIV-positive persons is tuberculosis; the synergy between HIV and tuberculosis, termed a co-epidemic, is deadly. One in three HIV-infected individuals dies of tuberculosis. “Tuberculosis and HIV co-infections are associated with special diagnostic and therapeutic challenges and constitute an immense burden on healthcare systems of heavily infected countries.” In many countries without adequate resources, the tuberculosis case rate has increased five- to ten-fold since the identification of HIV. Without proper treatment, an estimated 90% of persons living with HIV die within months after contracting tuberculosis.
Malaria
Malaria is a mosquito-borne infectious disease affecting humans and other animals.
The disease is widespread in the tropical and subtropical regions that exist in a broad band around the equator, including much of sub-Saharan Africa, Asia, and Latin America. In 2015, there were 296 million cases of malaria worldwide resulting in an estimated 731,000 deaths. Approximately 90% of deaths occurred in Africa, where malaria is estimated to result in losses of US$12 billion a year due to increased healthcare costs, lost ability to work, and negative effects on tourism.
The WHO estimates that in 2015 there were 214 million new cases of malaria resulting in 438,000 deaths. Others have estimated the number of cases at between 350 and 550 million. The majority (65%) occur in children younger than age 15. In sub-Saharan Africa, maternal malaria is associated with up to 200,000 estimated infant deaths yearly. Efforts at decreasing the disease in Africa since the turn of the millennium have been partially effective, with rates dropping by an estimated 40% on the continent. Although malaria is presently endemic not only in sub-Saharan Africa but also in a broad band around the equator, which includes many parts of the Americas and Asia, 85–90% of malaria fatalities occur in sub-Saharan Africa.
Ebola
The West African Ebola virus epidemic (2013–2016) was the most widespread outbreak of Ebola virus disease (EVD) in history—causing major loss of life and socioeconomic disruption in the region, mainly in Guinea, Liberia, and Sierra Leone, with minor outbreaks occurring elsewhere. It caused significant mortality, with a fatality rate reported at slightly above 70% although the rate among hospitalized patients was 57–59%. Small outbreaks occurred in Nigeria and Mali and isolated cases were recorded in Senegal. The number of cases peaked in October 2014 and then began to decline gradually following the commitment of substantial international resources. In March 2016, the WHO terminated the Public Health Emergency of International Concern status of the outbreak. Subsequent flare-ups occurred.
The outbreak left about 17,000 survivors of the disease, many of whom report symptoms termed post-Ebola syndrome, often severe enough to require medical care for months or even years. An additional cause for concern is the apparent ability of the virus to “hide” in a recovered survivor’s body for an extended period of time and then become active months or years later, either in the same individual or in a sexual partner. In December 2016, the WHO announced that a two-year trial of the rVSV-ZEBOV vaccine appeared to offer protection from the strain of Ebola responsible for the West Africa outbreak. The vaccine has not yet had regulatory approval, but it is considered so effective that 300,000 doses have already been stockpiled.
Food Shortages
Since 2000, a number of regions in Africa have experienced environmental disasters that led to food crises and famines. For example, a severe localized food security crisis occurred in some regions of Niger from 2005 to 2006. It was caused by an early end to the 2004 rains, desert locust damage to some pasture lands, high food prices, and chronic poverty. In the affected area, 2.4 million of 3.6 million people were considered highly vulnerable to food insecurity. An international assessment stated that of these, over 800,000 faced extreme food insecurity and another 800,000 in moderately insecure food situations were in need of aid.
The crisis had long been predicted after swarms of locusts consumed nearly all crops in parts of Niger during the 2004 agricultural season. In other areas, insufficient rainfall resulted in exceptionally poor harvests and dry pastures, affecting both farmers and livestock breeders. The fertility rate in Niger is the highest in the world at 7.6 children per woman. The consequence is that the population of Niger is projected to increase tenfold over the 21st century, to more than 200 million people in 2100. Experts predict population growth-induced famines in the 21st century because agricultural production cannot keep up with population growth.
A major culprit of food shortages on the continent is drought, which in combination with ongoing civil wars may produce disastrous results.
In 2006, an acute shortage of food affected the countries in the Horn of Africa (Somalia, Djibouti, and Ethiopia) as well as northeastern Kenya. The United Nations’ Food and Agriculture Organization (FAO) estimated that more than 11 million people in these countries were affected by widespread famine, largely attributed to a severe drought and exacerbated by military conflicts in the region. A large-scale, drought-induced famine occurred in Africa’s Sahel region and many parts of the neighboring Senegal River Area from February to August 2010. It is one of many famines to hit the region in recent times. The Sahel is the ecoclimatic and biogeographic zone of transition between the Sahara desert in the north of Africa and the Sudanian savannas in the south. Similarly, between July 2011 and mid-2012, a severe drought affected the entire East Africa region. Said to be “the worst in 60 years,” the drought caused a severe food crisis across Somalia, Djibouti, Ethiopia, and Kenya that threatened the livelihood of 9.5 million people. Many refugees from southern Somalia fled to neighboring Kenya and Ethiopia, where crowded, unsanitary conditions and severe malnutrition led to a large number of deaths. Other countries in East Africa, including Sudan, South Sudan, and parts of Uganda, were also affected by a food crisis. As of March 2017, Somalia and South Sudan are experiencing severe droughts and experts estimate famines will affect millions of people in both regions.
In addition to recurrent food crises, malnutrition is an endemic problem that affects massive numbers of Africans across the continent, with a particularly tragic impact on children.
Globally, more than one third of under-5 deaths are attributable to malnourishment. In Africa, some progress has been registered over the decades but the situation in some regions remains dire. Sub-Saharan Africa accounted for 3,370,000 deaths of children under 5 in 2011 (WHO, 2012), which corresponds to 9,000 children dying every day and six children dying every minute. Out of 3 million neonatal deaths worldwide, approximately 1.1 million are found in sub-Saharan Africa (WHO, 2012).
Given that vitamin A is critical for proper functioning of the visual system and for maintaining immune defenses, its deficiency remains a public health issue. In Africa, vitamin A deficiency contributes to 23% of child deaths. In 2009, the prevalence of low serum retinol, associated with vitamin A deficiency, was 37.7% in Ethiopia, 49% in the Congo, and 42% in Madagascar. The immediate causes of this deficiency are the low rates of consumption of animal products, the poor bioavailability of vitamin A in cereal-based diets, the consumption of green leaves with low lipid content, and an increased bodily demand for vitamin A owing to the infections that frequently affect African children.
There are equally disturbing levels of zinc deficiency, which has seriously adverse effects on growth, the risk and severity of infections, and the level of immune function. Although the actual prevalence is unclear, zinc deficiency is recognized as one of the main risk factors for morbidity and mortality. It contributes to over 450,000 deaths per year among children under 5 years, particularly in sub-Saharan Africa. It affected 57% of children under 5 in Senegal, 72% in Burkina Faso, and 41.5% in Nigeria in 2004. The main causes of this deficiency in children are a lack of zinc-rich, easily absorbed foods (such as meat, poultry, and seafood) and the over-consumption of foods that inhibit zinc absorption, such as cereals, roots, and tubers, which are among Africa’s staples.
Anemia is quite prevalent in Africa, especially among young children, due mainly to a diet that is low in animal-based foods. In 2006, about 67.6% of children under 5, some 83.5 million children overall, were anemic. Through its effects on metabolic processes, iron deficiency retards growth and development. It impairs the immune response and increases susceptibility to infection, delays motor development, and diminishes concentration (impairing cognitive and behavioral capacities). It therefore prevents 40–60% of African children from attaining their full mental capacities. Moreover, of the 26 health risks reported by the WHO Global Burden of Disease project, iron deficiency is ranked ninth in terms of years of life lost.
Maternal Health
According to a UN report,
A woman’s chance of dying or becoming disabled during pregnancy and childbirth is closely connected to her social and economic status, the norms and values of her culture, and the geographic remoteness of her home. Generally speaking, the poorer and more marginalized a woman is, the greater her risk of death. In fact, maternal mortality rates reflect disparities between wealthy and poor countries more than any other measure of health. A woman’s lifetime risk of dying as a result of pregnancy or childbirth is 1 in 39 in Sub-Saharan Africa, as compared to 1 in 4,700 in industrialized countries.
The risk for maternal death (during pregnancy or childbirth) in sub-Saharan Africa is 175 times higher than in developed countries and risk for pregnancy-related illnesses and negative consequences after birth is even higher. Poverty, maternal health, and outcomes for the child are all interconnected. Poverty is detrimental to the health of both mother and child.
38.5: The Americas in the 21st Century
38.5.1: Brazil’s Economic Success and Corruption Woes
Brazil, a member of the BRICS group, had one of the world’s fastest growing major economies until 2010, with its economic reforms giving the country new international recognition and influence.
Learning Objective
Connect the allegations of widespread corruption to Brazil’s economic growth in the 2000s
Key Points
- In 1985, Brazil entered its contemporary era, ushering in a period of re-democratization after decades of dictatorships and military rule.
- The adoption of Brazil’s current Constitution in 1988 completed the process of re-establishment of the democratic institutions. Since then, six presidential terms have elapsed, without rupture to the constitutional order.
- In 2002, Luiz Inácio Lula da Silva of the PT (Workers’ Party) won the presidency with more than 60% of the national vote.
- His presidency was characterized by major economic growth, increased international prestige, and numerous corruption scandals.
- Despite these scandals, Lula’s popularity rose to a record of 80%, the highest for a Brazilian president since the end of the military regime.
- One major mark of Lula’s second term was his effort to expand Brazil’s political influence worldwide, including through Brazil’s membership in the G20, a global discussion forum of major powers.
- On October 31, 2010, Dilma Rousseff, also from the Worker’s Party, was the first woman elected President of Brazil, with her term beginning on January 1, 2011.
- Sparked by indignation and frustrations accumulated over decades (against corruption, police brutality, and the inefficiencies of the political establishment and public services), numerous peaceful protests erupted in Brazil from the middle of Rousseff’s first term.
- Amidst political and economic crises, evidence that politicians from all main political parties were involved in several bribery and tax evasion schemes, and large street protests for and against her, Rousseff was impeached by the Brazilian Congress in 2016.
- She was succeeded by Vice President Michel Temer.
Key Terms
- Luiz Inácio Lula da Silva
-
A Brazilian politician who served as President of Brazil from January 1, 2003 to January 1, 2011. He is a founding member of the Workers’ Party (PT – Partido dos Trabalhadores). He is often regarded as one of the most popular politicians in the history of Brazil and, at the time of his mandate, one of the most popular in the world. Social programs like Bolsa Família and Fome Zero are hallmarks of his time in office. He played a prominent role in recent international relations developments, including the nuclear program of Iran and global warming, and was described as “a man with audacious ambitions to alter the balance of power among nations.”
- Dilma Rousseff
-
A Brazilian economist and politician who was the 36th President of Brazil from 2011 until her impeachment and removal from office on August 31, 2016. She is the first woman to have held the Brazilian presidency and previously served as Chief of Staff to President Luiz Inácio Lula da Silva from 2005 to 2010.
- G20
-
An international forum for the governments and central bank governors from 20 major economies. It was founded in 1999 with the aim of studying, reviewing, and promoting high-level discussion of policy issues pertaining to the promotion of international financial stability. It seeks to address issues that go beyond the responsibilities of any one organization. The participating heads of government or heads of state have periodically conferred at summits since their initial meeting in 2008, and the group also hosts separate meetings of finance ministers and central bank governors.
Background: Re-Democratization of Brazil
Leading up to the 21st century, Brazil saw a return to democratic rule after a period of dictatorship during the Vargas Era (1930–1934 and 1937–1945) and a period of military rule (1964–1985) under Brazilian military government. In January 1985 the process of negotiated transition towards democracy reached its climax with the election of Tancredo Neves of the PMDB party (the party that had always opposed the military regime) as the first civilian president since 1964. He died before being sworn in, and the elected vice president, José Sarney, was sworn in as president in his place.
In 1986 the Sarney government fulfilled Tancredo’s promise of passing in Congress a Constitutional Amendment to the Constitution inherited from the military period, summoning elections for a National Constituent Assembly to draft and adopt a new Constitution for the country. The Constituent Assembly began deliberations in February 1987 and concluded its work on October 5, 1988.
The adoption of Brazil’s current Constitution in 1988 completed the process of re-establishment of the democratic institutions. The new Constitution replaced the authoritarian legislation that still remained in place and that had been inherited from the days of the military regime.
In 1989 the first elections for president by direct popular ballot since the military coup of 1964 were held under the new Constitution, and Fernando Collor was elected. Collor was inaugurated on March 15, 1990. With the inauguration of the first president elected under the 1988 Constitution, the last step in the long process of democratization took place, and the phase of transition was finally over.
Since then, six presidential terms have elapsed without rupture to the constitutional order: the first term corresponded to the Collor and Franco administrations (Collor was impeached on charges of corruption in 1992 and resigned the presidency; he was succeeded by Franco, his vice president); the second and third terms corresponded to Fernando Henrique Cardoso’s administration; the fourth and fifth presidential terms corresponded to Luiz Inácio Lula da Silva’s administration; and the sixth term corresponded to Dilma Rousseff’s first administration. In 2015, Rousseff began another term in office, due to end in 2018, but was impeached in 2016 for violations of budgetary and fiscal responsibility norms. She was succeeded by Vice President Michel Temer.
Lula Administration
In 2002, Luiz Inácio Lula da Silva of the PT (Workers’ Party) won the presidency with more than 60% of the national vote. In the first months of his mandate, inflation rose perilously, reflecting the markets’ uncertainty about the government’s monetary policy. However, the markets’ confidence in the government was promptly regained as Lula chose to maintain his predecessor’s policies, particularly the continuation of the Central Bank’s task of keeping inflation down. After that, the country underwent considerable economic growth and employment expansion. On the other hand, Lula’s mainstream economic policies disappointed his most radical leftist allies, which led to a breakaway from the PT (Workers’ Party) that resulted in the creation of the PSOL (Socialism and Liberty Party).
Several corruption scandals occurred during Lula’s presidency. In 2005, Roberto Jefferson, chairman of the Brazilian Labour Party (PTB), was implicated in a bribery case. As a Parliamentary Commission of Inquiry was set up, Jefferson testified that members of parliament were being paid monthly stipends to vote for government-backed legislation. Later, in August of the same year, after further investigation, campaign manager Duda Mendonça admitted that he had used illegal undeclared money to finance the PT electoral victory of 2002. The money in both cases was found to have originated from private sources as well as from the advertising budget of state-owned enterprises headed by political appointees, both laundered through Duda Mendonça’s advertising agency. The collection of these incidents was dubbed the Mensalão scandal. On August 24, 2007, the Brazilian Supreme Court (Supremo Tribunal Federal) accepted the indictments of 40 individuals relating to the Mensalão scandal, most of whom were former or current federal deputies, and all of whom were still allies of the Brazilian president.
The loss of support resulting from these scandals was outweighed by the president’s popularity among the voters of the lower classes, whose income per capita rose as a consequence of higher employment, the expansion of domestic credit to consumers, and government social welfare programs. The country also enjoyed a stable and solid economic situation that Brazil had not experienced in the previous 20 years, with fast growth in production for both domestic consumption and export as well as a modest but noticeable decrease in social inequality, which may partially explain the high popularity of Lula’s administration even after several corruption scandals involving important politicians connected to Lula and to the PT. Hence Lula’s re-election in 2006: after almost winning in the first round, Lula won the run-off against Geraldo Alckmin of the PSDB (Brazilian Social Democracy Party) by a 20-million-vote margin.
Following Lula’s second victory, his approval ratings started to rise again (fueled by the continuity of the economic and social achievements obtained during the first term) to a record of 80%, the highest for a Brazilian president since the end of the military regime. The focus of Lula’s second term was to further stimulate the economy by investments in infrastructure and measures to keep expanding the domestic credit to producers, industry, commerce, and consumers alike.
Another mark of Lula’s second term was his effort to expand Brazil’s political influence worldwide, especially after the G20 (in which Brazil and other emerging economies participate) replaced the G8 as the main world forum for discussion. Lula was an active defender of reform of the United Nations Security Council, as Brazil is one of the four nations (the others being Germany, India, and Japan) officially seeking a permanent seat in the council. Lula also helped orchestrate Brazil’s membership in BRICS, the association of five major emerging national economies: Brazil, Russia, India, China, and South Africa.
Lula was also known for presenting himself as a friendly, conciliatory peacemaker among heads of state, managing to befriend leaders of rival countries ranging from U.S. Presidents George W. Bush and Barack Obama to Venezuelan leader Hugo Chávez, former Cuban president Fidel Castro, Bolivian President Evo Morales, and Iranian President Mahmoud Ahmadinejad; his ties to Ahmadinejad fueled protests inside and outside the country because of Ahmadinejad’s polemical anti-Semitic statements. Lula took part in a deal with the governments of Turkey and Iran regarding Iran’s nuclear program despite the desire of the United States, which feared the possibility of Iran developing nuclear weapons, to strengthen sanctions against the country.
Rousseff Administration
On October 31, 2010, Dilma Rousseff, also from the Worker’s Party, was the first woman elected President of Brazil, with her term beginning on January 1, 2011. In her winning speech, Rousseff, who was also a key member in Lula’s administration, made clear that her mission during her term would be to keep enforcing her predecessor’s policies to mitigate poverty and ensure Brazil’s current economic growth.
In June 2011, Rousseff announced a program called “Brasil Sem Miséria” (Brazil Without Poverty), which set the ambitious goal of drastically reducing absolute poverty by the end of her term; at the time, absolute poverty afflicted 16 million people in the country, or a little less than a tenth of the population. The program involves broadening the reach of the Bolsa Família social welfare program while creating new job opportunities and establishing professional certification programs. In 2012, another program called “Brasil Carinhoso” (Caring Brazil) was launched with the objective of providing extra care to all children in the country who live below the poverty threshold.
Although the local and international press criticized the lower-than-expected economic results of her first term and the measures taken to address them, Rousseff’s approval ratings reached levels higher than those of any other president since the end of the military regime. That changed when a wave of protests struck the country in mid-2013, reflecting popular dissatisfaction with transport, healthcare, and education policies, among other issues, and damaging the popularity not only of the president but also of several governors and mayors of key areas of the country.
In 2014, Rousseff won a second term by a narrow margin but was unable to prevent her popularity from falling. In June 2015, her approval dropped to less than 10% after another wave of protests, this time organized by an opposition that wanted her ousted from power, amid revelations that numerous politicians, including those from her party, were being investigated for accepting bribes from the state-owned energy company Petrobras between 2003 and 2010, during which time she was on the company’s board of directors. In 2015, an impeachment process was opened against Rousseff that culminated in her temporary removal from power on May 12, 2016, with Vice President Michel Temer assuming power until the final trial concluded on August 31, 2016, when Rousseff was officially impeached and Temer was sworn in as president for the remainder of the term. During the impeachment process, Brazil hosted the 2016 Summer Olympics.
38.5.2: Venezuela and Chavismo
Under the presidency of Hugo Chávez from 1999 to 2013, Venezuela saw sweeping and radical shifts in social policy, as the government moved away from its official embrace of a free market economy and neoliberal reform principles and towards socialist income redistribution and social welfare programs.
Learning Objective
Summarize the defining characteristics of Chavismo
Key Points
- With many Venezuelans tired of politics in the country, the 1998 elections had the lowest voter turnout in Venezuelan history, with Hugo Chávez winning the presidency on December 6, 1998 with 56.4% of the popular vote.
- Following the adoption of a new constitution in 1999, Chávez focused on enacting social reforms as part of his “Bolivarian Revolution.”
- Using record-high oil revenues of the 2000s, his government nationalized key industries, created participatory democratic Communal Councils, and implemented social programs known as the Bolivarian Missions to expand access to food, housing, healthcare, and education.
- Venezuela received high oil profits in the mid-2000s and there were improvements in areas such as poverty, literacy, income equality, and quality of life occurring primarily between 2003 and 2007.
- At the end of Chávez’s presidency in the early 2010s, economic actions performed by his government during the preceding decade such as deficit spending and price controls proved to be unsustainable, with Venezuela’s economy faltering while poverty, inflation, and supply shortages in Venezuela increased.
- Chávez died of cancer on March 5, 2013 at the age of 58, and was succeeded by Nicolás Maduro (initially as interim president, before he narrowly won the 2013 presidential election).
- Maduro continued many of the policies of Chávez, leading to hundreds of thousands of Venezuelans protesting over high levels of criminal violence, corruption, hyperinflation, and chronic scarcity of basic goods due to policies of the federal government.
Key Terms
- Hugo Chávez
-
A Venezuelan politician who served as the 64th President of Venezuela from 1999 to 2013. He was also leader of the Fifth Republic Movement from its foundation in 1997 until 2007, when it merged with several other parties to form the United Socialist Party of Venezuela (PSUV), which he led until 2012. Following the adoption of a new constitution in 1999, he focused on enacting social reforms as part of the Bolivarian Revolution, which is a type of socialist revolution. Using record-high oil revenues of the 2000s, his government nationalized key industries, created participatory democratic Communal Councils, and implemented social programs known as the Bolivarian Missions to expand access to food, housing, healthcare, and education.
- Nicolás Maduro
-
A Venezuelan politician who has been the 65th President of Venezuela since 2013. Previously he served under President Hugo Chávez as Minister of Foreign Affairs from 2006 to 2013 and as Vice President of Venezuela from 2012 to 2013.
- Chavismo
-
A left-wing political ideology that is based on the ideas, programs, and government style associated with the former president of Venezuela, Hugo Chávez. It combines elements of socialism, left-wing populism, patriotism, internationalism, bolivarianism, post-democracy, feminism, green politics, and Caribbean and Latin American integration.
Bolivarian Revolution in Venezuela
The Bolivarian Revolution is a leftist social movement and political process in Venezuela that was led by late Venezuelan president Hugo Chávez, the founder of the Fifth Republic Movement and later the United Socialist Party of Venezuela. The “Bolivarian Revolution” is named after Simón Bolívar, an early 19th-century Venezuelan and Latin American revolutionary leader, prominent in the Spanish American wars of independence in achieving the independence of most of northern South America from Spanish rule. According to Chávez and other supporters, the “Bolivarian Revolution” seeks to build a mass movement to implement Bolivarianism, popular democracy, economic independence, equitable distribution of revenues, and an end to political corruption in Venezuela. They interpret Bolívar’s ideas from a socialist perspective.
Hugo Chávez
Hugo Chávez, a former paratroop lieutenant-colonel who led an unsuccessful coup d’état in 1992, was elected President in December 1998 on a platform that called for the creation of a “Fifth Republic,” a new constitution, a new name (“the Bolivarian Republic of Venezuela”), and a new set of relations between socioeconomic classes. In 1999, voters approved a referendum on a new constitution and in 2000, re-elected Chávez, also placing many members of his Fifth Republic Movement party in the National Assembly. Supporters of Chávez called the process symbolized by him the Bolivarian Revolution and were organized into different government-funded groups, including the Bolivarian Circles. Chávez’s first few months in office were dedicated primarily to constitutional reform, while his secondary focus was on immediately allocating more government funds to new social programs.
However, as a recession triggered by historically low oil prices and soaring international interest rates rocked Venezuela, the shrunken federal treasury provided very little of the resources Chávez required for his promised massive populist programs. The economy, which was still staggering, shrank by 10%, and the unemployment rate increased to 20%, the highest level since the 1980s.
Chávez sharply diverged from previous administrations’ economic policies, terminating their practice of extensively privatizing Venezuela’s state-owned holdings, such as the national social security system, holdings in the aluminum industry, and the oil sector. Chávez worked to reduce Venezuelan oil extraction in the hopes of garnering elevated oil prices and, at least theoretically, elevated total oil revenues, thereby boosting Venezuela’s severely deflated foreign exchange reserves. He extensively lobbied other OPEC (Organization of the Petroleum Exporting Countries) countries to cut their production rates as well. As a result of these actions, Chávez became known as a “price hawk” in his dealings with the oil industry and OPEC. Chávez also attempted a comprehensive renegotiation of 60-year-old royalty payment agreements with oil giants Phillips Petroleum and ExxonMobil. These agreements had allowed the corporations to pay as little as 1% in taxes on the tens of billions of dollars in revenues they were earning from their extraction of Venezuelan oil. Afterwards, Chávez stated his intention to complete the nationalization of Venezuela’s oil resources.
In April 2002, Chávez was briefly ousted from power in the 2002 Venezuelan coup d’état attempt following popular demonstrations by his opponents, but he was returned to power after two days as a result of demonstrations by poor Chávez supporters in Caracas, as well as actions by the military.
Chávez also remained in power after an all-out national strike that lasted from December 2002 to February 2003, including a strike/lockout in the state oil company PDVSA. The strike produced severe economic dislocation, with the country’s GDP falling 27% during the first four months of 2003, and costing the oil industry $13.3 billion. In the subsequent decade, the government was forced into several currency devaluations. These devaluations have done little to improve the situation of the Venezuelan people who rely on imported products or locally produced products that depend on imported inputs, while dollar-denominated oil sales account for the vast majority of Venezuela’s exports. The profits of the oil industry have been lost to “social engineering” and corruption, instead of investments needed to maintain oil production.
Chávez survived several further political tests, including an August 2004 recall referendum. He was elected for another term in December 2006 and re-elected for a third term in October 2012. However, he was never sworn in for his third term due to medical complications. Chávez died on March 5, 2013 after a nearly two-year fight with cancer. The presidential election that took place on April 14, 2013, was the first since Chávez took office in 1999 in which his name did not appear on the ballot.
Chávez’s ideas, programs, and style form the basis of “Chavismo,” a political ideology closely associated with Bolivarianism and socialism of the 21st century, which continued but declined after his death. Internationally, Chávez aligned himself with the Marxist–Leninist governments of Fidel and then Raúl Castro in Cuba and the socialist governments of Evo Morales (Bolivia), Rafael Correa (Ecuador), and Daniel Ortega (Nicaragua). His presidency was seen as a part of the socialist “pink tide” sweeping Latin America. Chávez described his policies as anti-imperialist, being a prominent adversary of the United States’ foreign policy as well as a vocal critic of U.S.-supported neoliberalism and laissez-faire capitalism. He described himself as a Marxist. He supported Latin American and Caribbean cooperation and was instrumental in setting up the pan-regional Union of South American Nations, the Community of Latin American and Caribbean States, the Bolivarian Alliance for the Americas, the Bank of the South, and the regional television network TeleSUR.
Nicolás Maduro
Nicolás Maduro has been the President of Venezuela since April 14, 2013, after winning the second presidential election held after Chávez’s death with 50.61% of the votes, against 49.12% for the opposition’s candidate, Henrique Capriles Radonski. The Democratic Unity Roundtable contested his election as fraudulent and as a violation of the constitution. However, the Supreme Court of Venezuela ruled that under Venezuela’s Constitution, Nicolás Maduro is the legitimate president, and he was invested as such by the Venezuelan National Assembly.
Beginning in February 2014, hundreds of thousands of Venezuelans have protested over high levels of criminal violence, corruption, hyperinflation, and chronic scarcity of basic goods due to the policies of the federal government. Demonstrations and riots involving both Chavistas and opposition protesters have left over 40 dead and have led to the arrest of opposition leaders such as Leopoldo López and Antonio Ledezma. Human rights groups have strongly condemned the arrest of Leopoldo López.
In the 2015 Venezuelan parliamentary election, the opposition gained a majority.
The following year, in a July 2016 decree, President Maduro used his executive power to declare a state of economic emergency. The decree could force citizens to work in agricultural fields and farms for periods of 60 days or longer to supply food to the country. In mid-2016, Colombian border crossings were temporarily opened to allow Venezuelans to purchase food and basic household and health items in Colombia. In September 2016, a study published in the Spanish-language Diario Las Américas indicated that 15% of Venezuelans were eating “food waste discarded by commercial establishments.”
38.5.3: Democracy in Chile and Argentina
Chile and Argentina both transitioned from military dictatorships to democratic regimes in the 1980s, leading to relative political stability in both countries in the 21st century.
Learning Objective
Evaluate the democratic systems currently in place in Chile and Argentina
Key Points
- Chileans elected a new president and the majority of members of a two-chamber congress on December 14, 1989, thus ending the rule of the oppressive military dictatorship of Augusto Pinochet.
- Christian Democrat Patricio Aylwin, the candidate of a coalition of 17 political parties called the Concertación, received an absolute majority of votes (55%).
- The Concertación coalition has continued to dominate Chilean politics for the last two decades: Aylwin was succeeded by another Christian Democrat, Eduardo Frei Ruiz-Tagle (son of Frei-Montalva), leading the same coalition for a 6-year term.
- Center-right investor and businessman Sebastián Piñera of the National Renewal assumed the presidency on March 11, 2010, after Bachelet’s term expired; Bachelet returned to office in 2014 when Piñera’s term ended.
- On October 30, 1983, Argentines went to the polls to choose a president; vice-president; and national, provincial, and local officials in elections deemed by international observers to be fair and honest, thus beginning the country’s transition to a democratic government.
- Since then, Argentina has seen several democratically elected presidents, including Carlos Menem, who embraced neo-liberal policies, and De la Rúa, who kept Menem’s economic plan despite the worsening economic crisis, which led to growing social discontent.
- Néstor Kirchner was elected as the new president in 2003, boosting neo-Keynesian economic policies and ending the economic crisis, attaining significant fiscal and trade surpluses and steep GDP growth.
- He did not run for reelection, promoting instead the candidacy of his wife, senator Cristina Fernández de Kirchner, who was elected in 2007 and reelected in 2011.
- On November 22, 2015, after a closely contested first round of presidential elections on October 25 forced a runoff, Mauricio Macri became the first democratically elected non-Radical, non-Peronist president since 1916.
Key Terms
- Trial of the Juntas
-
The judicial trial of the members of the de facto military government that ruled Argentina during the dictatorship of the Proceso de Reorganización Nacional (el proceso), which lasted from 1976 to 1983.
- Peronist
-
A person who follows the Argentinian political movement based on the ideology and legacy of former President Juan Domingo Perón and his second wife, Eva Perón. The Justicialist Party derives its name from the concept of social justice. Since its inception in 1946, Peronist candidates have won 9 of the 12 presidential elections in which they were not banned from running. As of 2016, Perón was the only Argentine to have been elected president three times.
- Concertación
-
A coalition of center-left political parties in Chile, founded in 1988. Presidential candidates under its banner won every election from when military rule ended in 1990 until the conservative candidate Sebastián Piñera won the Chilean presidential election in 2010. In 2013 it was replaced by the New Majority coalition.
- “disappearances”
-
In international human rights law, this occurs when a person is secretly abducted or imprisoned by a state or political organization or by a third party with the authorization, support, or acquiescence of a state or political organization, followed by a refusal to acknowledge the person’s fate and whereabouts, with the intent of placing the victim outside the protection of the law.
Chile’s Transition to Democracy
The Chilean transition to democracy began when a constitution establishing a transition itinerary was approved in a plebiscite. From March 11, 1981 to March 1990, several organic constitutional laws were approved, leading to the final restoration of democracy. After the 1988 plebiscite, the 1980 Constitution, still in force today, was amended to ease provisions for future amendments to the constitution, create more seats in the Senate, diminish the role of the National Security Council, and equalize the number of its civilian and military members (four each).
Christian Democrat Patricio Aylwin served from 1990 to 1994 and was succeeded by another Christian Democrat, Eduardo Frei Ruiz-Tagle (son of Frei-Montalva), leading the same coalition for a 6-year term. Ricardo Lagos Escobar of the Socialist Party and the Party for Democracy led the Concertación (a coalition of center-left political parties in Chile, founded in 1988) to a narrower victory in the 2000 presidential elections. His term ended on March 11, 2006 when Michelle Bachelet of the Socialist Party took office. Center-right investor and businessman Sebastián Piñera, of the National Renewal, assumed the presidency on March 11, 2010 after Bachelet’s term expired.
Part of the transition from the military dictatorship to democracy entailed investigating the human rights abuses committed under the previous regime. In 1990 Aylwin created the National Commission for Truth and Reconciliation, which released the Rettig Report on human rights violations committed during the military rule in February 1991. This report counted 2,279 cases of “disappearances” that could be proved and registered. Of course, the very nature of “disappearances” made such investigations very difficult. The same issue arose several years later with the Valech Report, released in 2004, which counted almost 30,000 victims of torture based on testimonies from 35,000 persons.
Chile in the 21st Century
The Concertación has continued to dominate Chilean politics for the last two decades. Frei Ruiz-Tagle was succeeded in 2000 by Socialist Ricardo Lagos, who won the presidency in an unprecedented runoff election against Joaquín Lavín of the rightist Alliance for Chile.
In January 2006 Chileans elected their first female president, Michelle Bachelet, of the Socialist Party. She was sworn in on March 11, 2006, extending the Concertación coalition governance for another four years.
Chile signed an association agreement with the European Union in 2002 and extensive free trade agreements with the United States in 2003 and South Korea in 2004, expecting a boom in the import and export of local produce and aiming to become a regional trade hub. Continuing the coalition’s free-trade strategy, in August 2006 President Bachelet promulgated a free trade agreement with the People’s Republic of China (signed under the previous administration of Ricardo Lagos), the first Chinese free-trade agreement with a Latin American nation; similar deals with Japan and India were promulgated in August 2007. In October 2006, Bachelet promulgated a multilateral trade deal with New Zealand, Singapore, and Brunei, the Trans-Pacific Strategic Economic Partnership (P4), also signed under Lagos’ presidency. Regionally, she signed bilateral free-trade agreements with Panama, Peru, and Colombia.
After 20 years, Chile went in a new direction marked by the win of center-right Sebastián Piñera in the Chilean presidential election of 2009–2010. On February 27, 2010, Chile was struck by a magnitude 8.8 (Mw) earthquake, the fifth-largest ever recorded at the time. More than 500 people died (most from the ensuing tsunami) and over a million people lost their homes. The earthquake was also followed by multiple aftershocks. Initial damage estimates were in the range of US$15–30 billion, around 10 to 15 percent of Chile’s real gross domestic product.
Chile achieved global recognition for the successful rescue of 33 trapped miners in 2010. On August 5, 2010 the access tunnel collapsed at the San José copper and gold mine in the Atacama Desert near Copiapó in northern Chile, trapping 33 men 2,300 feet below ground. A rescue effort organized by the Chilean government located the miners 17 days later. All 33 men were brought to the surface two months later on October 13, 2010 over a period of almost 24 hours, an effort that was carried on live television around the world.
Good macroeconomic indicators failed to quell social dissatisfaction over the quality and fairness of education, which fueled massive protests demanding more democratic and equitable institutions and fed persistent disapproval of Piñera’s administration.
Due to term limits, Sebastián Piñera did not stand for re-election in 2013, and his term expired in March 2014 resulting in Michelle Bachelet returning to office. In 2015 a series of corruption scandals became public, threatening the credibility of the political and business class.
Contemporary Era in Argentina
Argentina also experienced a transition from a military dictatorship to a democracy in the 1980s. Raúl Alfonsín won the 1983 elections campaigning for the prosecution of those responsible for human rights violations during the military dictatorship. The Trial of the Juntas and other martial courts sentenced all the coup’s leaders but, under military pressure, Alfonsín also enacted the Full Stop and Due Obedience laws, which halted prosecutions further down the chain of command. The worsening economic crisis and hyperinflation reduced his popular support and the Peronist Carlos Menem won the 1989 election. Soon after, riots forced Alfonsín to an early resignation.
Menem embraced neo-liberal policies: a fixed exchange rate, business deregulation, privatizations, and dismantling of protectionist barriers normalized the economy for a while. He pardoned the officers who had been sentenced during Alfonsín’s government. The 1994 Constitutional Amendment allowed Menem to be elected for a second term. The economy began to decline in 1995, with increasing unemployment and recession; led by Fernando de la Rúa, the UCR (Radical Civic Union, a centrist social-liberal political party) returned to the presidency in the 1999 elections.
De la Rúa kept Menem’s economic plan despite the worsening crisis, which led to growing social discontent. The government responded to massive capital flight by freezing bank accounts, generating further turmoil. The December 2001 riots forced him to resign. Congress appointed Eduardo Duhalde as acting president, and he repealed the fixed exchange rate established by Menem. By late 2002 the economic crisis had begun to recede, but the killing of two protesters by the police caused political commotion, prompting Duhalde to move elections forward. Néstor Kirchner was elected as the new president.
Building on the neo-Keynesian economic policies laid down by Duhalde, Kirchner ended the economic crisis, attaining significant fiscal and trade surpluses and steep GDP growth. Under his administration Argentina restructured its defaulted debt with an unprecedented discount of about 70% on most bonds, paid off its debts with the International Monetary Fund, purged the military of officers with doubtful human rights records, secured the nullification of the Full Stop and Due Obedience laws (which were ruled unconstitutional), and resumed legal prosecution of the Juntas’ crimes. He did not run for reelection, promoting instead the candidacy of his wife, senator Cristina Fernández de Kirchner, who was elected in 2007 and reelected in 2011.
On November 22, 2015, after a closely contested first round of presidential elections on October 25, Mauricio Macri won the first ballotage (runoff) in Argentina’s history, beating Front for Victory candidate Daniel Scioli and becoming president-elect. Macri is the first democratically elected president since 1916 who is neither a Radical nor a Peronist, although he had Radical support. He took office on December 10, 2015. In April 2016, the Macri government introduced austerity measures intended to tackle inflation and public deficits.
38.5.4: Mexico’s Transition to True Democracy
The Partido Revolucionario Institucional (PRI), the political party that had controlled national and state politics in Mexico since 1929, was finally voted out of power in 2000 with the election of Vicente Fox Quesada, the candidate of the National Action Party (PAN).
Learning Objective
Determine to what extent Mexico has achieved a democratic political system
Key Points
- A new era began in Mexico following the fraudulent 1988 presidential elections.
- The Institutional Revolutionary Party barely won the presidential election, and President Carlos Salinas de Gortari began implementing sweeping neoliberal reforms in Mexico.
- These reforms required amending the Constitution, above all to curtail the power of the Mexican state to regulate foreign business enterprises, but they also lifted long-standing restrictions on the Roman Catholic Church in Mexico.
- Mexico’s economy was further integrated with those of the United States and Canada after the North American Free Trade Agreement (NAFTA) began lowering trade barriers in 1994.
- Seven decades of PRI rule ended in 2000 with the election of Vicente Fox of the Partido Acción Nacional (PAN).
- His successor, Felipe Calderón, also of the PAN, embarked on a war against drug mafias in Mexico, one which has resulted in tens of thousands of deaths.
- In the face of extremely violent drug wars, the PRI returned to power in 2012, promising that it had reformed itself.
Key Terms
- North American Free Trade Agreement
-
An agreement signed by Canada, Mexico, and the United States, creating a trilateral trade bloc in North America. The agreement came into force on January 1, 1994. It superseded the Canada–United States Free Trade Agreement between Canada and the United States. The goal of the agreement was to eliminate barriers to trade and investment between the United States, Canada, and Mexico. The implementation of the agreement on January 1, 1994 brought the immediate elimination of tariffs on more than one-half of Mexico’s exports to the United States and more than one-third of U.S. exports to Mexico.
- Zapatista Army of National Liberation
-
A revolutionary leftist political and militant group based in Chiapas, the southernmost state of Mexico. Since 1994 the group has been in a declared war “against the Mexican state” and against military, paramilitary, and corporate incursions into Chiapas. This war has been primarily defensive. In recent years, it has focused on a strategy of civil resistance. The group’s main body is made up of mostly rural indigenous people, but includes some supporters in urban areas and internationally.
- Institutional Revolutionary Party
-
A Mexican political party founded in 1929 that held power uninterruptedly in the country for 71 years from 1929 to 2000.
Background: Decline of the PRI
A phenomenon of the 1980s in Mexico was the growth of organized political opposition to de facto one-party rule by the Institutional Revolutionary Party (Spanish: Partido Revolucionario Institucional or PRI), which held power uninterruptedly in the country for 71 years from 1929 to 2000. The National Action Party (PAN), founded in 1939 and until the 1980s a marginal political party and not a serious contender for power, began to gain voters, particularly in Mexico’s north. They made gains in local elections initially, but in 1986 the PAN candidate for the governorship of Chihuahua had a good chance of winning.
The 1988 Mexican general election was pivotal in Mexican history. The PRI’s candidate was Carlos Salinas de Gortari, an economist who was educated at Harvard and had never held elected office. Cuauhtémoc Cárdenas, the son of former President Lázaro Cárdenas, broke with the PRI and ran as the candidate of the Democratic Current, which later became the Party of the Democratic Revolution (PRD). The PAN candidate Manuel Clouthier ran a clean campaign in the long-standing pattern of the party.
The election was marked by irregularities on a massive scale. The Ministry of the Interior administered the electoral process, which meant in practice that the PRI controlled it. During the vote count, the government computers were said to have crashed, something the government called “a breakdown of the system.” One observer said, “For the ordinary citizen, it was not the computer network but the Mexican political system that had crashed.” When the computers were said to be running again after a considerable delay, the recorded results showed an extremely narrow victory for Salinas (50.7%) over Cárdenas (31.1%) and Clouthier (16.8%). Cárdenas was widely seen to have won the election, but Salinas was declared the winner. There might have been violence in the wake of such fraudulent results, but Cárdenas did not call for it, “sparing the country a possible civil war.” Years later, former Mexican President Miguel de la Madrid (1982–88) was quoted in the New York Times stating that the results were indeed fraudulent.
Salinas embarked on a program of neoliberal reforms that fixed the exchange rate, controlled inflation, and culminated in the signing of the North American Free Trade Agreement (NAFTA), which came into effect on January 1, 1994. The same day, the Zapatista Army of National Liberation (EZLN) started a two-week-long armed rebellion against the federal government, and it has since continued as a non-violent opposition movement against neoliberalism and globalization.
In 1994, Salinas was succeeded by Ernesto Zedillo, whose term began with the Mexican peso crisis and a $50 billion bailout by the International Monetary Fund (IMF). Major macroeconomic reforms were initiated by President Zedillo, and the economy rapidly recovered, with growth peaking at almost 7% by the end of 1999.
President Vicente Fox Quesada (2000–2006)
Emphasizing the need to upgrade infrastructure, modernize the tax system and labor laws, integrate with the U.S. economy, and allow private investment in the energy sector, Vicente Fox Quesada, the candidate of the National Action Party (PAN), was elected the 69th president of Mexico on July 2, 2000, ending the PRI’s 71-year-long control of the office. Fox’s victory was due in part to popular discontent with decades of unchallenged PRI hegemony, and outgoing president Zedillo acknowledged Fox’s victory on the night of the election—a first in Mexican history. A further sign of the maturing of Mexican democracy was the fact that PAN failed to win a majority in either chamber of Congress—a situation that prevented Fox from implementing his reform pledges. Nonetheless, the transfer of power in 2000 was quick and peaceful.
Fox was a very strong candidate, but an ineffective president who was weakened by PAN’s minority status in Congress. Historian Philip Russell summarizes the strengths and weaknesses of Fox as president:
Marketed on television, Fox made a far better candidate than he did president. He failed to take charge and provide cabinet leadership, failed to set priorities, and turned a blind eye to alliance building….By 2006, as political scientist Soledad Loaeza noted, ‘the eager candidate became a reluctant president who avoided tough choices and appeared hesitant and unable to hide the weariness caused by the responsibilities and constraints of the office. …’ He had little success in fighting crime. Even though he maintained the macroeconomic stability inherited from his predecessor, economic growth barely exceeded the rate of population increase. Similarly, the lack of fiscal reform left tax collection at a rate similar to that of Haiti….Finally, during Fox’s administration, only 1.4 million formal-sector jobs were created, leading to massive immigration to the United States and an explosive increase in informal employment.
President Felipe Calderón Hinojosa (2006–2012)
President Felipe Calderón Hinojosa (PAN) took office after one of the most hotly contested elections in recent Mexican history; Calderón won by such a small margin (0.56%, or 233,831 votes) that the runner-up, Andrés Manuel López Obrador of the leftist Party of the Democratic Revolution (PRD), contested the results.
Despite imposing a cap on salaries of high-ranking public servants, Calderón ordered a raise on the salaries of the Federal Police and the Mexican armed forces on his first day as president.
Calderón’s government also ordered massive raids on drug cartels upon assuming office in December 2006 in response to an increasingly deadly spate of violence in his home state of Michoacán. The decision to intensify drug enforcement operations led to an ongoing conflict between the federal government and the Mexican drug cartels.
President Enrique Peña Nieto (2012–Present)
On July 1, 2012, Enrique Peña Nieto was elected president of Mexico with 38% of the vote. He is a former governor of the state of Mexico and a member of the PRI. His election returned the PRI to power after 12 years of PAN rule. He was officially sworn into office on December 1, 2012.
The Pacto por México was a cross-party alliance that called for the accomplishment of 95 goals. It was signed on December 2, 2012 by the leaders of the three main political parties in Chapultepec Castle. The Pact has been lauded by international pundits as an example of how to resolve political gridlock and effectively pass institutional reforms. Among other legislation, it called for education reform, banking reform, fiscal reform, and telecommunications reform, all of which were eventually passed. Most importantly, the Pact called for a far-reaching overhaul of PEMEX. This ultimately led to the dissolution of the agreement in December 2013, when the center-left PRD refused to collaborate on the legislation penned by the center-right PAN and PRI that ended PEMEX’s monopoly and allowed foreign investment in Mexico’s oil industry.
38.5.5: Drug Cartels
Drug cartels have been a major force in contemporary Latin America, sometimes rivaling the power of some nations’ governments and military, and causing hundreds of thousands of deaths through violence between competing cartels and between cartels and governments.
Learning Objective
Examine the powerful role drug cartels play across Latin America
Key Points
- A drug cartel is any criminal organization with the intention of supplying drug trafficking operations, and can range from loosely managed agreements among various drug traffickers to formalized commercial enterprises with billions of dollars in annual profits.
- Drug cartels came to power in the 1970s and 80s, controlling the vast majority of illegal drug trafficking throughout Latin America and the United States.
- Pablo Escobar with his Medellín Cartel supplied an estimated 80% of the cocaine smuggled into the United States at the height of his career, turning over US $21.9 billion a year in personal income.
- Each year from 1982 to 1992, Forbes magazine ranked Escobar as one of the ten most powerful people in the world, and he was considered by the Colombian and U.S. governments to be “the unofficial dictator of Colombia.”
- The Mexican drug cartels began with Miguel Ángel Félix Gallardo (“The Godfather”), who founded the Guadalajara Cartel in 1980 and controlled most of the illegal drug trade in Mexico and the trafficking corridors across the Mexico–U.S. border throughout the 80s.
- Since then there have been numerous cartels, often violently vying for power, with one of the largest in recent years being the Gulf Cartel.
- The Mexican Drug War is an ongoing war between the Mexican Government and various drug trafficking syndicates, started in 2006 when the Mexican military began to intervene in drug trafficking violence.
- Estimates set the death toll of the Mexican Drug War above 120,000 killed by 2013, not including 27,000 missing.
Key Terms
- Pablo Escobar
-
A Colombian drug lord, drug trafficker, and narco-terrorist. His cartel supplied an estimated 80% of the cocaine smuggled into the United States at the height of his career, turning over US $21.9 billion a year in personal income. He was often called “The King of Cocaine” and was the wealthiest criminal in history, with an estimated known net worth of US $30 billion by the early 1990s (equivalent to about $55 billion as of 2016), making him one of the richest men in the world at his prime.
- Miguel Ángel Félix Gallardo
-
A convicted Mexican drug lord who formed the Guadalajara Cartel in the 1980s, and controlled almost all of the drug trafficking in Mexico and the corridors along the Mexico–U.S. border.
- drug cartel
-
Any criminal organization with the intention of supplying drug trafficking operations. They range from loosely managed agreements among various drug traffickers to formalized commercial enterprises.
Drug Cartels
A drug cartel is any criminal organization with the intention of supplying drug trafficking operations. They range from loosely managed agreements among various drug traffickers to formalized commercial enterprises. The term was applied when the largest trafficking organizations reached an agreement to coordinate the production and distribution of cocaine. Since that agreement was broken up, drug cartels are no longer actually cartels, but the term stuck and it is now popularly used to refer to any criminal narcotics related organization.
The basic structure of a drug cartel is as follows:
- Falcons (Spanish: Halcones): Considered the “eyes and ears” of the streets, the “falcons” are the lowest rank in any drug cartel. They are responsible for supervising and reporting the activities of the police, the military, and rival groups.
- Hitmen (Spanish: Sicarios): The armed group within the drug cartel, responsible for carrying out assassinations, kidnappings, thefts, extortions, operating protection rackets, and defending their plaza (turf) from rival groups and the military.
- Lieutenants (Spanish: Lugartenientes): The second highest position in the drug cartel organization, responsible for supervising the hitmen and falcons within their own territory. They are allowed to carry out low-profile executions without permission from their bosses.
- Drug lords (Spanish: Capos): The highest position in any drug cartel, responsible for supervising the entire drug industry, appointing territorial leaders, making alliances, and planning high-profile executions.
It is worth noting that there are other operating groups within the drug cartels. For example, the drug producers and suppliers, although not considered in the basic structure, are critical operators of any drug cartel, along with the financiers and money launderers. In addition, the arms suppliers operate in a completely different circle, and are technically not considered part of the cartel’s logistics.
Mexican Drug Cartels
Origins
The birth of most Mexican drug cartels is traced to former Mexican Judicial Federal Police agent Miguel Ángel Félix Gallardo (“The Godfather”), who founded the Guadalajara Cartel in 1980 and controlled most of the illegal drug trade in Mexico and the trafficking corridors across the Mexico–U.S. border along with Juan Garcia Abrego throughout the 1980s. He started off by smuggling marijuana and opium into the United States and was the first Mexican drug chief to link up with Colombia’s cocaine cartels in the 1980s. Through his connections, Félix Gallardo became the point man for the Medellín Cartel, which was run by Pablo Escobar. This was easily accomplished because Félix Gallardo had already established an infrastructure that stood ready to serve the Colombia-based traffickers.
There were no cartels at that time in Mexico. Félix Gallardo oversaw all operations; there was just him, his cronies, and the politicians who sold him protection. However, the Guadalajara Cartel suffered a major blow in 1985 when the group’s co-founder Rafael Caro Quintero was captured and later convicted for the murder of DEA agent Enrique Camarena. Félix Gallardo afterwards kept a low profile and in 1987 he moved with his family to Guadalajara.
“The Godfather” then decided to divide up the trade he controlled, as it would be more efficient and less likely to be brought down in one law enforcement swoop. In a way, he was privatizing the Mexican drug business while sending it back underground, to be run by bosses who were less well known or not yet known by the DEA. Gallardo convened the nation’s top drug traffickers at a house in the resort of Acapulco where he designated the plazas or territories.
The Tijuana route would go to the Arellano Felix brothers. The Ciudad Juárez route would go to the Carrillo Fuentes family. Miguel Caro Quintero would run the Sonora corridor. The control of the Matamoros, Tamaulipas corridor—then becoming the Gulf Cartel—would be left undisturbed to its founder Juan García Ábrego. Meanwhile, Joaquín Guzmán Loera and Ismael Zambada García would take over Pacific coast operations, becoming the Sinaloa Cartel. Guzmán and Zambada brought veteran Héctor Luis Palma Salazar back into the fold. Félix Gallardo still planned to oversee national operations, as he maintained important connections, but he would no longer control all details of the business.
Félix Gallardo was arrested on April 8, 1989.
Gulf Cartel
The Gulf Cartel (Cartel del Golfo or CDG), based in Matamoros, Tamaulipas, has been one of Mexico’s two dominant cartels in recent years. In the late 1990s, it hired a private mercenary army (an enforcer group now called Los Zetas), which in 2006 stepped up as a partner but, in February 2010, their partnership was dissolved and both groups engaged in widespread violence across several border cities of Tamaulipas state, turning several border towns into “ghost towns.”
The CDG was strong at the beginning of 2011, holding off several Zetas incursions into its territory. However, as the year progressed, internal divisions led to intra-cartel battles in Matamoros and Reynosa, Tamaulipas state. The infighting resulted in several arrests and deaths in Mexico and in the United States. The CDG has since broken apart, and it appears that one faction, known as Los Metros, has overpowered its rival Los Rojos faction and is now asserting its control over CDG operations.
Mexican Drug War
The Mexican Drug War is the Mexican theater of the United States’ War on Drugs, involving an ongoing conflict between the Mexican government and various drug trafficking syndicates. Since 2006, when the Mexican military began to intervene, the government’s principal goal has been to reduce drug-related violence. Additionally, the Mexican government has claimed that its primary focus is on dismantling the powerful drug cartels rather than on preventing drug trafficking, which it regards as the responsibility of U.S. authorities.
Although Mexican drug cartels, or drug trafficking organizations, have existed for several decades, their influence has increased since the demise of the Colombian Cali and Medellín cartels in the 1990s. Mexican drug cartels now dominate the wholesale illicit drug market and in 2007 controlled 90% of the cocaine entering the United States. Arrests of key cartel leaders, particularly in the Tijuana and Gulf cartels, have led to increasing drug violence as cartels fight for control of the trafficking routes into the United States.
Although violence between drug cartels had been occurring long before the war began, the government held a generally passive stance regarding cartel violence in the 1990s and early 2000s. That changed on December 11, 2006, when newly elected President Felipe Calderón sent 6,500 federal troops to the state of Michoacán to end drug violence there (Operation Michoacán). This action is regarded as the first major operation against organized crime, and is generally viewed as the starting point of the war between the government and the drug cartels. As time progressed, Calderón continued to escalate his anti-drug campaign, in which there are now about 45,000 troops involved in addition to state and federal police forces. In 2010 Calderón said that the cartels seek “to replace the government” and “are trying to impose a monopoly by force of arms, and are even trying to impose their own laws.”
As of 2011, Mexico’s military had captured 11,544 people believed to have been involved with the cartels and organized crime; in the year prior, 28,000 individuals were arrested on drug-related charges. At the same time, the decline in eradication and drug seizures, as shown in statistics compiled by federal authorities, reflected poorly on Calderón’s security agenda. Since the war began, over forty thousand people have been killed as a result of cartel violence, and during Calderón’s presidential term the murder rate of Mexico increased dramatically.
Medellín Cartel
The Medellín Cartel was a Colombian drug cartel originating in the city of Medellín. The cartel operated from the mid-1970s until the early 1990s in Bolivia, Colombia, Honduras, Peru, and the United States, as well as in Canada and Europe. It was founded and run by the Ochoa Vázquez brothers Jorge Luis, Juan David, and Fabio, together with Pablo Escobar, Carlos Lehder, and José Gonzalo Rodríguez Gacha. By 1993, the vigilante group Los Pepes (or PEPES) and the Colombian government, working in collaboration with the Cali Cartel, right-wing paramilitary groups, and the U.S. government, had dismantled the Medellín Cartel by imprisoning or assassinating its members.
At the height of his reign over the Medellín Cartel, Pablo Escobar was the most powerful, richest, and most feared drug lord in the world. From 1981 to 1993 he was reckoned the 7th richest person in the world, with an estimated net worth of US $30–42 billion (roughly $107 billion as of 2017). Escobar became Colombia’s top drug kingpin in 1976 and the world’s top drug kingpin around 1981, by which time he was widely regarded as the most powerful and dangerous man in Colombia; during his regime, the Medellín Cartel grew bigger and more powerful than the Colombian government.
Escobar commanded more manpower, weapons, influence, resources, and reach than the Colombian government and military. For almost two decades, he was responsible for ordering hundreds of atrocities, including some 1,300 bombings across Colombia. His most notorious bombings were the Avianca Flight 203 bombing, which killed 110 people; the DAS Building bombing, which killed 75 people and severely injured over 1,800; a truck bomb that killed 489 people and severely injured 3,000; a bus bomb that killed 260 people and wounded around 1,000; a series of 7 car bombs in a single day, which killed 194 people and injured nearly 800; and a car bomb that killed 137 adults and 112 children and severely injured 600 more. Over a 20-year period, Escobar ordered the murders of at least 110,000 people.
38.5.6: The United States in the 21st Century
The beginning of the 21st century saw the September 11 attacks by Al-Qaeda, subsequent U.S. invasions of Afghanistan and Iraq, and in 2008, the worst U.S. economic crisis since the Great Depression.
Learning Objective
Describe the role of the United States in the 21st century
Key Points
- The 21st century in America began with the highly contested election of Republican George W. Bush.
- The September 11 terrorist attacks occurred eight months into Bush’s first term as president, to which he responded with what became known as the Bush Doctrine: launching a “War on Terror,” an international military campaign that included the war in Afghanistan in 2001 and the Iraq War in 2003.
- In 2008, the unpopularity of President Bush and the Iraq war, along with the 2008 financial crisis, led to the election of Barack Obama, the first African-American President of the United States.
- Obama’s domestic initiatives included the Patient Protection and Affordable Care Act, which enacted sweeping reforms to the American healthcare system and greatly expanded health insurance coverage.
- President Obama eventually withdrew combat troops from Iraq, and shifted the country’s efforts in the War on Terror to Afghanistan, where a troop surge was initiated in 2009.
- In 2010, due to continued public discontent with the economic situation, unemployment, and federal spending, Republicans regained control of the House of Representatives and reduced the Democratic majority in the Senate.
- On November 8, 2016, GOP presidential nominee Donald Trump defeated Democratic nominee Hillary Clinton to become the President-elect of the United States, taking office on January 20, 2017.
Key Terms
- George W. Bush
-
An American politician who served as the 43rd President of the United States from 2001 to 2009. He was also the 46th Governor of Texas from 1995 to 2000. The September 11 terrorist attacks occurred eight months into his first term as president. He responded with what became known as the Bush Doctrine: launching a “War on Terror,” an international military campaign that included the war in Afghanistan in 2001 and the Iraq War in 2003. He also promoted policies on the economy, health care, education, Social Security reform, and amending the Constitution to prohibit same-sex marriage.
- Barack Obama
-
An American politician who served as the 44th President of the United States from 2009 to 2017. He is the first African American to have served as president, as well as the first born outside the contiguous United States. During his first two years in office, he signed many landmark bills. Main reforms were the Patient Protection and Affordable Care Act; the Dodd–Frank Wall Street Reform and Consumer Protection Act; and the Don’t Ask, Don’t Tell Repeal Act of 2010.
- Great Recession
-
A period of general economic decline observed in world markets during the late 2000s and early 2010s. The scale and timing of the recession varied from country to country. In terms of overall impact, the International Monetary Fund concluded that it was the worst global recession since World War II.
- 9/11
-
A series of four coordinated terrorist attacks by the Islamic terrorist group al-Qaeda on the United States on the morning of Tuesday, September 11, 2001. The attacks killed 2,996 people, injured over 6,000 others, and caused at least $10 billion in property and infrastructure damage and $3 trillion in total costs.
George W. Bush
In 2000, Republican George W. Bush was elected president in one of the closest and most controversial elections in U.S. history. Early in his term, his administration approved education reform and a large across-the-board tax cut aimed at stimulating the economy. Following the September 11 attacks in 2001, the United States embarked on the Global War on Terrorism, starting with the 2001 war in Afghanistan. In 2003, the United States invaded Iraq, deposing the regime of Saddam Hussein but also setting off a prolonged conflict that would continue over the course of the decade. The Department of Homeland Security was formed and the controversial Patriot Act was passed to bolster domestic efforts against terrorism. In 2006, criticism over the handling of the disastrous Hurricane Katrina (which struck the Gulf Coast region in 2005), political scandals, and the growing unpopularity of the Iraq War helped the Democrats gain control of Congress. Saddam Hussein was later tried on charges of war crimes and crimes against humanity, convicted, and executed by hanging. In 2007, President Bush ordered a troop surge in Iraq, which ultimately led to reduced casualties.
9/11 and the Iraq War
On September 11, 2001 (“9/11”), the United States was struck by a terrorist attack when 19 al-Qaeda hijackers commandeered four airliners to be used in suicide attacks. They intentionally crashed two into the twin towers of the World Trade Center and a third into the Pentagon, killing 2,937 victims—206 aboard the three airliners, 2,606 who were in the World Trade Center and on the ground, and 125 who were in the Pentagon. The passengers and crew of the fourth plane fought to retake control of the aircraft, which crashed into an empty field in Pennsylvania, killing all 44 people on board, including the four hijackers, and sparing whatever target the terrorists had been aiming for. In all, 2,977 victims perished in the attacks. In response, President George W. Bush announced a “War on Terror” on September 20. On October 7, 2001, the United States and NATO invaded Afghanistan to oust the Taliban regime, which had provided safe haven to al-Qaeda and its leader Osama bin Laden.
The federal government established new domestic efforts to prevent future attacks. The controversial USA PATRIOT Act increased the government’s power to monitor communications and removed legal restrictions on information sharing between federal law enforcement and intelligence services. A cabinet-level agency called the Department of Homeland Security was created to lead and coordinate federal counter-terrorism activities. Some of these anti-terrorism efforts, particularly the U.S. government’s handling of detainees at the prison at Guantanamo Bay, led to allegations against the U.S. government of human rights violations.
In 2003, from March 19 to May 1, the United States launched an invasion of Iraq, which led to the collapse of the Iraqi government and the eventual capture of Iraqi dictator Saddam Hussein, with whom the United States had long-standing tense relations. The reasons for the invasion cited by the Bush administration included the spreading of democracy, the elimination of weapons of mass destruction, and the liberation of the Iraqi people. Despite some initial successes early in the invasion, the continued Iraq War fueled international protests and gradually saw domestic support decline as many people began to question whether or not the invasion was worth the cost.
In 2008, the unpopularity of President Bush and the Iraq War, along with the 2008 financial crisis, led to the election of Barack Obama, the first African-American President of the United States. After his election, Obama reluctantly continued the war effort in Iraq until August 31, 2010, when he declared that combat operations had ended. However, 50,000 American soldiers and military personnel were kept in Iraq to assist Iraqi forces, help protect withdrawing forces, and work on counter-terrorism until December 15, 2011, when the war was declared formally over and the last troops left the country.
Great Recession
In September 2008, the United States, and most of Europe, entered the longest post-World War II recession, often called the “Great Recession.” Multiple overlapping crises were involved, especially the housing market crisis, a subprime mortgage crisis, soaring oil prices, an automotive industry crisis, rising unemployment, and the worst financial crisis since the Great Depression. The financial crisis threatened the stability of the entire economy in September 2008 when Lehman Brothers failed and other giant banks were in grave danger. Starting in October, the federal government lent $245 billion to financial institutions through the Troubled Asset Relief Program, which was passed by bipartisan majorities and signed by Bush.
Following his election victory by a wide electoral margin in November 2008, Bush’s successor, Barack Obama, signed into law the American Recovery and Reinvestment Act of 2009, which was a $787 billion economic stimulus aimed at helping the economy recover from the deepening recession. Obama, like Bush, took steps to rescue the auto industry and prevent future economic meltdowns. These included a bailout of General Motors and Chrysler, putting ownership temporarily in the hands of the government, and the “cash for clunkers” program, which temporarily boosted new car sales.
The recession officially ended in June 2009, and the economy slowly began to expand once again. The unemployment rate peaked at 10.1% in October 2009 after surging from 4.7% in November 2007, and returned to 5.0% as of October 2015. However, overall economic growth has remained weaker in the 2010s compared to expansions in previous decades.
Recent Events
From 2009 to 2010, the 111th Congress passed major legislation such as the Patient Protection and Affordable Care Act; the Dodd–Frank Wall Street Reform and Consumer Protection Act; and the Don’t Ask, Don’t Tell Repeal Act, which were signed into law by President Obama. Following the 2010 midterm elections, which resulted in a Republican-controlled House of Representatives and a Democratic-controlled Senate, Congress presided over a period of elevated gridlock and heated debates over whether or not to raise the debt ceiling, extend tax cuts for citizens making over $250,000 annually, and many other key issues. In the fall of 2012, Mitt Romney challenged Barack Obama for the presidency. Congressional gridlock continued as Congressional Republicans’ call for the repeal of the Patient Protection and Affordable Care Act—popularly known as “Obamacare”—along with other various demands, resulted in the first government shutdown since the Clinton administration and almost led to the first default on U.S. debt since the 19th century. As a result of growing public frustration with both parties in Congress since the beginning of the decade, Congressional approval ratings fell to record lows, with only 11% of Americans approving as of October 2013.
Other major events that have occurred during the 2010s include the rise of new political movements, such as the conservative Tea Party movement and the liberal Occupy movement. There was also unusually severe weather during the early part of the decade. In 2012, over half the country experienced record drought and Hurricane Sandy caused massive damage to coastal areas of New York and New Jersey.
The ongoing debate over the issue of rights for the LGBT community, most notably that of same-sex marriage, began to shift in favor of same-sex couples, and has been reflected in dozens of polls released in the early part of the decade. In 2012, President Obama became the first president to openly support same-sex marriage, and the 2013 Supreme Court decision in the case of United States v. Windsor provided for federal recognition of same-sex unions. In June 2015, the United States Supreme Court legalized gay marriage nationally in the case of Obergefell v. Hodges.
Political debate has continued over issues such as tax reform, immigration reform, income inequality and U.S. foreign policy in the Middle East, particularly with regards to global terrorism, the rise of the Islamic State of Iraq and the Levant, and an accompanying climate of Islamophobia.
After unprecedented media coverage and a hostile presidential campaign, businessman Donald Trump defeated former Secretary of State Hillary Clinton in the 2016 election, with Republicans also retaining control of both houses of Congress. His first weeks in office were largely characterized by a series of executive orders restricting abortion funding and the implementation of the Affordable Care Act, advancing construction of pipelines in North Dakota and of a wall along the U.S.–Mexico border, and barring entry to citizens of several Muslim-majority countries.
38.6: Global Concerns
38.6.1: The International Framework in the 21st Century
Although international relations in the 21st century are increasingly characterized by the formation of international and regional institutions, their effectiveness alongside the sovereign actions of states has been questioned.
Learning Objective
Characterize the international system as it stands today
Key Points
- The beginning of the 21st century has thus far been marked by the rise of a global economy and Third World consumerism, mistrust in government, deepening global concern over terrorism, and an increase in the power of private enterprise.
- The United States emerged as the sole superpower after the Cold War, but China simultaneously began its rise and the BRICS countries aimed to create more balance in the global political and economic spectrum.
- After the Cold War, the UN saw a radical expansion in its peacekeeping duties, taking on more missions in ten years than it had in the previous four decades.
- Though the UN Charter had been written primarily to prevent aggression by one nation against another, in the early 1990s the UN faced a number of simultaneous, serious crises within nations such as Somalia, Haiti, Mozambique, and the former Yugoslavia that tested its founding principles and institutional effectiveness.
- The World Trade Organization (WTO) is an intergovernmental organization that regulates international trade. Due to an impasse in negotiations within the WTO between developed and developing countries, there have been an increasing number of bilateral free trade agreements between governments.
- The renewed academic interest in regionalism, the emergence of new regional formations, and international trade agreements like NAFTA and the development of a European Single Market demonstrate the upgraded importance of regional political cooperation and economic competitiveness.
Key Terms
- confidence- and security-building measures
-
Actions taken to reduce fear of attack by two or more parties in a situation of tension with or without physical conflict. Confidence- and security-building measures emerged from attempts by the Cold War superpowers and their military alliances (the North Atlantic Treaty Organization and the Warsaw Pact) to avoid nuclear war by accident or miscalculation. However, these measures also exist at other levels of conflict and in different regions of the world.
- Short Twentieth Century
-
Originally proposed by Ivan Berend of the Hungarian Academy of Sciences but defined by Eric Hobsbawm, a British Marxist historian and author, this term refers to the period between 1914 and 1991, the beginning of World War I and the fall of the Soviet Union.
- BRICS
-
The acronym used to refer to an association of five major emerging national economies: Brazil, Russia, India, China, and South Africa.
The beginning of the 21st century has thus far been marked by the rise of a global economy and Third World consumerism, mistrust in government, deepening global concern over terrorism, and an increase in the power of private enterprise. The long-term effects of increased globalization are unknown, but there are many who are concerned about its implications. The Arab Spring of the early 2010s led to mixed outcomes in the Arab world. The Digital Revolution, which began around the 1980s, continues into the present. Millennials and Generation Z are coming of age and rising to prominence during this century as well.
In contemporary history, the 21st century essentially began in 1991 (the end of the Short Twentieth Century) with the United States as the sole superpower in the absence of the Soviet Union, while China began its rise and the BRICS countries aimed to create more balance in the global political and economic spectrum.
United Nations
After the Cold War, the UN saw a radical expansion in its peacekeeping duties, taking on more missions in ten years than it had in the previous four decades. Between 1988 and 2000, the number of adopted Security Council resolutions more than doubled, and the peacekeeping budget increased more than tenfold. The UN negotiated an end to the Salvadoran Civil War, launched a successful peacekeeping mission in Namibia, and oversaw democratic elections in post-apartheid South Africa and post-Khmer Rouge Cambodia. In 1991, the UN authorized a U.S.-led coalition that repulsed the Iraqi invasion of Kuwait. Brian Urquhart, Under-Secretary-General from 1971 to 1985, later described the hopes raised by these successes as a “false renaissance” for the organization given the more troubled missions that followed.
Though the UN Charter was written primarily to prevent aggression by one nation against another, in the early 1990s the UN faced a number of simultaneous, serious crises within nations such as Somalia, Haiti, Mozambique, and the former Yugoslavia. The UN mission in Somalia was widely viewed as a failure after the U.S. withdrawal following casualties in the Battle of Mogadishu, and the UN mission to Bosnia faced “worldwide ridicule” for its indecisive and confused mission in the face of ethnic cleansing. In 1994, the UN Assistance Mission for Rwanda failed to intervene in the Rwandan Genocide amid indecision in the Security Council.
Beginning in the last decades of the Cold War, American and European critics of the UN condemned the organization for perceived mismanagement and corruption. In 1984, U.S. President Ronald Reagan withdrew his nation’s funding from UNESCO (the United Nations Educational, Scientific and Cultural Organization, founded 1946) over allegations of mismanagement, followed by Britain and Singapore. Boutros Boutros-Ghali, UN Secretary-General from 1992 to 1996, initiated a reform of the Secretariat, reducing the size of the organization. His successor, Kofi Annan (1997–2006), initiated further management reforms in the face of threats from the United States to withhold its UN dues.
In the late 1990s and 2000s, international interventions authorized by the UN took a wider variety of forms. The UN mission in the Sierra Leone Civil War of 1991–2002 was supplemented by British Royal Marines, and the invasion of Afghanistan in 2001 was overseen by NATO. In 2003, the United States invaded Iraq despite failing to pass a UN Security Council resolution for authorization, prompting a new round of questioning of the organization’s effectiveness. Under the eighth Secretary-General, Ban Ki-moon, the UN has intervened with peacekeepers in crises including the War in Darfur in Sudan and the Kivu conflict in the Democratic Republic of Congo. During this time, the UN has also sent observers and chemical weapons inspectors to Syria during its civil war. In 2013, an internal review of UN actions in the final battles of the Sri Lankan Civil War in 2009 concluded that the organization had suffered “systemic failure.” Additionally, 101 UN personnel died in the 2010 Haiti earthquake, the worst loss of life in the organization’s history.
The Millennium Summit was held in 2000 to discuss the UN’s role in the 21st century. The three-day meeting was the largest gathering of world leaders in history and culminated in the adoption by all member states of the Millennium Development Goals (MDGs), a commitment to achieve international development in areas such as poverty reduction, gender equality, and public health. Progress towards these goals, which were to be met by 2015, was ultimately uneven. The 2005 World Summit reaffirmed the UN’s focus on promoting development, peacekeeping, human rights, and global security. The Sustainable Development Goals were launched in 2015 to succeed the Millennium Development Goals.
In addition to addressing
global challenges, the UN has sought to improve its accountability and democratic
legitimacy by engaging more with civil society and fostering a global
constituency. To enhance transparency, the UN held its first
public debate between candidates for Secretary-General in 2016. On January 1,
2017, Portuguese diplomat António Guterres, who previously served as UN High
Commissioner for Refugees, became the ninth secretary-general. Guterres has
highlighted several key goals for his administration, including an emphasis on
diplomacy for preventing conflicts, more effective peacekeeping efforts, and
streamlining the organization to make it more responsive and versatile in meeting global
needs.
World Trade Organization
The World Trade Organization
(WTO) is an intergovernmental organization that regulates international trade.
The WTO officially commenced on January 1, 1995, under the Marrakesh Agreement
signed by 123 nations on April 15, 1994, replacing the General Agreement on
Tariffs and Trade (GATT), which commenced in 1948. The WTO deals with
regulation of trade between participating countries by providing a framework
for negotiating trade agreements and a dispute resolution process aimed at
enforcing participants’ adherence to WTO agreements, which are signed by
representatives of member governments and ratified by their legislatures. Most
of the issues that the WTO focuses on derive from previous trade negotiations,
especially from the Uruguay Round (1986–1994).
The WTO is attempting to
complete negotiations on the Doha Development Round, which was launched in 2001
to lower trade barriers around the world with an explicit focus on facilitating
the spread of global trade benefits to developing countries. The major obstacles remain
the conflict between developed countries, which favor free trade in industrial goods and
services while retaining protectionist farm subsidies for their domestic agricultural
sectors, and developing countries, which seek fair trade in agricultural products. This
impasse has made it impossible to launch new WTO negotiations beyond the Doha
Development Round. As
a result, there have been an increasing number of bilateral free trade
agreements between governments. The Bali Ministerial Declaration, which for the first
time successfully addressed bureaucratic barriers to commerce, was adopted on December
7, 2013, advancing a small part of the Doha Round agenda. However, as of January 2014,
the future of the Doha Round remains
uncertain.
Regional Integration
Regional integration is a
process by which neighboring states enter into agreements to upgrade
cooperation through common institutions and rules. The objectives of the
agreement could range from economic to political to environmental, although it
has typically taken the form of a political economy initiative where commercial
interests are the focus for achieving broader sociopolitical and security
objectives as defined by national governments. Regional integration has been
organized either via supranational institutional structures, intergovernmental decision-making, or a combination of both.
Past efforts at regional
integration have often focused on removing barriers to free trade within regions,
increasing the free movement of people, labor, goods, and capital across
national borders, reducing the possibility of regional armed conflict (for
example, through confidence- and security-building measures), and adopting
cohesive regional stances on policy issues, such as the environment, climate
change, and migration.
Since the 1980s,
globalization has changed the international economic environment for
regionalism. The renewed academic interest in regionalism, the emergence of new
regional formations, and international trade agreements like the North American
Free Trade Agreement (NAFTA) and the development of a European Single Market
demonstrate the upgraded importance of regional political cooperation and
economic competitiveness. The African Union was launched on July 9, 2002, and a
proposal for a North American region was made in 2005 by the Council on Foreign
Relations’ Independent Task Force on the Future of North America. In Latin
America, however, the proposal to extend NAFTA into a Free Trade Area of the
Americas that would stretch from Alaska to Argentina was ultimately rejected by
nations such as Venezuela, Ecuador, and Bolivia. It has been superseded by the
Union of South American Nations (UNASUR), which was constituted in 2008.
Regionalism contrasts with
regionalization, which is, according to the New Regionalism Approach, the
expression of increased commercial and human transactions in a defined
geographical region. Regionalism refers to an intentional political process,
typically led by governments with similar goals and values in pursuit of the
overall development within a region. Regionalization, however, is simply the
natural tendency to form regions, or the process of forming regions, due to
similarities between states in a given geographical space.
38.6.2: The Environment
The international community’s efforts to combat
climate change have often been frustrated by the economic concerns of member
states.
Learning Objective
Evaluate the efforts made by the global
community to combat climate change
Key Points
- Global warming
and climate change are terms for the observed century-scale rise in the average
temperature of the Earth’s climate system and its related effects.
- Most countries
participate in the United Nations Framework Convention on Climate
Change (UNFCCC), which commits state parties to reduce greenhouse gas (GHG) emissions
based on the premise that global warming exists and human-made CO2 emissions
have caused it.
- The current
state of global warming politics is marked by frustration over a perceived
lack of progress within the UNFCCC, which has existed for 18 years but has
been unable to curb global GHG emissions.
- The Kyoto
Protocol is an international treaty that extends the 1992 UNFCCC based on the
principle of common but differentiated responsibilities, placing the obligation
to reduce current emissions on developed countries on the basis that they are
historically responsible for the current levels of GHGs in the atmosphere.
- Of the 192
parties to the Kyoto Protocol, only 37 countries have binding targets within
the framework of the Protocol, and only seven of the 37 countries have ratified
their obligations within this framework.
-
The Paris Agreement is an agreement within the UNFCCC dealing with GHG
emissions mitigation, adaptation, and finance to be implemented starting in the
year 2020. It is the world’s first comprehensive climate agreement and has been
described as an incentive for and driver of fossil fuel divestment.
Key Terms
- fossil fuel
divestment
-
The removal of investment assets, including
stocks, bonds, and investment funds, from companies involved in extracting
fossil fuels in an attempt to reduce climate change by tackling its ultimate
causes.
- greenhouse gas
-
A gas in the atmosphere that
absorbs and emits radiation within the thermal infrared range. This process is
the fundamental cause of the greenhouse effect, which warms the planet’s
surface to a temperature above what it would be without its atmosphere.
Global warming and climate
change are terms for the observed century-scale rise in the average temperature
of the Earth’s climate system and its related effects. Multiple lines of
evidence show that the climate system is warming. Many of the observed changes
since the 1950s are unprecedented over tens to thousands of years.
UNFCCC
Most countries participate in the United Nations Framework Convention on Climate Change
(UNFCCC), which commits state parties to reduce greenhouse gas (GHG) emissions
based on the premise that (a) global warming exists and (b) human-made CO2
emissions have caused it. The ultimate objective of the Convention is to
prevent dangerous human interference with the climate system. As stated in the
Convention, this requires that GHG concentrations are stabilized in the
atmosphere at a level where ecosystems can adapt naturally to climate change,
food production is not threatened, and economic development can proceed in a
sustainable fashion. The Framework Convention was agreed in 1992, but since
then, global emissions have risen.
The current state of global
warming politics is marked by frustration over a perceived lack of progress
within the UNFCCC, which has existed for 18 years but has been unable to
curb global GHG emissions. Todd Stern—the U.S. climate change envoy—has expressed
the challenges with the UNFCCC process as follows, “Climate change is not
a conventional environmental issue … It implicates virtually every aspect of
a state’s economy, so it makes countries nervous about growth and development.
This is an economic issue every bit as it is an environmental one.” He
went on to explain that the UNFCCC as a multilateral body can be an inefficient
system for enacting international policy. Because the framework includes
over 190 countries and negotiations are governed by consensus, small
groups of countries can often block progress. As a result, some have argued
that perhaps the consensus-driven model could be replaced with a majority vote
model. However, such a model would likely meet resistance from countries unwilling to
ratify any global agreement that might be governed by majority vote.
Kyoto Protocol
The Kyoto Protocol is an
international treaty that extends the 1992 UNFCCC. The Kyoto Protocol was
adopted in Kyoto, Japan, on December 11, 1997, and entered into force on
February 16, 2005. There are currently 192 parties to the Protocol. The
Protocol is based on the principle of common but differentiated
responsibilities: it puts the obligation to reduce current emissions on
developed countries on the basis that they are historically responsible for the
current levels of GHGs in the atmosphere. This is justified on the basis that
the developed world’s emissions have contributed most to the accumulation of GHGs
in the atmosphere, per-capita emissions (i.e., emissions per head of
population) were still relatively low in developing countries, and the
emissions of developing countries would grow to meet their development needs.
The Protocol’s first
commitment period started in 2008 and ended in 2012. A second commitment period
was agreed on in 2012, known as the Doha Amendment to the protocol, in which 37
countries have binding targets: Australia, the European Union (and its 28
member states), Belarus, Iceland, Kazakhstan, Liechtenstein, Norway,
Switzerland, and Ukraine. Belarus, Kazakhstan, and Ukraine have stated that
they may withdraw from the Protocol or not put into legal force the Amendment
with second-round targets. Japan, New Zealand, and Russia have participated in
Kyoto’s first round but have not taken on new targets in the second commitment
period. Other developed countries without second-round targets are Canada
(which withdrew from the Kyoto Protocol in 2012) and the United States (which
has not ratified the Protocol). As of July 2016, 66 states have accepted the
Doha Amendment, while entry into force requires the acceptance of 144 states.
Of the 37 countries with binding commitments, seven have ratified.
Paris Agreement
The Paris Agreement is an
agreement within the UNFCCC dealing with GHG emissions mitigation, adaptation,
and finance to be implemented starting in the year 2020. The language of the
agreement was negotiated by representatives of 195 countries at the 21st
Conference of the Parties of the UNFCCC in Paris and adopted by consensus on
December 12, 2015. It was opened for signature on April 22, 2016, (Earth Day) at
a ceremony in New York. As of December 2016, 194 UNFCCC members have signed the
treaty, 136 of which have ratified it. After several European Union states ratified
the agreement in October 2016, the countries that had ratified it collectively produced
a large enough share of the world's GHG emissions for the agreement to
enter into force. The agreement went into effect on November 4, 2016.
The aim of the convention is
described in Article 2. It outlines a goal of “enhancing the implementation” of
the UNFCCC via the following means:
- Holding increases in global
average temperatures to below 2 °C above pre-industrial levels while pursuing
efforts to limit these increases to 1.5 °C above pre-industrial levels
-
Increasing adaptability to
the adverse impacts of climate change while fostering climate resilience and
low GHG emissions in a manner that does not endanger food production
-
Encouraging finance flows
that are consistent with low GHG emissions and climate-resilient development.
The Paris Agreement is the world’s first comprehensive climate agreement
and has been described as an incentive for and driver of fossil fuel
divestment.
38.6.3: Nuclear Proliferation
Five countries are recognized as nuclear weapons states and four other countries have acquired or are presumed to have
acquired nuclear weapons after the passage of the Nuclear Non-Proliferation
Treaty.
Learning Objective
List the countries that currently control
nuclear weapons
Key Points
Key Term
- Nuclear proliferation
-
The spread of nuclear weapons, fissionable
material, and weapons-applicable nuclear technology and information.
Nuclear proliferation is the
spread of nuclear weapons, fissionable material, and weapons-applicable nuclear
technology and information to nations not recognized as “Nuclear Weapon States”
by the Treaty on the Non-Proliferation of Nuclear Weapons, also known as the
Nuclear Non-Proliferation Treaty (NPT). Proliferation has been opposed by many
nations with and without nuclear weapons, the governments of which fear that as
more countries obtain nuclear weapons, the possibility of nuclear war (up to
and including the so-called “countervalue” targeting of civilians with nuclear
weapons) will also increase, leading to the destabilization of international or
regional relations and potential infringements upon the national sovereignty of
states.
Four countries besides the
five recognized nuclear weapons states have acquired, or are presumed to have
acquired, nuclear weapons: India, Pakistan, North Korea, and Israel. None of
these four is a party to the NPT, although North Korea acceded to the NPT in
1985, withdrew in 2003, and went on to conduct announced nuclear tests in 2006,
2009, 2013, and 2016. One critique of the NPT
is that it is discriminatory in recognizing as nuclear weapon states only those
countries that tested nuclear weapons before 1968 and requiring all other
states joining the treaty to forswear nuclear weapons.
Research into the
development of nuclear weapons was undertaken during World War II by the United
States (in cooperation with the United Kingdom and Canada), Germany, Japan, and
the USSR. The United States was the first and is the only country to have used
a nuclear weapon in war, deploying two bombs against Japan in August 1945. Following
their WWII losses, Germany and Japan ceased involvement in any nuclear
weapon research. In August 1949, the USSR tested a nuclear weapon. The United
Kingdom tested a nuclear weapon in October 1952. France developed a nuclear
weapon in 1960. The People’s Republic of China detonated a nuclear weapon in
1964. India exploded a nuclear device in 1974, and Pakistan conducted a series
of nuclear weapon tests in May 1998, following tests by India earlier that
month. In 2006, North Korea conducted its first nuclear test.
Non-proliferation Efforts
Early efforts to prevent
nuclear proliferation involved intense government secrecy, the wartime
acquisition of known uranium stores (the Combined Development Trust), and at
times even outright sabotage—such as the bombing of a heavy-water facility
thought to be used for a German nuclear program. None of these efforts were
explicitly public because the weapon developments themselves were kept secret until
the bombing of Hiroshima. Earnest international efforts to promote nuclear
non-proliferation began soon after World War II when the Truman Administration
proposed the Baruch Plan of 1946, named after Bernard Baruch, America’s first
representative to the United Nations Atomic Energy Commission (UNAEC). The
Baruch Plan, which drew heavily from the Acheson–Lilienthal Report of 1946,
proposed the verifiable dismantlement and destruction of the U.S. nuclear arsenal after all
governments had cooperated successfully to accomplish two things:
- the establishment of an international
atomic development authority, which would actually own and control all
military-applicable nuclear materials and activities, and
-
the creation of a system
of automatic sanctions, which not even the UN Security Council could veto, and
which would proportionately punish states attempting to acquire the capability
to make nuclear weapons or fissile material.
Baruch’s plea for the destruction
of nuclear weapons invoked basic moral and religious intuitions. In one part of
his address to the UN, Baruch said, “Behind the black portent of the new
atomic age lies a hope which, seized upon with faith, can work out our
salvation. If we fail, then we have damned every man to be the slave of Fear.
Let us not deceive ourselves. We must elect World Peace or World
Destruction…. We must answer the world’s longing for peace and
security.” With this remark, Baruch helped launch the field of nuclear
ethics, to which many policy experts and scholars have contributed.
Although the Baruch Plan
enjoyed wide international support, it failed to emerge from the UNAEC because
the Soviet Union planned to veto it in the Security Council. Still, it remained
official American policy until 1953, when President Eisenhower made his Atoms for
Peace proposal before the UN General Assembly. Eisenhower’s proposal led
eventually to the creation of the International Atomic Energy Agency (IAEA) in
1957. Under the Atoms for Peace program thousands of scientists from around the
world were educated in nuclear science and then dispatched home, where many
later pursued secret weapons programs in their own countries. Since its founding
by the United Nations in 1957, the IAEA has promoted two sometimes
contradictory missions: on the one hand, the Agency seeks to promote and spread
internationally the use of civilian nuclear energy; on the other hand, it seeks
to prevent, or at least detect, the diversion of civilian nuclear energy to
nuclear weapons, nuclear explosive devices, or purposes unknown. The IAEA now
operates a safeguards system as specified under Article III of the NPT, which
aims to ensure that civil stocks of uranium and plutonium, as well as facilities
and technologies associated with these nuclear materials, are used only for
peaceful purposes and do not contribute in any way to proliferation or nuclear
weapons programs. It is often argued that the proliferation of nuclear weapons to
additional states has been prevented by the extension of assurances and mutual
defense treaties to those states by nuclear powers, but other factors such as
national prestige or specific historical experiences also play a part in
hastening or stopping nuclear proliferation.
Efforts to conclude an
international agreement to limit the spread of nuclear weapons did not begin
until the early 1960s, after four nations (the United States, the Soviet Union,
the United Kingdom, and France) had acquired nuclear weapons. Although these
efforts stalled in the early 1960s, they renewed once again in 1964 after
China detonated a nuclear weapon. In 1968, governments represented at the
Eighteen Nation Disarmament Committee finished negotiations on the text of
the NPT. In June 1968, the UN General Assembly endorsed the NPT with General
Assembly Resolution 2373 (XXII), and in July 1968, the NPT opened for signature
in Washington, D.C., London, and Moscow. The NPT entered into force in March
1970.
Since the mid-1970s, the primary focus of non-proliferation efforts has
been to maintain and even increase international control over the fissile
material and specialized technologies necessary to build such devices, because
these are the most difficult and expensive parts of a nuclear weapons program.
The main materials whose generation and distribution is controlled are highly
enriched uranium and plutonium. Other than the acquisition of these special
materials, the scientific and technical means for constructing rudimentary but
working nuclear explosive devices are considered to be within the reach of most, if not all, industrialized nations.
38.6.4: The Developing World
Although developing countries’ economies have tended
to demonstrate higher growth rates than those of developed countries, they tend
to lag behind in terms of social welfare targets.
Learning Objective
Describe some of the challenges faced by
developing countries
Key Points
- A developing
country is
a nation or a sovereign state with a less developed industrial base and low
Human Development Index (HDI) compared to other countries.
- Economic
development originated as a global concern in the post-World War II period of
reconstruction. It is related to the concept of international aid, but distinct
from disaster relief and humanitarian aid.
- International
development projects may consist of a single transformative project to address
a specific problem or a series of projects targeted at several aspects of
society.
- The launch of
the Marshall Plan was an important step in setting the agenda for international
development, combining humanitarian goals with the creation of a political and
economic bloc in Europe allied with the U.S.
- In terms of
international development practice on the ground, the concept of community
development has been influential since the 1950s.
- By the late
1960s, dependency theory arose, analyzing the evolving relationship between the
West and the Third World.
- In the 1970s and
early 1980s, the modernists at the World Bank and IMF adopted neo-liberal ideas
of economists such as Milton Friedman or Bela Balassa, implemented
in the form of structural adjustment programs, while their opponents were
promoting various bottom-up approaches.
- By the 1990s,
some writers and academics felt an impasse had been reached within development
theory, with some imagining a post-development era.
- While some
critics have been debating the end of development, others have
predicted a development revival as part of the War on Terrorism.
Key Terms
- modernization theory
-
A theory used to explain the process of
modernization within societies using a model of progressive transition from
pre-modern or traditional societies to modern society. The theory
assumes that with assistance, so-called traditional societies can be developed
in the same manner as currently developed countries.
- appropriate technology
-
An ideological movement and its manifestations
encompassing technological choice and application that is small-scale,
decentralized, labor-intensive, energy-efficient, environmentally sound, and
locally autonomous.
- dependency theory
-
The notion that resources flow from a periphery
of poor and underdeveloped states to a core of wealthy states, enriching
the latter at the expense of the former.
A developing country is a nation or a
sovereign state with a less developed industrial base and low Human Development
Index (HDI) relative to other countries. There are no universally agreed-upon
criteria for what makes a country developing versus developed and which
countries fit these two categories, although there are general reference points
such as a nation’s GDP per capita compared to other nations. In general, the
term “developing” describes a currently observed situation and not a
dynamic or expected direction of progress. Since the late 1990s, developing
countries have tended to demonstrate higher growth rates than the developed
ones.
History and Theory
Economic development
originated as a global concern in the post-World War II period of
reconstruction. In President Harry Truman’s 1949 inaugural speech, the
development of undeveloped areas was characterized as a priority for the West. The
origins of this priority can be attributed to:
- the need for reconstruction
in the immediate aftermath of World War II;
-
the legacy of colonialism in
the context of the establishment of a number of free trade policies and a
rapidly globalizing world;
-
the start of the Cold War
and the desire of the U.S. and its allies to prevent satellite states from
drifting towards communism.
The launch of the Marshall
Plan was an important step in setting the agenda for international development,
combining humanitarian goals with the creation of a political and economic bloc
in Europe allied to the U.S. This agenda was given conceptual support
during the 1950s in the form of modernization theory as espoused by Walt Rostow
and other American economists. Changes in the developed world’s approach to
international development were further necessitated by the gradual collapse of
Western Europe’s empires over the following decades because newly independent
ex-colonies no longer received support in return for their subordinate role to
an imperial power.
By the late 1960s,
dependency theory arose, analyzing the evolving relationship between the West
and the Third World. Dependency theorists argue that poor countries have
sometimes experienced economic growth with little or no economic development
initiatives, such as in cases where they have functioned mainly as
resource-providers to wealthy industrialized countries. As such, international
development at its core has been geared towards colonies that gained
independence with the understanding that newly independent states should be
constructed so that the inhabitants enjoy freedom from poverty, hunger, and
insecurity.
In the 1970s and early
1980s, the modernists at the World Bank and IMF adopted the neo-liberal ideas of
economists such as Milton Friedman or Bela Balassa, implemented in
the form of structural adjustment programs, while their opponents promoted various bottom-up approaches ranging from civil disobedience and critical
consciousness to appropriate technology and participatory rural appraisal.
By the 1990s, some writers
and academics felt an impasse had been reached within development theory, with
some imagining a post-development era. The Cold War had ended, capitalism had
become the dominant mode of social organization, and UN statistics showed that
living standards around the world had improved significantly over the previous
40 years. Nevertheless, a large portion of the world’s population was still
living in poverty, their governments were crippled by debt, and concerns about
the environmental impact of globalization were rising. In response to the
impasse, the rhetoric of development has since focused on the issue of poverty,
with the meta-narrative of modernization replaced by shorter term visions
embodied by the Millennium Development Goals and the Human Development approach,
which measures human development in capabilities achieved. At the same time,
some development agencies are exploring opportunities for public-private
partnerships and promoting the idea of corporate social responsibility with the
apparent aim of integrating international development with the process of
economic globalization.
Critics have suggested that
such integration has always been part of the underlying agenda of development. They
argue that poverty can be equated with powerlessness, and that the way to
overcome poverty is through emancipatory social movements and civil society,
not paternalistic aid programs or corporate charity. This approach is embraced
by organizations such as the Gamelan Council, which seeks to empower
entrepreneurs through micro-finance initiatives, for example. While some critics
have been debating the end of development, others have predicted a
development revival as part of the War on Terrorism. To date, however, there is
limited evidence to support the notion that aid budgets are being used to
counter Islamic fundamentalism in the same way that they were used 40 years ago
to counter communism.
Policy
International development is
related to the concept of international aid, but distinct from disaster relief
and humanitarian aid. While these two forms of international support seek to
alleviate some of the problems associated with a lack of development, they are
most often short-term fixes — not necessarily long-term solutions.
International development, on the other hand, seeks to implement long-term
solutions to problems by helping developing countries build the capacity
needed to provide sustainable solutions to their problems. A
truly sustainable development project is able to carry on indefinitely
with no further international involvement or support, whether it be financial
or otherwise.
In its broadest sense,
policies of economic development encompass three major areas:
- Governments undertaking broad economic objectives such as price stability, high employment, and
sustainable growth. Such efforts include monetary and fiscal policies,
regulation of financial institutions, trade, and tax policies.
-
Programs that provide
infrastructure and services such as highways, parks, affordable housing, crime
prevention, and K–12 education.
-
Job creation and retention
through specific efforts in business finance, marketing, neighborhood
development, workforce development, small business development, business
retention and expansion, technology transfer, and real estate development. This
third category is a primary focus of economic development professionals.
International development
projects may consist of a single transformative project to address a specific
problem or a series of projects targeted at several aspects of society.
Such projects involve problem solving that reflects the unique
culture, politics, geography, and economy of a region. More recently, the focus
in this field has been projects that aim towards empowering women, building
local economies, and caring for the environment. In the context of human
development, projects usually encompass themes of foreign aid, governance,
healthcare, education, poverty reduction, gender equality, disaster
preparedness, infrastructure, economics, human rights, the environment, and
issues associated with these.
In terms of international
development practice on the ground, the concept of community development has
been influential since the 1950s. The United Nations defines
community development as “a process where community members come together to
take collective action and generate solutions to common problems”. It is a
broad term given to practices aiming to build stronger and more resilient local
communities. Community development is also a professional discipline and is
defined by the International Association for Community Development (IACD), the
global network of community development practitioners and scholars, as “a
practice-based profession and an academic discipline that promotes
participative democracy, sustainable development, rights, economic opportunity,
equality and social justice, through the organization, education and
empowerment of people within their communities, whether these be of locality,
identity or interest, in urban and rural settings”. Community development
practitioners, using a myriad of job titles, are employed by governmental and
non-governmental organizations to build the capacity of vulnerable people to
engage in development projects and programs. According to the IACD, there are
national networks of community development practitioners in many countries,
several hundred graduate programs training practitioners, and an extensive
canon of research and scholarship, including the international Community
Development Journal.
The promotion of regional
clusters and a thriving metropolitan economy has grown in importance among
economic development professionals. In today’s global landscape, location is
vitally important and becomes key to obtaining and maintaining competitive
advantage. International trade and exchange rates are also key issues in economic
development. Currencies are often either undervalued or overvalued, resulting
in trade surpluses or deficits.
International Economic
Development Council
With more than 20,000 professional economic developers employed
worldwide in this highly specialized industry, the International Economic
Development Council (IEDC) headquartered in Washington, D.C. is a non-profit
organization dedicated to helping economic developers do their jobs more
effectively while raising the profile of the profession. With over 4,500
members across the U.S. and internationally, IEDC membership represents the
entire range of the profession ranging from regional, state, local, rural,
urban, and international economic development organizations to chambers of commerce, technology development agencies, utility companies,
educational institutions, consultants, and redevelopment authorities. Many
individual states also have associations comprising economic development
professionals who work closely with IEDC.
38.6.5: Reactions against Globalization
The uneven spread of globalization’s
benefits caused an anti-globalization movement to rise at the end of the 20th
century.
Learning Objective
Outline some of the criticisms of globalization
Key Points
Key Term
- xenophobia
-
The fear of that which is perceived to be foreign or strange.
Reactions to processes
contributing to globalization have varied widely with a history as long as
extraterritorial contact and trade. Proponents of economic growth, expansion,
and development generally view globalizing processes as desirable or
necessary to the well-being of human society. Not everybody affected by
globalization believes there are benefits to its spread, however. Many
individuals within the anti-globalization movement have witnessed unrest within
their home communities and the world at large and have questioned the basis for
continuing the trend, citing doubts about the sustainability of long-term and continuous
economic expansion, the social structural inequality caused by these processes,
and the colonial, imperialistic, or hegemonic ethnocentrism that
underlies such processes. Critics argue that globalization requires nations to
give up their political, economic, and cultural sovereignty and adapt to
Western ways.
Xenophobia can and has
manifested itself in many ways as a result of globalization, involving the
relations and perceptions of an in-group towards an out-group, including a fear
of losing identity, suspicion of activities, aggression, and the desire to
eliminate another group’s presence to secure a presumed purity. While globalization
has eased the flow of international trade and contributed to greater
efficiency within market economies, it has also been partially to blame for
global economic crises. Additionally, globalization is not simply an economic
project–it also heavily influences the world environmentally, politically, and
socially. While the forces of globalization have led to the spread of Western-style
democracy, this has been accompanied by an increase in inter-ethnic tension and
violence as free market economic policies combine with democratic processes of
universal suffrage, as well as by an escalation in militarization to impose democratic
principles as a means of conflict resolution.
Public Opinion
A 2005 study by Peer Fiss
and Paul Hirsch found a large increase in articles negative towards
globalization in the preceding years. In 1998, negative articles outpaced positive
articles by two to one, and the number of newspaper articles showing negative framing
rose from about 10% of the total in 1991 to 55% of the total in 1999. This increase
occurred during a period when the total number of articles concerning globalization
nearly doubled. In 2008, Greg Ip claimed this rise in opposition to globalization
could be explained, at least in part, by economic self-interest.
A number of international
polls have shown that residents of Africa and Asia tend to view globalization
more favorably than residents of Europe or North America. In Africa, a Gallup
poll found that 70% of the population views globalization favorably. The BBC
found that 50% of people believed that economic globalization was proceeding
too rapidly, while 35% believed it was proceeding too slowly. In 2004, Philip
Gordon stated that “a clear majority of Europeans believe that globalization
can enrich their lives, while believing the European Union can help them take
advantage of globalization’s benefits while shielding them from its negative
effects”. The main opposition within the EU consisted of socialists,
environmental groups, and nationalists. Residents of the EU did not appear to
feel threatened by globalization in 2004. The EU job market was more stable and
workers were less likely to accept wage/benefit cuts. Social spending was much
higher than in the U.S. In a 2007 Danish poll, 76% of respondents said that
globalization was a good thing. Yet in the 2016 referendum on whether the United
Kingdom should leave or remain in the EU, a majority of British voters opted to
withdraw.
Fiss and Hirsch also surveyed
U.S. opinion in 1993 and found that more than 40% of
respondents were unfamiliar with the concept of globalization. When the survey
was repeated in 1998, 89% of the respondents had a polarized view of
globalization as being either good or bad. Polarization increased dramatically
after the establishment of the WTO in 1995; this event and subsequent protests
led to a larger scale anti-globalization movement. Initially, college-educated
workers were likely to support globalization. Less educated workers, who were more
likely to compete with immigrants and workers in developing countries, tended
to be opponents. The situation changed after the financial crisis of 2007.
According to a 1997 poll, 58% of college graduates said globalization had been
good for the U.S. By 2008 only 33% thought it was good. Respondents with high
school education also became more opposed.
Economics
The literature analyzing the
economics of free trade is rich with information on its theoretical and empirical effects. Though it creates winners and losers,
the broad consensus among economists is that free trade is a large and
unambiguous net gain for society. However, some opponents of globalization see
the phenomenon as a promotion of corporate interests. Many claim that the
increasing autonomy and strength of corporate entities shapes the political
policies of countries, crowding out the moral claims of poor and working
classes as well as environmental concerns. For example, globalization allows
corporations to outsource manufacturing and service jobs from high-cost
locations, creating economic opportunities with the most competitive wages and
worker benefits, which critics say disadvantages poorer countries.
While it is true that free
trade encourages globalization among countries, some countries try to protect
their domestic suppliers. The main export of poorer countries is usually
agricultural productions. Larger countries often subsidize their farmers (e.g.,
the EU’s Common Agricultural Policy), which lowers the market price for foreign
crops. Thus, globalization can be described as an uneven process due to the
global integration of some groups alongside the marginalization or exclusion of
others.
Additionally, the global
economic crisis of 2007-2008, the worst financial crisis since the Great
Depression, has been attributed partially to neo-liberal globalization. Although
globalization promised an improved standard of living, critics argue that it has
actually worsened the financial situation of many households and made the financial
crisis global through the influence of international financial institutions such as the
World Bank. Critics also contend that globalization confines development and
civilization to a path that leads only to a Western, capitalistic system, and that
because of political and structural differences among countries, the implementation
of globalization has been detrimental for many of them.
Politics
Globalization has fueled the
rise of transnational corporations, and their power has vaulted to the point
where they can now rival many nation states. Of the world's 100 largest
economies, 42 are corporations. Many of these transnational
corporations now hold sway over nation states, as their fates are
intertwined with those of the nations where they are located. Because they do
business globally, transnational corporations wield considerable influence in many
nation states and could use that influence to press for higher wages and better
working conditions in Third World sweatshops.
In the process of
implementing globalization in developing countries, the creation of winners and
losers is often predetermined. Multinational corporations typically benefit from
globalization while poor, indigenous locals are negatively affected. Globalization
can be seen as a new form of colonization, as economic inequality and the rise
in unemployment have followed its implementation. Globalization has been
criticized for benefiting those who are already large and powerful at the expense,
and growing vulnerability, of countries' indigenous populations. Furthermore,
critics hold that globalization is non-democratic, as it is enforced through top-down methods.
Globalization requires a
country to give up some sovereignty for the sake of executing Western ideals. As a result, sovereignty is safest with those whose views and
ideals are being implemented. In the name of free markets and with the promise
of an improved standard of living, countries give up their political and social
powers to international organizations. Thus, globalization carries the
potential to raise the power of international organizations at the expense of
local state institutions, which must in turn diminish in influence.
Environmental Impacts
International trade in
petroleum products has expanded significantly through globalization, with a
corresponding increase in activity within the petroleum industry to meet the
ever-increasing demand. This expansion gives rise to further
environmental pollution. Petroleum is toxic to almost all forms of life, and its
extraction fuels climate change while causing air pollution, water pollution,
noise pollution, land degradation, and erosion. As international commerce
develops new trade routes, markets, and products, the spread of invasive
species is also facilitated. On account of the development of larger and faster
forms of transport, commercial trade propels rising annual and cumulative rates
of invasion.
37.1: European Unification
37.1.1: The European Coal and Steel Community
The European Coal and Steel Community (ECSC) was born
from the desire to prevent future European conflicts following the
devastation of World War II.
Learning Objective
Connect the establishment of the ECSC to WWII.
Key Points
Key Term
- supranationalism
-
A type of multinational political union where
negotiated power is delegated to an authority by governments of member states.
The European Coal and Steel
Community (ECSC) was an international organization unifying certain continental
European countries after World War II. It was formally established in 1951 by
the Treaty of Paris, signed by Belgium, France, West Germany, Italy,
the Netherlands, and Luxembourg. The ECSC was the first international
organization based on the principles of supranationalism, and would
ultimately pave the way for the European Union.
History
The ECSC was first proposed
by French foreign minister Robert Schuman on May 9, 1950, to prevent
further war between France and Germany. His declared aim was to make future
wars among the European nations unthinkable due to higher levels of regional
integration, with the ECSC as the first step towards that integration. The treaty would create a common market for coal and steel among its member states,
which served to neutralize competition between European nations over natural
resources used for wartime mobilization, particularly in the Ruhr. The Schuman
Declaration that created the ECSC had several distinct aims.
Political Pressures
In West Germany, Schuman
kept close contact with the new generation of democratic politicians. Karl
Arnold, the Minister President of North Rhine-Westphalia, the province that
included the coal and steel producing Ruhr, was initially spokesman for German
foreign affairs. He gave a number of speeches and broadcasts on a supranational
coal and steel community at the same time as Schuman began to propose the
Community in 1948 and 1949. The Social Democratic Party of Germany (German:
Sozialdemokratische Partei Deutschlands, SPD), in spite of support from unions
and other socialists in Europe, decided it would oppose the Schuman plan. Kurt
Schumacher’s personal distrust of France, capitalism, and Konrad Adenauer
aside, he claimed that a focus on integration would override the SPD’s prime
objective of German reunification and thus empower ultra-nationalist and
Communist movements in democratic countries. He also thought the ECSC would end
any hopes of nationalizing the steel industry and encourage the growth of
cartel activity throughout a newly conservative-leaning Europe. Younger members
of the party like Carlo Schmid were, however, in favor of the Community and
pointed to the long tradition of socialist support for a supranational movement.
In France, Schuman gained strong political and intellectual support from all sectors, including
many non-communist parties. Charles de Gaulle, then out of power, had
been an early supporter of linking European economies on French terms and spoke in 1945 of a “European confederation” that would exploit the
resources of the Ruhr. However, he opposed the ECSC, deriding
it as an unsatisfactory approach to European unity. He also considered the
French government’s approach to integration too weak and feared the ECSC would
be hijacked by other nation’s concerns. De Gaulle felt that the ECSC had
insufficient supranational authority because the Assembly was not ratified by a
European referendum, and he did not accept Raymond Aron’s contention that the
ECSC was intended as a movement away from U.S. domination. Consequently, de
Gaulle and his followers in the Rally of the French People (RPF) voted against
ratification in the lower house of the French Parliament.
Despite these reservations
and attacks from the extreme left, the ECSC found substantial public support. It
gained strong majority votes in all 11 chambers of the parliaments of the six
member states, as well as approval among associations and European public
opinion. The 100-article Treaty of Paris, which established the ECSC, was
signed on April 18, 1951, by “the inner six”: France, West Germany,
Italy, Belgium, the Netherlands, and Luxembourg. On August 11, 1952, the United
States was the first non-ECSC member to recognize the Community and stated it
would now deal with the ECSC on coal and steel matters, establishing its
delegation in Brussels.
First Institutions
The ECSC was run by four
institutions: a High Authority composed of independent appointees, a Common Assembly
composed of national parliamentarians, a Special Council composed of national
ministers, and a Court of Justice. These would ultimately form the blueprint
for today’s European Commission, European Parliament, the Council of the
European Union, and the European Court of Justice.
The High Authority (now the European Commission) was the first-ever supranational body that served
as the Community’s executive. The President was elected by the eight other
members. The nine members were appointed by member states (two
for the larger three states, one for the smaller three), but represented
the common interest rather than their own states’ concerns. The member states’
governments were represented by the Council of Ministers, the presidency of
which rotated between each state every three months in alphabetical order. The
Council of Ministers’ task was to harmonize the work of national governments
with the acts of the High Authority and issue opinions on the work of
the Authority when needed.
The Common Assembly, now the European Parliament, was composed
of 78 representatives. The Assembly exercised supervisory powers over the
executive. The representatives were to be national MPs elected by their
Parliaments to the Assembly, or directly elected. The Assembly was intended as
a democratic counter-weight and check to the High Authority. It had formal powers
to sack the High Authority following investigations of abuse.
37.1.2: The European Economic Community
The European Economic Community blossomed from
the desire to further regional integration following the successful
establishment of the European Coal and Steel Community.
Learning Objective
Describe the transition from the ECSC to the EEC
Key Points
- The European
Economic Community (EEC) was a regional organization that aimed to integrate
its member states economically. It was created by the Treaty of Rome of 1957.
-
Some important
accomplishments of the EEC included the establishment in 1962 of common price
levels for agricultural products and the removal of internal tariffs between
member nations on certain products in 1968.
-
Disagreements
arose between member states regarding infringements of sovereignty and financing
of the Common Agricultural Policy (CAP).
-
On July 1, 1967,
the Merger Treaty came into force, combining the institutions of the ECSC and EURATOM
into the EEC. Collectively, they were known as the European Communities.
-
The 1960s saw the first attempts at enlargement, which over time led to
a desire to increase areas of cooperation. As a result, the Single European Act
was signed by foreign ministers in February
1986.
Key Terms
- sovereignty
-
The full right and power of a governing body to
govern itself without interference from outside sources or bodies. In
political theory, sovereignty is a substantive term designating supreme
authority over some polity. It is a basic principle underlying the dominant
Westphalian model of state foundation.
- supranationalism
-
A type of multinational political union in which negotiated power is delegated to an authority by governments of member states.
The European Economic
Community (EEC) was a regional organization that aimed to integrate its member
states economically. It was created by the Treaty of Rome of 1957. Upon the
formation of the European Union (EU) in 1993, the EEC was incorporated and
renamed as the European Community (EC). In 2009, the EC’s institutions were
absorbed into the EU’s wider framework and the community ceased to exist.
Background
In 1951, the Treaty of Paris
was signed, creating the European Coal and Steel Community (ECSC). This was an
international community based on supranationalism and international law,
designed to facilitate European economic growth and prevent future conflicts by
integrating its members. With the aim of furthering regional integration, two
additional communities were proposed: a European Defence Community and a
European Political Community. While the treaty for the latter was drawn
up by the Common Assembly, the ECSC parliamentary chamber, the proposed defense
community was rejected by the French Parliament. ECSC President Jean Monnet, a
leading figure behind the communities, resigned from the High Authority in
protest and began work on alternative communities based on economic
integration rather than political integration.
After the Messina Conference
in 1955, Paul Henri Spaak was given the task of preparing a report on the idea
of a customs union. Together with the Ohlin Report, the so-called Spaak Report
would provide the basis for the Treaty of Rome. In 1956, Spaak led
the Intergovernmental Conference on the Common Market and Euratom at the Val
Duchesse castle. The conference led to the signature on March 25, 1957, of the
Treaty of Rome, establishing a European Economic Community.
Creation and Early Years
The resulting communities
were the European Economic Community (EEC) and the European Atomic Energy
Community (EURATOM, or sometimes EAEC). The EEC created a customs union while
EURATOM promoted cooperation in the sphere of nuclear power. One of the first
important accomplishments of the EEC was the establishment in 1962 of common
price levels for agricultural products. In 1968, internal tariffs between
member nations were removed on certain products. The formation of these
communities was met with protest due to a fear that state sovereignty
would be infringed. Another crisis was triggered in regards to proposals for
the financing of the Common Agricultural Policy (CAP), which came into force in
1962. The transitional period whereby decisions were made by unanimity had come
to an end, and majority voting in the Council had taken effect. Then-French
President Charles de Gaulle’s opposition to supranationalism and fear of the
other members challenging the CAP led to an empty-chair policy in which French
representatives were withdrawn from the European institutions until the French
veto was reinstated. Eventually, the Luxembourg Compromise of January 29, 1966,
instituted a gentlemen’s agreement permitting members to use a veto on issues
of national interest.
On July 1, 1967, the Merger
Treaty came into force, combining the institutions of the ECSC and EURATOM into
that of the EEC. Collectively, they were known as the European Communities. The
Communities still had independent personalities although they were increasingly
integrated. Future treaties granted the Community new powers beyond simple
economic matters, edging closer to the goal of political integration and a
peaceful, united Europe.
Enlargement and Elections
The 1960s saw the first
attempts at enlargement. In 1961, Denmark, Ireland, Norway, and the United
Kingdom applied to join the three Communities. However, President Charles de
Gaulle saw British membership as a Trojan horse for U.S. influence and vetoed
membership, and the applications of all four countries were suspended. The four
countries resubmitted their applications on May 11, 1967, and with Georges
Pompidou succeeding Charles de Gaulle as French president in 1969, the veto was
lifted. Negotiations began in 1970 under the pro-European government of UK
Prime Minister Sir Edward Heath, who had to deal with disagreements relating to
the CAP and the UK’s relationship with the Commonwealth of Nations.
Nevertheless, two years later the accession treaties were signed and Denmark,
Ireland, and the UK joined the Community effective January 1, 1973. The
Norwegian people, however, rejected membership in a referendum on September 25, 1972.
The Treaties of Rome stated that the European Parliament must be directly elected; however, this
required the Council to agree on a common voting system first. The Council
procrastinated on the issue and the Parliament remained appointed. Charles de
Gaulle was particularly active in blocking the development of the Parliament,
with it only being granted budgetary powers following his resignation. Parliament
pressured for agreement, and on September 20, 1976, the Council agreed on part of
the necessary instruments for election, deferring details on electoral systems, which remain varied to this day. In June 1979, during the tenure of President
Jenkins, European Parliamentary elections were held. The new Parliament,
galvanized by a direct election and new powers, started working full-time and
became more active than previous assemblies.
Towards Maastricht
Greece applied to join the Community on June 12, 1975, following the
restoration of its democracy. Greece joined the Community effective January 1,
1981. Similarly, and after their own democratic restorations, Spain and
Portugal applied to the communities in 1977 and joined together on January 1, 1986.
In 1987, Turkey formally applied to join the Community and began the longest
application process for any country. With the prospect of further enlargement
and a desire to increase areas of cooperation, the Single European Act was
signed by foreign ministers in February 1986.
This single document dealt with the reform of institutions, extension of
powers, foreign policy cooperation, and the single European market. It came
into force on July 1, 1987. The act was followed by work on what would become
the Maastricht Treaty, which was agreed to on December 10, 1991, signed the
following year, and came into force on November 1, 1993, establishing the
European Union.
37.1.3: The European Union
Although
the European Union was formed to increase cooperation among member
states, the desire to retain national control over certain policy areas
made some institutions more intergovernmental than supranational
in nature.
Learning Objective
Compare
the European Union to its predecessors
Key Terms
- Schengen Area
-
An area
composed of 26 European states that have officially abolished passport and any
other type of border control at their mutual borders. The area mostly functions
as a single country for international travel purposes with a common visa
policy.
- supranational
-
A
type of multinational political union where negotiated power is delegated to an
authority by governments of member states.
Examples
- The European Union (EU) is a politico-economic
union of 28 member states located primarily in Europe.
- The EU operates through a hybrid system of
supranational and intergovernmental decision-making.
- The EU traces its origins from the European Coal
and Steel Community (ECSC) and the European Economic Community (EEC), formed by
the Inner Six countries in 1951 and 1958, respectively.
-
The European Union was formally established when
the Maastricht Treaty came into force on November 1, 1993. The treaty
established the three pillars of the European Union: the European Communities
pillar, which included the European Community (EC), the ECSC, and the EURATOM; the
Common Foreign and Security Policy (CFSP) pillar; and the Justice and Home
Affairs (JHA) pillar.
- The creation of the pillar system was the result
of some member states wanting to extend the EEC while others felt those areas
were too critical to their sovereignty to be managed by a supranational
mechanism.
-
The Maastricht, or convergence, criteria
established minimum requirements for EU member states to enter the third stage
of European Economic and Monetary Union (EMU) and adopt the euro as their currency.
The four criteria impose controls over inflation, public debt and the public
deficit, exchange rate stability, and the convergence of interest rates.
- On December 1, 2009, the Lisbon Treaty entered
into force and reformed many aspects of the EU, including its legal structure.
- During the 2010s, the cohesion of the EU has
been tested by several issues, including a debt crisis in some of the Eurozone
countries, increasing migration from the Middle East, and the United Kingdom’s
withdrawal from the EU.
The European Union (EU) is a politico-economic union of 28
member states located primarily in Europe. It has an area of 4,324,782
km² (1,669,808 sq mi) and an estimated population of over 510 million. The EU
has developed an internal single market through a standardized system of laws
that apply in all member states. EU policies aim to ensure the free movement of
people, goods, services, and capital within the internal market, enact
legislation in justice and home affairs, and maintain common policies on trade,
agriculture, fisheries, and regional development. Within the Schengen Area,
passport controls have been abolished. A monetary union was established in 1999
and came into full force in 2002, and is composed of 19 EU member states which
use the euro currency.
The EU operates through a hybrid system of supranational and
intergovernmental decision-making. The seven principal decision-making
bodies—known as the institutions of the European Union—are the European
Council, the Council of the European Union, the European Parliament, the
European Commission, the Court of Justice of the European Union, the European
Central Bank, and the European Court of Auditors.
The EU traces its origins from the European Coal and Steel
Community (ECSC) and the European Economic Community (EEC), formed by the Inner
Six countries in 1951 and 1958, respectively. The Community and its successors
have grown in size by the accession of new member states and in power by the
addition of policy areas to its remit.
Maastricht Treaty
The European Union was formally established when the
Maastricht Treaty—whose main architects were Helmut Kohl and François
Mitterrand—came into force on November 1, 1993. The treaty established the
three pillars of the European Union: the European Communities pillar, which
included the European Community (EC), the ECSC, and the EURATOM; the Common
Foreign and Security Policy (CFSP) pillar; and the Justice and Home Affairs
(JHA) pillar. The first pillar handled economic, social, and environmental policies. The second pillar handled foreign policy and military matters, and the third pillar coordinated member states’ efforts in the fight
against crime.
All three pillars were the extensions of existing policy
structures. The European Community pillar was a continuation of the EEC.
Additionally, coordination in foreign policy had taken place since the 1970s
under the European Political Cooperation (EPC), first written
into treaties by the Single European Act. While the JHA extended cooperation in
law enforcement, criminal justice, asylum, and immigration as well as judicial
cooperation in civil matters, some of these areas were already subject to
intergovernmental cooperation under the Schengen Implementation Convention of
1990.
The creation of the pillar system was the result of the
desire by many member states to extend the EEC to the areas of foreign policy,
military, criminal justice, and judicial cooperation. This desire was met with
misgivings by some member states, notably the United Kingdom, who thought some areas were too critical to their sovereignty to be managed by
a supranational mechanism. The agreed compromise was that instead of completely
renaming the European Economic Community as the European Union, the treaty
would establish a legally separate European Union comprising the European Economic
Community and entities overseeing intergovernmental policy areas such as
foreign policy, military, criminal justice, and judicial cooperation. The
structure greatly limited the powers of the European Commission, the European
Parliament, and the European Court of Justice.
Euro Convergence Criteria
The Maastricht, or convergence, criteria established the
minimum requirements for EU member states to enter the third stage of European
Economic and Monetary Union (EMU) and adopt the euro as their currency. The
four criteria are defined in article 121 of the treaty establishing the
European Community. They impose control over inflation, public debt and the
public deficit, exchange rate stability, and the convergence of interest rates.
The purpose of these criteria was to maintain price stability within the
Eurozone even with the inclusion of new member states. The criteria are summarized below; a short illustrative sketch follows the list.
- Inflation rates: No more than 1.5 percentage
points higher than the average of the three best performing (lowest inflation)
member states of the EU.
-
Government finance:
-
Annual government deficit: The ratio of the
annual government deficit to gross domestic product (GDP) must not exceed 3% at
the end of the preceding fiscal year. If it does, the ratio must at least have
fallen to a level close to 3%; only exceptional and temporary excesses would be
tolerated.
- Government debt: The ratio of gross government
debt to GDP must not exceed 60% at the end of the preceding fiscal year. Even
if the target cannot be achieved due to specific conditions, the ratio must
have sufficiently diminished and be approaching the reference value at a satisfactory
pace. As of the end of 2014, of the countries in the Eurozone, only Estonia,
Latvia, Lithuania, Slovakia, Luxembourg, and Finland still met this target.
-
Exchange rate: Applicant countries should have
joined the exchange-rate mechanism (ERM II) under the European Monetary System
(EMS) for two consecutive years and should not have devalued their currency
during the period.
-
Long-term interest rates: The nominal long-term
interest rate must not be more than 2 percentage points higher than in the
three lowest-inflation member states.
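To make the arithmetic of these thresholds concrete, the following is a minimal, hypothetical sketch in Python of how the numeric checks above could be expressed. The threshold values (1.5 percentage points, 3% of GDP, 60% of GDP, two years in ERM II, 2 percentage points) come from the criteria themselves; the data structure, function name, reference values, and sample figures are illustrative assumptions, and the "exceptional and temporary" and "sufficiently diminishing" escape clauses are deliberately omitted.

```python
# Minimal sketch of the Maastricht convergence checks described above.
# Thresholds are taken from the criteria in the text; names, structure,
# and sample figures are hypothetical illustrations only.
from dataclasses import dataclass


@dataclass
class Indicators:
    inflation: float        # annual inflation, percent
    deficit_to_gdp: float   # annual government deficit as % of GDP
    debt_to_gdp: float      # gross government debt as % of GDP
    long_term_rate: float   # nominal long-term interest rate, percent
    years_in_erm2: float    # consecutive years in ERM II without devaluation


def convergence_checks(c: Indicators,
                       ref_inflation: float,
                       ref_long_term_rate: float) -> dict:
    """Return a pass/fail flag for each criterion.

    ref_inflation: average inflation of the three best-performing
    (lowest-inflation) EU member states.
    ref_long_term_rate: average long-term rate of the three
    lowest-inflation member states.
    """
    return {
        "inflation": c.inflation <= ref_inflation + 1.5,
        "deficit": c.deficit_to_gdp <= 3.0,
        "debt": c.debt_to_gdp <= 60.0,
        "exchange_rate": c.years_in_erm2 >= 2.0,
        "interest_rate": c.long_term_rate <= ref_long_term_rate + 2.0,
    }


if __name__ == "__main__":
    # Hypothetical applicant figures, not data for any real country.
    candidate = Indicators(inflation=1.8, deficit_to_gdp=2.4,
                           debt_to_gdp=58.0, long_term_rate=4.1,
                           years_in_erm2=2.5)
    print(convergence_checks(candidate, ref_inflation=1.0,
                             ref_long_term_rate=3.5))
```

In practice the assessment is more nuanced (for example, a debt ratio above 60% can still pass if it is approaching the reference value at a satisfactory pace), but the sketch captures the headline thresholds.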
Lisbon Treaty and Beyond
On December 1, 2009, the Lisbon Treaty entered into force and reformed many aspects of the EU. In particular, it changed the
legal structure, merging the three pillars system into a
single legal entity provisioned with a legal personality; created a permanent
President of the European Council; and strengthened the position of the High Representative
of the Union for Foreign Affairs and Security Policy. During the 2010s, the
cohesion of the EU has been tested by several issues, including a debt crisis
in some of the Eurozone countries, increasing migration from the Middle East,
and the United Kingdom’s withdrawal from the EU. As of December 2016, the UK
had not yet initiated formal withdrawal procedures.
37.2: Fall of the Soviet Union
37.2.1: The Soviet Union’s Aging Leadership
The aging Soviet leadership of the 1980s was
ill-equipped to deal with ongoing economic stagnation and worsening foreign
conflicts such as the Soviet-Afghan War.
Learning Objective
Describe the leadership problem facing the
Soviet Union in the 1980s
Key Points
- The transition
period that separated the Brezhnev and Gorbachev eras resembled the former much
more than the latter, although hints of reform emerged as early as 1983.
- Andropov maneuvered
his way into power both through his KGB connections and by gaining the support
of the military by promising not to cut defense spending, despite the heavy toll
it exacted on the ailing Soviet economy.
- Andropov began a
thorough house-cleaning throughout the party and state bureaucracy, but his
ability to reshape the top leadership was constrained by his own advanced age
and poor health as well as the influence of his rival, Konstantin Chernenko.
- Andropov’s
domestic policy leaned heavily towards restoring discipline and order to Soviet
society. He eschewed radical political and economic reforms, promoting instead
a degree of candor in politics and mild economic experiments.
- In foreign
affairs, Andropov continued Brezhnev’s policies, causing US-Soviet relations to
deteriorate rapidly.
- Chernenko
succeeded Andropov in 1984, bringing about a number of significant policy
changes, including more investment in consumer goods and services and in
agriculture. Chernenko also called for a reduction in the Communist Party’s micromanagement
of the economy. However, KGB repression of Soviet dissidents increased and
personnel changes and investigations into corruption undertaken under Andropov
came to an end.
- During this period of Soviet leadership, fighting in the Soviet-Afghan
War intensified, compounding Soviet economic stagnation and further entangling
the USSR in a war it seemed unable to win.
Key Term
- Goulash Communism
-
The variety of communism as practiced in the
Hungarian People’s Republic from the 1960s until the Central European collapse
of communism in 1989. With elements of free market economics and an
improved human rights record, it represented a quiet deviation from
the Soviet principles applied to Hungary in the previous decade. The name is a
semi-humorous metaphor derived from the popular Hungarian dish. Goulash is made
with an assortment of unlike ingredients, representing how Hungarian communism
was a mixed ideology and no longer strictly adhered to Marxist-Leninist
interpretations as in the past.
By 1982, the stagnation of
the Soviet economy was evidenced by the fact that the Soviet Union
had been importing grain from the U.S. throughout the 1970s. However, the conditions
that led to economic stagnation, primarily the huge rate of defense spending
that consumed the budget, were so firmly entrenched within the economic system that
any real turnaround seemed impossible. The transition period that separated the
Brezhnev and Gorbachev eras resembled the former much more than the latter,
although hints of reform emerged as early as 1983.
Andropov Interregnum
Brezhnev died on November 10,
1982. Two days passed between his death and the announcement of the election of
Yuri Andropov as the new General Secretary, suggesting that a power struggle
had occurred in the Kremlin. Andropov maneuvered his way into power both
through his KGB connections and by gaining the support of the military by
promising not to cut defense spending, despite the heavy toll it exacted on the ailing Soviet economy. Some of his rivals, such as Konstantin Chernenko, were skeptical of continued high military spending. At 68, he was the oldest person ever appointed as General Secretary and 11
years older than Brezhnev when he acquired that post. In June 1983, he assumed
the post of chairman of the Presidium of the Supreme Soviet, thus becoming the
ceremonial head of state. It had taken Brezhnev 13 years to acquire this post.
Andropov began a thorough
house-cleaning throughout the party and state bureaucracy, a decision made easy
by the fact that the Central Committee had an average age of 69. He replaced
more than one-fifth of the Soviet ministers and regional party first
secretaries, and more than one-third of the department heads within the Central
Committee apparatus. As a result, he replaced the aging leadership with
younger, more vigorous administrators. But Andropov’s ability to reshape the
top leadership was constrained by his own age and poor health and the influence
of his rival (and longtime ally of Leonid Brezhnev) Konstantin Chernenko, who
previously supervised personnel matters in the Central Committee.
Andropov’s domestic policy
leaned heavily towards restoring discipline and order to Soviet society. He
eschewed radical political and economic reforms, promoting instead a small
degree of candor in politics and mild economic experiments similar to those
associated with the late Premier Alexei Kosygin’s initiatives in
the mid-1960s. In tandem with these economic experiments, Andropov launched an
anti-corruption drive that reached high into the government and party ranks.
Unlike Brezhnev, who possessed several mansions and a fleet of luxury cars, Andropov
lived a modest life. While visiting Budapest in early 1983, he expressed
interest in Hungary’s Goulash Communism and acknowledged that the sheer size of the Soviet
economy made strict top-down planning impractical. 1982 had witnessed
the country’s worst economic performance since World War II, with real GDP
growth at almost zero percent, making real change urgent.
In foreign affairs, Andropov
continued Brezhnev’s policies. U.S.-Soviet relations deteriorated rapidly
beginning in March 1983, when President Ronald Reagan dubbed the Soviet
Union an “evil empire”. The official press agency TASS accused Reagan
of “thinking only in terms of confrontation and bellicose, lunatic
anti-communism”. Further deterioration occurred as a result of the September
1, 1983, Soviet shoot-down of Korean Air Lines Flight 007 near Moneron Island, carrying 269 people including a sitting U.S. congressman, Larry McDonald, as
well as by Reagan’s stationing of intermediate-range nuclear missiles in Western
Europe. Additionally, in Afghanistan, Angola, Nicaragua, and elsewhere, the U.S. began undermining Soviet-supported governments by supplying arms to
anti-communist resistance movements.
Andropov’s health declined
rapidly during the tense summer and fall of 1983, and he became the first
Soviet leader to miss the anniversary celebrations of the 1917 revolution. He
died in February 1984 of kidney failure after disappearing from public view for
several months. His most significant legacy to the Soviet Union was his
discovery and promotion of Mikhail Gorbachev.
Chernenko Interregnum
At 72, Konstantin Chernenko
was in poor health, suffering from emphysema, and unable to play an active role
in policy-making when he was chosen after lengthy discussion to succeed
Andropov. But Chernenko’s short time in office did bring about some significant
policy changes, including more investment in consumer goods and
services and in agriculture. He also called for a reduction in the Communist
Party of the Soviet Union’s (CPSU) micromanagement of the economy. However, KGB
repression of Soviet dissidents increased and personnel changes and
investigations into corruption undertaken under Andropov came to an end. In
February 1983, Soviet representatives withdrew from the World Psychiatric
Association in protest of its continued complaints about the use of
psychiatry to suppress dissent. This policy was underlined in June when
Vladimir Danchev, a broadcaster for Radio Moscow, referred to the Soviet troops
in Afghanistan as “invaders” while conducting English-language
broadcasts. After refusing to retract this statement, he was sent to a mental institution
for several months.
Soviet-Afghan War
Andropov played a
dominant role in the decision to intervene militarily in Afghanistan on
December 24, 1979, insisting on the invasion although he knew that the
international community would find the USSR culpable. The decision to intervene
led to the Soviet-Afghan War, which continued once Andropov became the leader
of the USSR. By this time, Andropov felt the invasion might have been
a mistake and halfheartedly explored options for a negotiated withdrawal. The
Soviets had not foreseen taking such an active role in fighting the
mujahideen rebels and attempted to downplay their involvement in relation to that of
the Afghan army. However, the arrival of Soviet troops had the opposite effect
on the Afghan people, incensing rather than pacifying and causing the mujahideen
to gain in strength and numbers.
During the Chernenko
interregnum, fighting in Afghanistan intensified. Once it became apparent that
the Soviets could not take a backseat in the conflict, they followed three main
strategies aimed at quelling the uprising. Intimidation was the first strategy,
in which the Soviets would use airborne attacks as well as armored ground
attacks to destroy villages, livestock, and crops in trouble areas. Locals were forced to either flee their homes or die as daily Soviet attacks
made it impossible to live in these areas. By forcing the people of Afghanistan
to flee their homes, the Soviets hoped to deprive the guerrillas of resources
and safe havens. The second strategy consisted of subversion, which entailed
sending spies to join resistance groups and report information as well as
bribing local tribes or guerrilla leaders into ceasing operations. Finally, the
Soviets used military forays into contested territories to root
out the guerrillas and limit their options. Classic search and destroy
operations were implemented and once villages were occupied by Soviet forces,
inhabitants who remained were frequently interrogated and tortured for
information, or killed.
In the mid-1980s, the Afghan resistance movement, assisted by the U.S.,
Pakistan, Saudi Arabia, the UK, Egypt, China, and others, contributed to Moscow’s
high military costs and strained international relations. The U.S. viewed the
struggle in Afghanistan as an integral Cold War struggle and the CIA provided
assistance to anti-Soviet forces via Pakistani intelligence services in a
program called Operation Cyclone. The mujahideen favored sabotage operations.
The more common types of sabotage included damaging power lines, knocking out
pipelines and radio stations, and blowing up government office buildings, air
terminals, hotels, cinemas, and so on. They concentrated on both civilian and
military targets, knocking out bridges, closing major roads, attacking convoys,
disrupting the electric power system and industrial production, and attacking
police stations and Soviet military installations and air bases. They assassinated
government officials and Marxist People’s Democratic Party of Afghanistan (PDPA)
members, and laid siege to small rural outposts.
37.2.2: Gorbachev and Perestroika
Gorbachev launched
perestroika to rescue the Soviet economy from stagnation, but did not intend to
abandon the centrally planned economy entirely.
Learning Objective
Explain Gorbachev’s reasons for launching perestroika
Key Points
- Gorbachev’s
primary goal as general secretary was to revive the Soviet economy after the
stagnant Brezhnev and interregnum years.
- Gorbachev soon
came to believe that fixing the Soviet economy would be nearly impossible
without also reforming the political and social structure of the Communist
nation.
- The purpose of
reform was to prop up the centrally planned economy—not to transition to market
socialism.
- Gorbachev
initiated his new policy of perestroika (literally “restructuring” in
Russian) and its attendant radical reforms in 1986. Policy reforms included the
Law on State Enterprise, the Law on Cooperatives, and the opening of the Soviet
economy to foreign investment.
- Unfortunately, Gorbachev’s
economic changes did not do much to restart the country’s sluggish economy.
-
In 1988,
Gorbachev introduced glasnost, which gave the Soviet people freedoms that they
had not previously known, including greater freedom of speech.
-
In June 1988, at the CPSU’s Party Conference, Gorbachev launched radical
reforms meant to reduce party control of the government apparatus, proposing a
new executive in the form of a presidential system as well as a new legislative
element.
Key Terms
- glasnost
-
Roughly translating to “openness”, reforms to the political and judicial system made in the 1980s
that ensured greater freedoms for the public and the press as well as increased
government transparency.
- perestroika
-
Literally “restructuring” in Russian, a political movement for reform within the Communist Party of
the Soviet Union during the 1980s, widely associated with Soviet leader Mikhail
Gorbachev.
Mikhail Sergeyevich
Gorbachev was the eighth and final leader of the Soviet Union, General Secretary of the Communist Party of the Soviet Union (CPSU) from 1985
until 1991, when the party was dissolved. Gorbachev’s primary goal as general
secretary was to revive the Soviet economy after the stagnant Brezhnev and
interregnum years. In 1985, he announced that the economy was stalled and that
reorganization was needed, proposing a vague program of reform that was
adopted at the April Plenum of the Central Committee. His reforms called for fast-paced
technological modernization and increased industrial and agricultural
productivity. He also tried to make the Soviet bureaucracy more efficient.
Gorbachev soon came to
believe that fixing the Soviet economy would be nearly impossible without also
reforming the political and social structure of the Communist nation. He started
by making personnel changes, most notably replacing Andrei Gromyko with Eduard
Shevardnadze as Minister of Foreign Affairs. Gromyko had served at his post for
28 years and was considered a member of the old Soviet guard. Although
Shevardnadze was comparatively inexperienced in diplomacy, he, like Gorbachev,
had a background in managing an agricultural region of the Soviet Union
(Georgia), which entailed weak links to the military-industrial complex, and he shared Gorbachev’s outlook on governance.
The purpose of reform was to
prop up the centrally planned economy—not to transition to market socialism.
Speaking in late summer 1985 to the secretaries for economic affairs of the
central committees of the East European communist parties, Gorbachev said:
“Many of you see the solution to your problems in resorting to market
mechanisms in place of direct planning. Some of you look at the market as a
lifesaver for your economies. But, comrades, you should not think about
lifesavers but about the ship, and the ship is socialism.”
Perestroika
Gorbachev initiated his new
policy of perestroika (literally “restructuring” in Russian) and its
attendant radical reforms in 1986. They were sketched, but not fully spelled
out, at the XXVIIth Party Congress in February–March 1986. The
“reconstruction” was proposed in an attempt to overcome economic
stagnation by creating a dependable and effective mechanism for accelerating
economic and social progress. In July 1987, the Supreme Soviet of the Soviet
Union passed the Law on State Enterprise. The law stipulated that state
enterprises were free to determine output levels based on demand from consumers
and other enterprises. Enterprises had to fulfill state orders, but could
dispose of the remaining output as they saw fit. However, the state still held
control over the means of production for these enterprises, limiting their
ability to enact full-cost accountability. Enterprises bought input from
suppliers at negotiated contract prices. Under the law, enterprises became
self-financing; that is, they had to cover expenses (wages, taxes, supplies,
and debt service) through revenues. No longer was the government to rescue
unprofitable enterprises that faced bankruptcy. Finally, the law shifted
control over the enterprise operations from ministries to elected workers’
collectives.
The Law on Cooperatives,
enacted in May 1988, was perhaps the most radical of the economic reforms
introduced in the early part of the Gorbachev era. For the first time since
Vladimir Lenin’s New Economic Policy was abolished in 1928, the law permitted
private ownership of businesses in the services, manufacturing, and
foreign-trade sectors. The law initially imposed high taxes and employment
restrictions, but these were later revised to avoid discouraging private-sector
activity.
The most significant of
Gorbachev’s reforms in the foreign economic sector allowed foreigners to invest
in the Soviet Union in joint ventures with Soviet ministries, state
enterprises, and cooperatives. The original version of the Soviet Joint Venture
Law, which went into effect in June 1987, limited foreign shares of a Soviet
venture to 49 percent and required that Soviet citizens occupy the positions of
chairman and general manager. After potential Western partners complained, the
government revised the regulations to allow majority foreign ownership and
control. Under the terms of the Joint Venture Law, the Soviet partner supplied
labor, infrastructure, and a potentially large domestic market. The foreign
partner supplied capital, technology, entrepreneurial expertise, and high-quality products and services.
Gorbachev’s economic changes
did little to restart the country’s sluggish economy in the late 1980s.
The reforms decentralized economic activity to a certain extent, but price
controls remained, as did the ruble’s inconvertibility and most government
controls over the means of production. By 1990, the government had virtually
lost control over economic conditions. Government spending increased sharply as
more unprofitable enterprises required state support and
consumer price subsidies continued. Tax revenues declined because local
governments withheld tax revenues from the central government in a climate of growing
regional autonomy. The elimination of central control over production
decisions, especially in the consumer goods sector, led to the breakdown in
traditional supply-demand relationships without contributing to the formation
of new ones. Thus, instead of streamlining the system, Gorbachev’s decentralization
caused new production bottlenecks.
Glasnost
In 1988, Gorbachev introduced
glasnost, which gave the Soviet people freedoms they had not previously
known, including greater freedom of speech. The press became far less
controlled, and thousands of political prisoners and many dissidents were
released as part of a wider program of de-Stalinization. Gorbachev’s goal in
glasnost was to pressure conservatives within the CPSU who opposed
his policies of economic restructuring, believing that through varying degrees
of openness, debate, and participation, the Soviet people would support his
reform initiatives. At the same time, he exposed his plans to more public criticism.
In June 1988, at the CPSU’s Party Conference, Gorbachev launched radical
reforms to reduce party control of the government apparatus. He proposed
a new executive in the form of a presidential system as well as a new
legislative element, the Congress of People’s Deputies. Elections
to the Congress of People’s Deputies were held throughout the Soviet Union in
March and April 1989. This was the first free election in the Soviet Union
since 1917. Gorbachev became Chairman of the Supreme Soviet (or head of state)
on May 25, 1989.
37.2.3: Unrest in the Soviet Union
The increased freedoms of glasnost allowed opposition groups to make political gains against
the centralized Soviet government in Moscow.
Learning Objective
Analyze the reasons for the uprisings that broke
out across the Soviet Union in the late 1980s
Key Points
- By the late
1980s, people in the Caucasus and Baltic states were demanding more autonomy
from Moscow, and the Kremlin was losing some of its control over certain regions
and elements in the Soviet Union.
- The Chernobyl
disaster in April 1986 had major political and social effects that catalyzed the
revolutions of 1989.
- Under glasnost,
the Soviet media began to expose numerous social and economic problems in the
Soviet Union that the government had long denied and covered up, such as
poor housing, food shortages, alcoholism, widespread pollution, creeping
mortality rates, the second-rate position of women, and the history of
state crimes against the population.
-
Political
openness continued to produce unintended consequences as nationalists swept the
board in regional elections.
- Starting in the
mid-1980s, the Baltic states used the reforms provided by glasnost to assert
their rights to protect their environment (for example during the Phosphorite
War) and historic monuments, and later, their claims to sovereignty and
independence.
- Momentum towards
full-blown revolution began in Poland where by early April 1989, numerous
reforms and freedoms for opposition groups had been obtained.
-
Revolutionary
momentum, encouraged by the peaceful transition underway in Poland, continued
in Hungary, East Germany, Bulgaria, Czechoslovakia, and Romania.
-
The Soviet Union was dissolved by the end of 1991, resulting in 14
countries (Armenia, Azerbaijan, Belarus, Estonia, Georgia, Kazakhstan,
Kyrgyzstan, Latvia, Lithuania, Moldova, Tajikistan, Turkmenistan, Ukraine, and
Uzbekistan) declaring their independence in the course of the years 1990–1991.
Key Terms
- glasnost
-
Roughly translating to “openness”, this term
refers to the reforms to the political and judicial system made in the 1980s
that ensured greater freedoms for the public and the press and increased
government transparency.
- sovereignty
-
The full right and power of a governing body to
govern itself without interference from outside sources or bodies. In
political theory, sovereignty is a substantive term designating supreme
authority over some polity. It is a basic principle underlying the dominant
Westphalian model of state foundation.
The Revolutions of 1989 were
part of a revolutionary wave in the late 1980s and early 1990s that resulted in
the end of communist rule in Central and Eastern Europe and beyond.
Leadup to Revolution
By the late 1980s, people in
the Caucasus and Baltic states were demanding more autonomy from Moscow, and
the Kremlin was losing some of its control over certain regions and elements in
the Soviet Union. In November 1988, Estonia issued a declaration of
sovereignty, which eventually led to other states doing the same.
The Chernobyl disaster in
April 1986 had major political and social effects that catalyzed the revolutions of 1989. It is difficult to establish the
total economic cost of the disaster. According to Mikhail Gorbachev, the Soviet
Union spent 18 billion rubles (the equivalent of US$18 billion at the time) on
containment and decontamination, virtually bankrupting itself. One political
result of the disaster was the greatly increased significance of the Soviet
policy of glasnost. Under glasnost, relaxation of censorship resulted in the
Communist Party losing its grip on the media, and Soviet citizens were able to
learn significantly more about the past and the outside world.
The Soviet media began to
expose numerous social and economic problems in the Soviet Union that the
government had long denied and covered up, such as poor housing, food
shortages, alcoholism, widespread pollution, creeping mortality rates, the
second-rate position of women, and the history of state crimes against
the population. Although Nikita Khrushchev denounced Stalin’s personality cult
as early as the 1950s, information about the true proportions of his atrocities
had still been suppressed. These revelations had a devastating effect on those
who believed in state communism and had never been exposed to this
information, as the driving vision of society was built on a foundation of
falsehood and crimes against humanity. Additionally, information about the
higher quality of life in the United States and Western Europe and about Western pop culture was exposed to
the Soviet public for the first time.
Political openness continued
to produce unintended consequences. In elections to the regional assemblies of
the Soviet Union’s constituent republics, nationalists swept the board. As
Gorbachev weakened the system of internal political repression, the ability of
the USSR’s central government to impose its will on the USSR’s constituent
republics was largely undermined. During the 1980s, calls for greater independence
from Moscow’s rule grew louder. This was especially marked in the Baltic
Republics of Estonia, Lithuania, and Latvia, which had been annexed into the
Soviet Union by Joseph Stalin in 1940. Nationalist sentiment also took hold in
other Soviet republics such as Ukraine, Georgia, and Azerbaijan.
Starting in the mid-1980s,
the Baltic states used the reforms provided by glasnost to assert their rights
to protect their environment (for example during the Phosphorite War) and their
historic monuments, and, later, their claims to sovereignty and independence.
When the Balts withstood outside threats, they exposed an irresolute Kremlin.
Bolstering separatism in other Soviet republics, the Balts triggered multiple
challenges to the Soviet Union. The rise of nationalism under glasnost also
reawakened simmering ethnic tensions throughout the union. For example, in
February 1988, Nagorno-Karabakh, a predominantly ethnic Armenian region in
Azerbaijan, passed a resolution calling for unification with Armenia, which
sparked the Nagorno-Karabakh War.
Collapse (Summer 1989 to
Fall 1991)
Momentum toward full-blown
revolution began in Poland in 1989. During the Polish United Workers’ Party’s
(PZPR) plenary session of January 16-18, 1989, General Wojciech Jaruzelski and
his ruling formation overcame the Central Committee’s resistance by threatening
to resign. As a result, the communist party decided to allow relegalization of
the independent trade union Solidarity and approach its leaders for formal
talks. From February 6 to April 4, 94 sessions of talks between 13 working groups,
known as the Round Table Talks, resulted in political and economic
compromise reforms. The talks resulted in the Round Table Agreement, by which
political power would be vested in a newly created bicameral legislature and a
president who would be the chief executive.
By April 4, 1989, numerous
reforms and freedoms for the opposition were obtained. Solidarity, now in
existence as the Solidarity Citizens’ Committee, would again be legalized as a
trade union and allowed to participate in semi-free elections. The election was
subject to restrictions designed to keep the communists in power, since only 35%
of the seats in the Sejm, the key lower chamber of parliament, would be open to
Solidarity candidates. The remaining 65% was reserved for candidates
from the PZPR and its allies (the United People’s Party, the Alliance of
Democrats, and the PAX Association). Since the Round Table Agreement mandated
only reform (not replacement) of socialism in Poland, the communist party
thought of the election as a way of neutralizing political conflict and staying
in power while gaining legitimacy to carry out economic reforms. However, the
negotiated social policy determinations by economists and trade
unionists during the Round Table talks were quickly rejected by
both the Party and the opposition.
A systemic transformation
was made possible by the Polish legislative elections of June 4, 1989, which
coincided with the bloody crackdown on the Tiananmen Square protesters in
China. When polling results were released, a political earthquake erupted:
Solidarity’s victory surpassed all predictions. Solidarity candidates captured
all seats they were allowed to compete for in the Sejm, while in the newly
established Senate they captured 99 out of the 100 available seats (the other
seat went to an independent, who later switched to Solidarity). At the same
time, many prominent PZPR candidates failed to gain even the minimum number of
votes required to capture the seats that were reserved for them. The communists
suffered a catastrophic blow to their legitimacy as a result.
Revolutionary momentum,
encouraged by the peaceful transition underway in Poland, continued in Hungary,
East Germany, Bulgaria, Czechoslovakia, and Romania. A common feature among
these countries was the extensive use of campaigns of civil resistance, demonstrating
popular opposition to the continuation of one-party rule and contributing to
the pressure for change. Romania was the only Eastern Bloc country whose people
overthrew its Communist regime violently. The Tiananmen Square protests of 1989
failed to stimulate major political changes in China, but powerful images of
courageous defiance during that protest helped to spark a cascade of
events in other parts of the globe. Hungary dismantled its section of the
physical Iron Curtain, leading to a mass exodus of East Germans through
Hungary that destabilized East Germany. This led to mass demonstrations in
cities such as Leipzig and subsequently to the fall of the Berlin Wall, which
served as the symbolic gateway to German reunification in 1990.
The Soviet Union was
dissolved by the end of 1991, resulting in 14 countries (Armenia, Azerbaijan,
Belarus, Estonia, Georgia, Kazakhstan, Kyrgyzstan, Latvia, Lithuania, Moldova,
Tajikistan, Turkmenistan, Ukraine, and Uzbekistan) declaring their independence
from the Soviet Union in 1990-91. Lithuania was the
first Union Republic to declare independence from the dissolving Soviet Union
in the Act of the Re-Establishment of the State of Lithuania, signed by the
Supreme Council of the Republic of Lithuania on March 11, 1990. The Act of the
Re-Establishment of the State of Lithuania served as a model and inspiration to
other Soviet republics. However, the issue of independence was not immediately
settled and recognition by other countries was uncertain. The rest of the
Soviet Union, which constituted the bulk of the area, became Russia in December
1991.
Communism was abandoned in
Albania and Yugoslavia between 1990 and 1992. By 1992, Yugoslavia split into
the five successor states of Bosnia and Herzegovina, Croatia, Macedonia,
Slovenia, and the Federal Republic of Yugoslavia, which was later renamed Serbia
and Montenegro and eventually split into two separate states. Serbia then further split with the breakaway of the partially
recognized state of Kosovo. Czechoslovakia was dissolved three years after the
end of Communist rule, splitting peacefully into the Czech Republic and
Slovakia on January 1, 1993. The impact was felt in dozens of Socialist countries.
Communism was abandoned in countries such as Cambodia, Ethiopia, Mongolia
(which democratically re-elected a Communist government that ran the country
until 1996), and South Yemen. The collapse of Communism (and of the Soviet
Union) led commentators to declare the end of the Cold War.
During the adoption of
varying forms of market economies, there was initially a general decline in living
standards. Political reforms were varied, but in only five countries were
Communist parties able to keep for themselves a monopoly on power: China, Cuba,
North Korea, Laos, and Vietnam. Many Communist and Socialist organizations in
the West turned their guiding principles over to social democracy. Communist
parties in Italy and San Marino suffered, and the renewal of the Italian
political class took place in the early 1990s. The European political landscape
was drastically changed, with numerous Eastern Bloc countries joining NATO and
the European Union, resulting in stronger economic and social integration.
37.2.4: Fall of the Berlin Wall
A relaxing of Eastern bloc
border defenses initiated a chain of events that pressured the East German
government into opening crossing points between East and West Berlin to
political refugees, precipitating the eventual fall of the Berlin Wall.
Learning Objective
Detail the events leading up
to the fall of the Berlin Wall
Key Points
- The Berlin Wall
was a barrier that divided Berlin from 1961 to 1989. When Hungary disabled its
physical border defenses with Austria on August 19, 1989, it initiated a chain
of events that would eventually precipitate the fall of the Berlin Wall.
- A slew of border
crossings and protests ensued in the
Peaceful Revolution of late 1989.
-
To ease the
difficulties posed by these large masses of people, the Politburo led by East
Germany’s leader Egon Krenz decided on November 9, 1989, to allow refugees to
exit directly via crossing points between East and West Germany, including
between East and West Berlin.
- Günter
Schabowski, the party boss in East Berlin and the spokesman for the SED
Politburo, announced the new regulations, but mistakenly
said they were effective immediately rather than the next day.
- East Germans
began gathering at the Wall, demanding that border guards open the
gates. Finally, at 10:45 pm, Harald
Jäger, the commander of the Bornholmer Straße border crossing, yielded,
allowing the guards to open the checkpoints and people to pass through with
little to no identity checking.
-
Television
coverage of citizens demolishing sections of the Wall on November 9 was soon
followed by the East German regime announcing ten new border crossings,
including the historically significant locations of Potsdamer Platz, Glienicker
Brücke, and Bernauer Straße.
- On June 13, 1990,
the East German military officially began dismantling the Wall, beginning in
Bernauer Straße and around the Mitte district.
-
On July 1, 1990, the day East Germany adopted West German currency, all
de jure border controls ceased, although the inter-German border had been meaningless for some time before that.
Key Terms
- fakir beds
-
Essentially, a bed of nails used as
a deterrent to vehicle or foot crossing of an expanse.
- defection
-
In politics, a person who gives up allegiance to
one state in exchange for allegiance to another in a way that is considered
illegitimate by the first state.
The Berlin Wall was a
barrier that divided Berlin from 1961 to 1989. Constructed by the German
Democratic Republic (GDR, or East Germany) beginning August 13, 1961, the Wall
completely cut off West Berlin by land from East Germany and East Berlin. The
barrier included guard towers placed along large concrete walls, which
circumscribed a wide area that contained anti-vehicle trenches, fakir beds, and
other defenses. The Eastern Bloc claimed the Wall was erected to protect
its population from fascist elements conspiring to prevent grassroots socialist
state-building in East Germany. But in practice, the Wall served to prevent
massive emigration and defection that plagued East Germany and the communist
Eastern Bloc during the post-World War II period.
The Fall of the Wall
When Hungary disabled its
physical border defenses with Austria on August 19, 1989, it initiated a chain
of events that would eventually precipitate the fall of the Berlin Wall. In
September 1989, more than 13,000 East German tourists escaped through Hungary
to Austria. The Hungarians prevented many more East Germans from crossing the
border and returned them to Budapest. Those East Germans then flooded the West
German embassy and refused to return to East Germany. The East German
government responded to this by disallowing any further travel to Hungary, but
allowed those already there to return to East Germany.
Soon, a similar pattern
began to emerge out of Czechoslovakia. This time, however, the East German
authorities allowed people to leave, provided that they did so by train through
East Germany. This was followed by mass demonstrations within East Germany
itself. Initially, protesters were mostly people wanting to leave to the West,
chanting “Wir wollen raus!” (“We want out!”). Then
protesters began to chant “Wir bleiben hier!” (“We are staying
here!”). This was the start of what East Germans call the
Peaceful Revolution of late 1989. Protest demonstrations grew considerably by
early November, and the movement neared its height on November 4, when half a
million people gathered to demand political change at the Alexanderplatz
demonstration, East Berlin’s large public square and transportation hub.
The longtime leader of East
Germany, Erich Honecker, resigned on October 18, 1989, and was replaced by
Egon Krenz the same day. Honecker predicted in January of that year that
the Wall would stand for 50 or 100 more years if the conditions that caused its construction did not
change. The wave of refugees
leaving East Germany for the West kept increasing. By early November, refugees
were finding their way to Hungary via Czechoslovakia or the West German
Embassy in Prague. This was tolerated by the new Krenz government due to
long-standing agreements with the communist Czechoslovak government allowing
free travel across their common border. However, this movement grew so
large it caused difficulties for both countries. The Politburo
led by Krenz thus decided on November 9 to allow refugees to exit directly via
crossing points between East and West Germany, including between East and West
Berlin. Later the same day, the ministerial administration modified the
proposal to include private, round-trip travel. The new regulations were to
take effect the next day.
Günter Schabowski, the party
boss in East Berlin and the spokesman for the SED Politburo, had the task of
announcing the new regulations but had not been involved in the
discussions about the new regulations and was not fully updated. Shortly
before a press conference on November 9, he was handed a note announcing the
changes but given no further instructions on how to handle the information.
These regulations had only been completed a few hours earlier and were to take
effect the following day to allow time to inform the border guards. But
this starting time delay was not communicated to Schabowski. At the end of the
press conference, Schabowski read out loud the note he had been given. One of
the reporters, ANSA’s Riccardo Ehrman, asked when the regulations would take effect. After a few seconds’
hesitation, Schabowski, assuming the regulations were already in effect, replied that they would take effect immediately. After further questions from
journalists, he confirmed that the regulations included border crossings
through the Wall into West Berlin, which he had not mentioned until then.
Excerpts from Schabowski’s
press conference were the lead story on West Germany’s two main news programs
that night, meaning that the news was also broadcast to nearly all of East
Germany. East Germans
began gathering at the Wall at the six checkpoints between East and West
Berlin, demanding that border guards immediately open the gates. The
surprised and overwhelmed guards made many hectic telephone calls to their
superiors about the problem. At first, they were ordered to find the more
aggressive people gathered at the gates and stamp their passports with a
special stamp that barred them from returning to East Germany—in effect,
revoking their citizenship. However, this still left thousands
demanding to be let through.
It soon became clear that no
one among the East German authorities would take personal responsibility for
issuing orders to use lethal force, so the vastly outnumbered soldiers had no
way to hold back the huge crowd of East German citizens. Finally, at 10:45 pm,
Harald Jäger, the commander of the Bornholmer Straße border crossing, yielded,
allowing the guards to open the checkpoints and people to pass through with
little to no identity checking. As the Ossis (“Easterners”) swarmed through,
they were greeted by Wessis (“Westerners”) waiting with flowers and champagne
amid wild rejoicing. Soon afterward, a crowd of West Berliners jumped on top of
the Wall and were joined by East German youngsters. They danced together
to celebrate their new freedom.
Demolition
Television coverage of
citizens demolishing sections of the Wall on November 9 was soon followed by
the East German regime announcing ten new border crossings, including the
historically significant locations of Potsdamer Platz, Glienicker Brücke, and
Bernauer Straße. Crowds gathered on both sides of the historic crossings
waiting for hours to cheer the bulldozers that tore down portions of the Wall
to reinstate ancient roads. While the Wall officially remained guarded at a
decreasing intensity, new border crossings continued for some time, including
the Brandenburg Gate on December 22, 1989. Initially the East German military
attempted to repair damage done by “Wall peckers,” but gradually
these attempts ceased and guards became more lax, tolerating the demolitions and unauthorized border crossings through holes in the Wall.
West Germans and West
Berliners were allowed visa-free travel starting December 23. Until that point,
they were only able to visit East Germany and East Berlin under restrictive
conditions that involved applying for a visa several days or weeks in advance
and the obligatory exchange of at least 25 Deutsche Marks per day of their
planned stay, which hindered spontaneous visits. Thus, in the weeks
between November 9 and December 23, East Germans could actually travel more
freely than Westerners.
On June 13, 1990, the East
German military officially began dismantling the Wall, beginning in Bernauer
Straße and around the Mitte district. From there, demolition continued through
Prenzlauer Berg/Gesundbrunnen, Heiligensee, and throughout the city of Berlin
until that December. Various military units dismantled the Berlin/Brandenburg
border wall, completing the job in November 1991. Virtually every road that was
severed by the Berlin Wall was reconstructed and reopened by August 1, 1990.
On July 1, 1990, the day East Germany adopted West German currency, all de
jure border controls ceased, although the inter-German border had been meaningless for some time before that. The fall of the Wall marked the first
critical step towards German reunification, which formally concluded a mere 339
days later on October 3, 1990, with the dissolution of East Germany and the
official reunification of the German state along the democratic lines of the
West German government.
37.2.5: Dissolution of the USSR
An
unintended consequence of the expanding reform within the USSR was the
destruction of the very system it was designed to save.
Learning Objective
Summarize the chain of
events that resulted in the dissolution of the USSR
Key Points
-
Since 1985, General Secretary Gorbachev instituted
liberalizing policies broadly referred to as glasnost and perestroika. As a
result of his push towards liberalization, dissidents were welcomed back in the
USSR and pro-independence movements became more vocal in the regional
republics.
- Gorbachev continued to radically expand the
scope of glasnost during the late 1980s, stating that no subject was off limits
for open discussion in the media.
- On March 17, 1991, in a Union-wide referendum,
76.4% of voters endorsed retention of a reformed Soviet Union.
- On June 12, 1991, Boris Yeltsin won 57% of the
popular vote in democratic elections for the newly created post of President of
the Russian SFSR, defeating Gorbachev’s preferred candidate. In his election campaign,
Yeltsin criticized the “dictatorship of the center”.
- Faced with growing separatism, Gorbachev sought
to restructure the Soviet Union into a less-centralized state. On August 20,
1991, the Russian SFSR was scheduled to sign a New Union Treaty that would have
converted the Soviet Union into a federation of independent republics with a
common president, foreign policy, and military. But more radical reformists
were increasingly convinced that a rapid transition to a market economy was
required.
- On August 19, 1991, Gorbachev’s vice president,
Gennady Yanayev, Prime Minister Valentin Pavlov, Defense Minister Dmitry Yazov,
KGB chief Vladimir Kryuchkov, and other senior officials acted to prevent the
union treaty from being signed by forming the “General Committee on the State
Emergency”, which put Gorbachev under house arrest and cut off his
communications.
- After three days, the coup collapsed. The
organizers were detained and Gorbachev returned as president, albeit with his
power depleted.
- On August 24, 1991, Gorbachev dissolved the
Central Committee of the CPSU, resigned as the party’s general secretary, and
dissolved all party units in the government. Five days later, the Supreme
Soviet indefinitely suspended all CPSU activity on Soviet territory,
effectively ending Communist rule in the Soviet Union and dissolving the only
remaining unifying force in the country. The Soviet Union collapsed with
dramatic speed in the last quarter of 1991.
-
Following the collapse of the Soviet Union,
Russia underwent a radical transformation, moving from a centrally planned
economy to a globally integrated market economy. Corrupt and haphazard
privatization processes turned major state-owned firms over to politically
connected “oligarchs,” which left equity ownership highly concentrated.
Key Terms
- perestroika
-
Literally
“restructuring” in Russian, a political movement for reform
within the Communist Party of the Soviet Union during the 1980s, widely
associated with Soviet leader Mikhail Gorbachev.
- glasnost
-
Roughly
translating to “openness,” the reforms to the political and
judicial system in the 1980s that ensured greater freedoms for the public
and the press as well as increased government transparency.
The Soviet Union was dissolved on December 26, 1991, as a
result of declaration no. 142-Н of the Supreme Soviet. The declaration
acknowledged the independence of the former Soviet republics and created the
Commonwealth of Independent States (CIS), although five of the signatories
ratified it much later or not at all. On the previous day, Soviet President
Mikhail Gorbachev, the eighth and final leader of the Soviet Union, resigned,
declared his office extinct, and handed over its powers – including control of
the Soviet nuclear missile launching codes – to Russian President Boris
Yeltsin. That evening at 7:32, the Soviet flag was lowered from the Kremlin for
the last time and replaced with the pre-revolutionary Russian flag. From August to December of 1991, all individual republics, including Russia
itself, seceded from the union. The week before the union’s formal
dissolution, 11 republics signed the Alma-Ata Protocol formally establishing
the CIS and declaring that the Soviet Union had ceased to exist. The
Revolutions of 1989 and the dissolution of the USSR signaled the end of the
Cold War and left the United States as the world’s only superpower.
Moscow’s Crisis
Since 1985, Mikhail Gorbachev, General Secretary of the
USSR, instituted liberalizing policies broadly referred to as glasnost and
perestroika. As a result of his push towards liberalization, dissidents were
welcomed back in the USSR following prolonged exile and pro-independence
movements were becoming more vocal in the regional republics. At the January
28–30, 1987, Central Committee plenum, Gorbachev suggested a new policy of
“Demokratizatsiya” throughout Soviet society. He proposed that future
Communist Party elections should offer a choice between multiple candidates,
elected by secret ballot. However, the CPSU delegates at the Plenum watered
down Gorbachev’s proposal, and democratic choice within the Communist Party was
never significantly implemented.
Gorbachev continued to radically expand the scope of glasnost
during the late 1980s, stating that no subject was off limits for open
discussion in the media. Even so, the cautious Soviet intelligentsia took
almost a year to begin pushing the boundaries to see if he meant what he said.
For the first time, the Communist Party leader appealed over the heads of
Central Committee members for the people’s support in exchange for expansion of
liberties. The tactic proved successful – within two years political reform
could no longer be sidetracked by Party conservatives. An unintended
consequence was that expanding the scope of reform would ultimately destroy the
very system it was designed to save.
On January 14, 1991, Nikolai Ryzhkov resigned from his post
as Chairman of the Council of Ministers, or premier of the Soviet Union, and
was succeeded by Valentin Pavlov in the newly-established post of Prime
Minister of the Soviet Union. On March 17, 1991, in a Union-wide referendum,
76.4% of voters endorsed retention of a reformed Soviet Union. The Baltic
republics, Armenia, Georgia, and Moldova boycotted the referendum, as did Checheno-Ingushetia (an autonomous republic within Russia that had a strong
desire for independence, and by now referred to itself as Ichkeria). In each of
the other nine republics, a majority of the voters supported the retention of a
reformed Soviet Union. On June 12, 1991, Boris Yeltsin won 57% of the popular
vote in democratic elections for the newly-created post of President of the
Russian SFSR, defeating Gorbachev’s preferred candidate, Ryzhkov, who won 16%
of the vote. In his election campaign, Yeltsin criticized the “dictatorship of
the center,” but did not yet suggest that he would introduce a market economy.
August Coup
Faced with growing separatism, Gorbachev sought to
restructure the Soviet Union into a less centralized state. On August 20, 1991,
the Russian SFSR was scheduled to sign a New Union Treaty that would have
converted the Soviet Union into a federation of independent republics with a
common president, foreign policy, and military. It was strongly supported by
the Central Asian republics, which needed the economic advantages of a common
market to prosper. However, it would have meant some degree of continued
Communist Party control over economic and social life.
More radical reformists were increasingly convinced that a
rapid transition to a market economy was required, even if the eventual outcome
meant the disintegration of the Soviet Union into several independent states.
Independence also accorded with Yeltsin’s desires as president of the Russian
Federation, as well as those of regional and local authorities to get rid of
Moscow’s pervasive control. In contrast to the reformers’ lukewarm response to
the treaty, the conservatives and Russian nationalists of the USSR – still
strong within the CPSU and the military – were opposed to weakening the Soviet
state and its centralized power structure.
On August 19, 1991, Gorbachev’s vice president, Gennady
Yanayev, Prime Minister Valentin Pavlov, Defense Minister Dmitry Yazov, KGB
chief Vladimir Kryuchkov, and other senior officials acted to prevent the union
treaty from being signed by forming the “State Committee on the State of
Emergency,” which put Gorbachev – on holiday in Foros, Crimea – under house
arrest and cut off his communications. The coup leaders issued an emergency
decree suspending political activity and banning most newspapers. Coup
organizers expected some popular support but found that public sympathy in
large cities and in the republics was largely against them, manifested by
public demonstrations, especially in Moscow. Russian SFSR President Yeltsin
condemned the coup and garnered popular support.
Thousands of Muscovites came out to defend the White House
(the Russian Federation’s parliament and Yeltsin’s office), the symbolic seat
of Russian sovereignty at the time. The organizers tried but ultimately failed
to arrest Yeltsin, who rallied opposition to the coup with speech-making atop a
tank. The special forces dispatched by the coup leaders took up positions near
the White House, but members refused to storm the barricaded building. The coup
leaders also neglected to jam foreign news broadcasts, so many Muscovites
watched it unfold live on CNN. Even the isolated Gorbachev was able to stay
abreast of developments by tuning into BBC World Service on a small transistor
radio.
After three days, on August 21, 1991, the coup collapsed.
The organizers were detained and Gorbachev returned as president, albeit with
his power much depleted.
The Fall: August – December 1991
On August 24, 1991, Gorbachev dissolved the Central
Committee of the CPSU, resigned as the party’s general secretary, and dissolved
all party units in the government. Five days later, the Supreme Soviet
indefinitely suspended all CPSU activity on Soviet territory, effectively
ending Communist rule in the Soviet Union and dissolving the only remaining
unifying force in the country. The Soviet Union collapsed with dramatic speed
in the last quarter of 1991. Between August and December, ten republics
declared their independence, largely out of fear of another coup. By the end of
September, Gorbachev no longer had the authority to influence events outside of
Moscow. He was challenged even there by Yeltsin, who had begun taking over what
remained of the Soviet government, including the Kremlin.
On September 17, 1991, General Assembly resolution numbers
46/4, 46/5, and 46/6 admitted Estonia, Latvia, and Lithuania to the United
Nations, conforming to Security Council resolution numbers 709, 710, and 711,
passed on September 12 without a vote. The final round of the Soviet Union’s
collapse began with a Ukrainian popular referendum on December 1, 1991, in
which 90 percent of voters opted for independence. The secession of Ukraine,
the second-most powerful republic, ended any realistic chance of Gorbachev
keeping the Soviet Union together even on a limited scale. The leaders of the
three principal Slavic republics, Russia, Ukraine, and Belarus (formerly
Byelorussia), agreed to discuss possible alternatives to the union.
On December 8, the leaders of Russia, Ukraine, and Belarus
secretly met in Belavezhskaya Pushcha, in western Belarus, and signed the
Belavezha Accords, which proclaimed the Soviet Union had ceased to exist and
announced formation of the Commonwealth of Independent States (CIS) as a looser
association to take its place. They also invited other republics to join the
CIS. Gorbachev called it an unconstitutional coup. However, by this time there
was no longer any reasonable doubt that, as the preamble of the Accords put it,
“the USSR, as a subject of international law and a geopolitical reality,
is ceasing its existence.” On December 12, the Supreme Soviet of the
Russian SFSR formally ratified the Belavezha Accords and renounced the 1922
Union Treaty. It also recalled the Russian deputies from the Supreme Soviet of
the USSR. In effect, the largest and most powerful republic had seceded from
the Union. Later that day, Gorbachev hinted for the first time that he was
considering stepping down.
Doubts remained over whether the Belavezha Accords had legally
dissolved the Soviet Union since they were signed by only three republics.
However, on December 21, 1991, representatives of 11 of the 12 remaining
republics – all except Georgia – signed the Alma-Ata Protocol, which confirmed
the dissolution of the Union and formally established the CIS. They also recognized
and accepted Gorbachev’s resignation. While Gorbachev hadn’t made any formal
plans to leave his position yet, he did tell CBS News that he would resign as
soon as he saw that the CIS was indeed a reality.
In a nationally televised speech early in the morning of
December 25, 1991, Gorbachev resigned as president of the USSR – or, as he put
it, “I hereby discontinue my activities at the post of President of the
Union of Soviet Socialist Republics.” He declared the office extinct, and
all of its powers, including control of the nuclear arsenal, were ceded to
Yeltsin. A week earlier, Gorbachev met with Yeltsin and accepted the fait
accompli of the Soviet Union’s dissolution. On the same day, the Supreme Soviet
of the Russian SFSR adopted a statute to change Russia’s legal name from “Russian
Soviet Federative Socialist Republic” to “Russian Federation,” showing that it
was now a sovereign state. On the night of December 25, at 7:32 p.m. Moscow
time, after Gorbachev left the Kremlin, the Soviet flag was lowered for the
last time and the Russian tricolor was raised in its place, symbolically
marking the end of the Soviet Union. That same day, U.S. President George
H.W. Bush delivered a brief televised address officially recognizing the
independence of the 11 remaining republics.
On December 26, the upper chamber of the Union’s Supreme
Soviet voted both itself and the Soviet Union out of existence. The lower
chamber, the Council of the Union, had been out of commission since December
12, when the recall of Russian deputies left it without a quorum. The following
day Yeltsin moved into Gorbachev’s former office, though Russian authorities
had taken over the suite two days earlier. By the end of 1991, the few
remaining Soviet institutions that had not been taken over by Russia ceased
operation, and individual republics assumed the central government’s role.
The Alma-Ata Protocol addressed issues such as UN membership
following dissolution. Notably, Russia was authorized to assume the Soviet
Union’s UN membership, including its permanent seat on the Security Council.
The Soviet Ambassador to the UN delivered a letter signed by Russian President
Yeltsin to the UN Secretary General dated December 24, 1991, informing him that
by virtue of the Alma-Ata Protocol, Russia was the successor state to the USSR.
After being circulated among the other UN member states and with no objections
being raised, the statement was accepted on
December 31, 1991.
The Transition to a Market Economy, 1991-1998
Following the collapse of the Soviet Union, Russia radically transformed from a centrally planned economy to a globally
integrated market economy. Corrupt and haphazard privatization processes turned
major state-owned firms over to politically connected “oligarchs”,
which left equity ownership highly concentrated. Yeltsin’s program of radical,
market-oriented reform came to be known as “shock therapy.” It was
based on the recommendations of the IMF and a group of top American economists,
including Larry Summers. The results were disastrous: real GDP fell by more
than 40% by 1999, hyperinflation wiped out personal savings, and crime and
destitution spread rapidly. Difficulties in collecting government revenues amid
the collapsing economy and a dependence on short-term borrowing to finance
budget deficits led to the 1998 Russian financial crisis.
Also during this time, Russia became the largest
borrower from the International Monetary Fund with loans totaling $20 billion.
The IMF was criticized for lending so much, as Russia introduced few of the
reforms it had promised in exchange for the money, and critics suspected that a
large part of these funds was diverted or even used to fund illegal
enterprises.
37.3: Apartheid Repealed
37.3.1: Institutional Racism in South Africa
Due to increasing Afrikaner resentment of perceived disempowerment by an
underpaid black workforce and by economically dominant white English speakers,
and the concurrent electoral success of the National Party in 1948,
institutional racism became state policy under apartheid.
Learning Objective
Examine how racism was institutionalized in
South Africa during apartheid
Key Points
- In South Africa during apartheid, institutional racism was a powerful means of excluding from resources and power any person not categorized or marked as white.
- The Union of South Africa allowed social custom and law to govern multiracial affairs and the racial allocation of access to economic, social, and political status. Nevertheless, by 1948 gaps in the social structure concerning the rights and opportunities of nonwhites were apparent.
- Many Afrikaners,
whites chiefly of Dutch descent, resented what they perceived as disempowerment
by an underpaid black workforce and the superior economic power and prosperity
of white English speakers.
- The National Party’s election platform stressed that apartheid would preserve a market for white employment in which nonwhites could not compete, and because the voting system was disproportionately weighted in favor of rural constituencies and the Transvaal in particular, the 1948 election catapulted the National Party from a small minority to a commanding position with an eight-vote parliamentary lead.
- The first grand
apartheid law was the Population Registration Act of 1950, which formalized
racial classification and introduced an identity card for all persons over the
age of 18, specifying their racial group.
- The second
pillar of grand apartheid was the Group Areas Act of 1950, which put an end to
diverse settlement areas and determined where one lived according to race.
- The Prohibition
of Mixed Marriages Act of 1949 prohibited marriage between those of different
races, and the Immorality Act of 1950 made sexual relations with a person of a
different race a criminal offense.
- Under the
Reservation of Separate Amenities Act of 1953, municipal grounds could be
reserved for a particular race, creating separate beaches,
buses, hospitals, schools, universities, and other facilities.
- Further laws were designed to suppress resistance, especially armed
resistance, to apartheid.
Key Terms
- Atlantic Charter: A pivotal policy statement issued on August 14, 1941, that defined the Allied goals for the post-war world: no territorial aggrandizement, no territorial changes made against the wishes of the people, restoration of self-government to those deprived of it, reduction of trade restrictions, global cooperation to secure better economic and social conditions for all, freedom from fear and want, freedom of the seas, abandonment of the use of force, and disarmament of aggressor nations.
- Bantustans: Also known as Bantu homelands, black homelands, black states, or simply homelands, territories set aside for black inhabitants of South Africa and South-West Africa (now Namibia) as part of apartheid. Ten were established in South Africa and ten in neighboring South-West Africa (then under South African administration) for members of designated ethnic groups, with the aim of making each territory ethnically homogeneous and creating autonomous nation-states for South Africa’s black ethnic groups.
In South Africa during apartheid, institutional racism was a powerful means of excluding from
resources and power any person not categorized as white. Those considered black were further discriminated against based upon their backgrounds, with Africans
facing more extreme forms of exclusion and exploitation than those marked as
colored or Indian.
Election of 1948
The Union of South Africa
allowed social custom and law to govern the consideration of multiracial
affairs and the allocation in racial terms of access to economic, social, and
political status. Most white South Africans, regardless of their differences, accepted the prevailing pattern. Nevertheless, by 1948 it
remained apparent that there were occasional gaps in the social structure,
whether legislated or otherwise, concerning the rights and opportunities of
nonwhites. The rapid economic development of World War II attracted black
migrant workers in large numbers to chief industrial centers where they
compensated for the wartime shortage of white labor. However, this escalated
rate of black urbanization went unrecognized by the South African government,
which failed to accommodate the influx with parallel expansion in housing or
social services.
Overcrowding, spiking crime
rates, and disillusionment resulted. Urban blacks came to support a new
generation of leaders influenced by the principles of self-determination and
popular freedoms enshrined in such statements as the Atlantic Charter. Whites
reacted negatively to these developments. Many Afrikaners, whites chiefly of
Dutch descent but with early infusions of Germans and French Huguenots who were
soon assimilated, also resented what they perceived as disempowerment by an
underpaid black workforce and the superior economic power and prosperity of
white English speakers. In addition, Jan Smuts, a strong advocate of the
United Nations, lost domestic support when South Africa was criticized by other
UN member states for its color bar and its continued mandate over South-West Africa.
Afrikaner nationalists
proclaimed they would offer voters a new policy to ensure continued
white domination. The policy, initially expounded in a theory drafted by
Hendrik Verwoerd, was presented to the National Party by the Sauer
Commission. It called for a systematic effort to organize the relations, rights,
and privileges of the races as officially defined through a series of
parliamentary acts and administrative decrees. Segregation had previously been pursued only in major matters, such as separate schools, and enforcement
depended on local authorities and societal complicity. Now it would be a matter of national legislation. The party gave this policy a
name: apartheid, meaning “apartness”. Apartheid would be the basic
ideological and practical foundation of Afrikaner politics for the next quarter-century.
The National Party’s
election platform stressed that apartheid would preserve a market for white
employment in which nonwhites could not compete. On the issues of black urbanization,
the regulation of nonwhite labor, influx control, social security, farm
tariffs, and nonwhite taxation, the United Party’s policy remained
contradictory and confused. Its traditional bases of support not only took
mutually exclusive positions, but found themselves increasingly at odds with
each other. Smuts’ reluctance to consider South African foreign policy against
the mounting tensions of the Cold War also stirred up discontent, while the
nationalists promised to purge the state and public service of communist
sympathizers. First to desert the United Party were Afrikaner farmers, who
wished to see a change in influx control due to problems with squatters, as
well as higher prices for their maize and other produce in the face of mine owners’
demand for cheap food policies.
The party also failed to appeal to its working-class constituents given its long-term affiliation with affluent and
capitalist sectors. Populist rhetoric allowed the National Party to
sweep eight constituencies in the mining and industrial centers of the
Witwatersrand and five more in Pretoria. Barring the predominantly
English-speaking landowner electorate of the Natal, the United Party was
defeated in almost every rural district. Its urban losses in the nation’s most
populous province, the Transvaal, proved equally devastating. Because the voting
system was disproportionately weighted in favor of rural constituencies and the
Transvaal in particular, the 1948 election catapulted the National Party from a
small minority to a commanding position with an eight-vote parliamentary lead. Daniel
François Malan became the first nationalist prime minister, with the aim of
implementing apartheid and silencing liberal opposition.
Legislation
NP leaders argued that South
Africa did not comprise a single nation, but was made up of four distinct racial
groups: white, black, colored, and Indian. Such groups were split into 13
nations or racial federations. White people encompassed the English and
Afrikaans language groups; the black populace was divided into ten such groups.
The state passed laws that paved the way for “grand apartheid,” large-scale segregation by compelling people to live
in separate places defined by race, leading to the creation of black-only
townships where blacks were relocated en masse. This strategy was influenced
in part by British policy after Britain took control of the Boer republics in
the Anglo-Boer War.
The first grand apartheid
law was the Population Registration Act of 1950, which formalized racial
classification and introduced an identity card for all persons over the age of
18 specifying their racial group. Official boards were established to
decide on a classification when a person’s race was unclear. This
caused difficulties for many people, especially colored people, whose family members were sometimes placed in different racial classes.
The second pillar of grand
apartheid was the Group Areas Act of 1950. Until then, most settlements had
people of different races living side-by-side. This Act put an end to diverse
areas and determined where one lived according to race. Each race was allotted
its own area, used in later years as a basis of forced removal. The
Prevention of Illegal Squatting Act of 1951 allowed the government to demolish
black shanty town slums and forced white employers to pay for the construction
of housing for black workers who were permitted to reside in cities
otherwise reserved for whites.
The Prohibition of Mixed
Marriages Act of 1949 prohibited marriage between persons of different races,
and the Immorality Act of 1950 made sexual relations with a person of a
different race a criminal offense.
Under the Reservation of
Separate Amenities Act of 1953, municipal grounds could be reserved for a
particular race, creating separate beaches, buses,
hospitals, schools, universities, and other facilities. Signboards such as “whites
only” applied to public areas, including park benches. Blacks were
provided with services greatly inferior to those given to whites and, to a
lesser extent, inferior to those given to Indian and colored people.
Further laws suppressed resistance, especially armed resistance, to apartheid. The
Suppression of Communism Act of 1950 banned any party subscribing to Communism.
The act defined Communism and its aims so broadly that anyone who opposed
government policy risked being labeled as a Communist. Since the law
specifically stated that Communism aimed to disrupt racial harmony, it was
frequently used to gag opposition to apartheid. Disorderly gatherings were
banned, as were certain organizations deemed threatening to the
government.
Education was segregated by
the 1953 Bantu Education Act, which crafted a separate system of education for
black South African students and was designed to prepare black people for lives
as a laboring class. In 1959, separate universities were created for black,
colored, and Indian people. Existing universities were not permitted to enroll
new black students. The Afrikaans Medium Decree of 1974 required the use of
Afrikaans and English equally in high schools outside the homelands.
The Bantu Authorities Act of
1951 created separate government structures for blacks and whites and was the
first legislation to support the government’s plan of separate
development in the Bantustans. The Promotion of Black Self-Government Act of
1959 entrenched the NP policy of nominally independent “homelands”
for blacks. So-called “self-governing Bantu units” were proposed,
which would have devolved administrative powers with the promise later of
autonomy and self-government. It also abolished the seats of white representatives
of black South Africans and disenfranchised the few blacks still qualified to
vote. The Bantu Investment Corporation Act of 1959 set up a mechanism to
transfer capital to the homelands to create employment there. Legislation in
1967 allowed the government to halt industrial development in white cities and
redirect such development to the black homelands. The Black Homeland
Citizenship Act of 1970 marked a new phase in Bantustan strategy. It changed
the citizenship of blacks to apply only within one of the ten autonomous
territories. The aim was to ensure a demographic majority of white people
within South Africa by having all ten Bantustans achieve full independence.
The government tightened pass
laws compelling blacks to carry identity documents in order to prevent the
immigration of blacks from other countries. To reside in a city, blacks had to be
employed there. Until 1956, women were for the most part excluded from these
pass requirements, as attempts to introduce pass laws for women were met with
fierce resistance.
Disenfranchisement of
Colored Voters
In 1950, D.F. Malan
announced the NP’s intention to create a Colored Affairs Department. J.G.
Strijdom, Malan’s successor as Prime Minister, moved to strip voting rights
from black and colored residents of the Cape Province. The previous government
introduced the Separate Representation of Voters Bill into Parliament in
1951; however, four voters, G. Harris, W.D. Franklin, W.D. Collins, and Edgar
Deane, challenged its validity in court with support from the United Party. The
Cape Supreme Court upheld the act, but it was reversed by the Appeal Court, which
found it invalid because a two-thirds majority in a joint sitting of both
Houses of Parliament was needed to change the entrenched clauses of the
Constitution. The government then introduced the High Court of Parliament Bill
(1952), which gave Parliament the power to overrule decisions of the court. The
Cape Supreme Court and the Appeal Court declared this invalid as well.
In 1955, the Strijdom
government increased the number of judges in the Appeal Court from five to
11 and appointed pro-Nationalist judges to fill the new seats. In the same
year, the Strijdom government introduced the Senate Act, which increased the
Senate from 49 seats to 89. Adjustments were made to the effect that the NP
controlled 77 of these seats. Parliament met in a joint sitting and passed the
Separate Representation of Voters Act in 1956, which transferred colored voters
from the common voters’ roll in the Cape to a new colored voters’ roll.
Immediately after the vote, the Senate was restored to its original size.
The Senate Act was contested
in the Supreme Court, but the recently enlarged Appeal Court, packed with
government-supporting judges, upheld both the Senate and Separate
Representation of Voters Acts. The Separate Representation of Voters Act
allowed colored people to elect four people to Parliament, but a 1969 law
abolished those seats and stripped colored people of their right to vote. Since
Asians had never been allowed to vote, this resulted in whites being the sole
enfranchised group.
Division Among Whites
Before South Africa became a
republic, politics among white South Africans was typified by the division
between mainly Afrikaner pro-republic conservatives and largely English
anti-republican liberal sentiments, with the legacy of the Boer War still
affecting viewpoints among many people. Once South Africa became a republic,
Prime Minister Hendrik Verwoerd called for improved relations and greater
accord between people of British descent and the Afrikaners. He claimed that
the only difference among these groups was between those in favor of apartheid
and those against it. The ethnic division would no longer be between Afrikaans
and English speakers, but between blacks and whites. Most Afrikaners supported
the notion of unanimity among white people as a means to ensure their safety.
White voters of British descent were divided. Many opposed a republic,
leading to a majority “no” vote in Natal. Later, some recognized the
perceived need for white unity, convinced by the growing trend of decolonization
elsewhere in Africa, which concerned them. British Prime Minister Harold
Macmillan’s “Wind of Change” speech left the British faction feeling
that Britain had abandoned them.
More conservative English speakers
supported Verwoerd. Yet others were troubled by the implications of severing
ties with Britain and wished to remain loyal to the Crown. They were displeased
with their perceived choice between British and South African nationalities.
Although Verwoerd tried to bind these different blocs along racial lines, subsequent
voting patterns illustrated only a minor swell of support, indicating that many
English speakers remained apathetic and Verwoerd had not truly succeeded in
uniting the white population.
37.3.2: The African National Congress
The African National Congress (ANC) resisted the apartheid system in South Africa using both peaceful and violent
means.
Learning Objective
Describe the
origins and evolution of the African National Congress
Key Points
- The
African National Congress (ANC) was formed on January 8, 1912, as a way to
bring Africans together as one people to defend their rights and freedoms.
- The successful campaign to raise awareness of the plight of Indians in South Africa, led by Mahatma Gandhi, inspired blacks in South Africa to resist the racism and inequality that they and other non-whites were experiencing under apartheid.
- In
1949, the ANC saw a jump in membership, which had previously lingered
around 5,000, and began to establish a firm presence in South African national
society.
- In
June 1955, the Congress of the People, organized by the ANC and Indian,
Colored, and White organizations, adopted the Freedom Charter, the fundamental
document of the anti-apartheid struggle that demanded equal rights for
all regardless of race.
- In
1959, a number of members broke from the ANC due to objections over the
ANC’s reorientation away from African nationalist policies. They formed the
rival Pan Africanist Congress (PAC).
- The
ANC planned a campaign against the Pass Laws to begin on March 31, 1960. The
PAC preempted the ANC by holding unarmed protests 10 days earlier, during
which 69 protesters were killed and 180 injured by police fire in what became
known as the Sharpeville massacre. In the aftermath of the tragedy, both organizations
were banned from political activity.
- Following the Sharpeville massacre, the ANC leadership concluded that methods of non-violence were not suitable against the apartheid system. A military wing was formed in 1961, called Umkhonto we Sizwe (MK), with Nelson Mandela as its first leader.
- The
ANC was classified as a terrorist organization by the South African government
and some Western countries, including the United States and United Kingdom.
Key Term
- apartheid: A system of institutionalized racial segregation and discrimination that existed in South Africa between 1948 and 1991.
Origins
The African
National Congress (ANC) was formed on January 8, 1912, by Saul Msane, Josiah
Gumede, John Dube, Pixley ka Isaka Seme, and Sol Plaatje. It grew from a number
of chiefs, people’s representatives, and church organizations as a way to bring
Africans together as one people to defend their rights and freedoms. From its
inception, the ANC represented both traditional and modern elements of South
African black society, from tribal chiefs to church bodies and educated black
professionals. Women, however, were only admitted as affiliate members from
1931, and as full members in 1943. The formation of the ANC Youth League in
1944 by Anton Lembede heralded a new generation committed to building
non-violent mass action against the legal underpinnings of the white minority’s
supremacy.
In 1946, the
ANC allied with the South African Communist Party to assist in the formation of
the South African Mine Workers’ Union. After the miners strike became a general
labor strike, the ANC’s President General Alfred Bitini Xuma, along with
delegates of the South African Indian Congress, attended the 1946 session of
the United Nations General Assembly, where the treatment of Indians in South
Africa was raised by the government of India. Together, they put
the issue of police brutality and the wider struggle for equality in South
Africa on the radar of the international community.
Opposition to
Apartheid
The return
of an Afrikaner-led National Party government by the overwhelmingly white
electorate in 1948 signaled the advent of the policy of apartheid. During the
1950s, non-whites were removed from electoral rolls, residence and mobility
laws were tightened, and political activities restricted. The successful
campaign to raise awareness of the plight of Indians in South Africa, conducted
under the leadership of Mahatma Gandhi, inspired blacks in South Africa to
resist the racism and inequality that they and other non-whites were
experiencing. The ANC also realized it needed a fervent leader, as Gandhi had
been for the Indians: someone who was, in the words of Nelson Mandela,
“willing to violate the law and if necessary go to prison for their beliefs as
Gandhi had”. The two groups began working together, forcing themselves to
accept one another and abandon their personal prejudices, even jointly
campaigning for their struggle to be taken up by the United Nations.
In 1949, the
ANC saw a jump in membership, which had previously lingered around 5,000,
and began to establish a firm presence in South African national society. In
June 1952, the ANC joined with other anti-apartheid organizations in a Defiance
Campaign against the restriction of political, labor, and residential rights,
during which protesters deliberately violated oppressive laws, following the
example of Gandhi’s passive resistance in KwaZulu-Natal and in India. The
campaign was called off in April 1953 after new laws prohibiting protest
meetings were passed. In June 1955, the Congress of the People, organized by
the ANC and Indian, Colored, and White organizations at Kliptown near
Johannesburg, adopted the Freedom Charter, henceforth the fundamental document
of the anti-apartheid struggle, demanding equal rights for all
regardless of race. As opposition to the regime’s policies continued, 156
leading members of the ANC and allied organizations were arrested in 1956. The
resulting “treason trial” ended with mass acquittals five years
later.
The ANC
first called for an academic boycott of South Africa in protest of its
apartheid policies in 1958 in Ghana. The call was repeated the following year
in London.
In 1959, a
number of members broke from the ANC because they objected to the ANC’s
reorientation away from African nationalist policies. They formed the rival Pan
Africanist Congress (PAC), led by Robert Sobukwe.
Protest and Banning
The ANC
planned a campaign against the Pass Laws, which required blacks to carry an identity
card at all times to justify their presence in white areas, to begin on March
31, 1960. The PAC preempted the ANC by holding unarmed protests 10 days
earlier, during which 69 protesters were killed and 180 injured by police fire
in what became known as the Sharpeville massacre. In the aftermath of the
tragedy, both organizations were banned from political activity. International
opposition to the regime increased throughout the 1950s and 1960s, fueled by
the growing number of newly independent African nations, the Anti-Apartheid
Movement in Britain, and the civil rights movement in the United States. In
1960, the leader of the ANC, Albert Luthuli, won the Nobel Peace Prize.
Violent Political Resistance
Following
the Sharpeville massacre in 1960, the ANC leadership concluded that methods of
non-violence, such as those utilized by Gandhi against the British Empire, were
not suitable against the apartheid system. A military wing was formed in 1961,
called Umkhonto we Sizwe (MK), meaning “Spear of the Nation”, with
Mandela as its first leader. MK operations during the 1960s primarily involved
targeting and sabotaging government facilities. Mandela was arrested in 1962,
convicted of sabotage in 1964, and sentenced to life imprisonment on Robben Island,
along with Sisulu and other ANC leaders after the Rivonia Trial. During the
1970s and 1980s, the ANC leadership in exile under Oliver Tambo targeted apartheid government leadership, command and control, secret
police, and military-industrial complex assets and personnel in decapitation
strikes, targeted killings, and guerrilla actions such as bomb explosions in
facilities frequented by military and government personnel. A number of
civilians were also killed in these attacks. Examples include the
Amanzimtoti bombing, the Sterland bomb in Pretoria, the Wimpy bomb in Pretoria,
the Juicy Lucy bomb in Pretoria, and the Magoo’s bar bombing in Durban. ANC
acts of sabotage aimed at government institutions included the bombing of the
Johannesburg Magistrates Court, the attack on the Koeberg nuclear power
station, the rocket attack on Voortrekkerhoogte in Pretoria, and the 1983
Church Street bombing in Pretoria, which killed 16 and wounded 130.
The ANC was
classified as a terrorist organization by the South African government and some
Western countries, including the United States and United Kingdom.
Nevertheless, the ANC had a London office from 1978 to 1994 at 28 Penton Street
in Islington, now marked with a plaque. During this period, the South African
military engaged in a number of raids and bombings on ANC bases in Botswana,
Mozambique, Lesotho, and Swaziland. Dulcie September, a member of the ANC investigating the arms trade between France and South Africa, was
assassinated in Paris in 1988. The ANC also faced allegations that dissident
members in its training camps were subjected to torture, detention without
trial, and even execution.
Violence
also occurred between the ANC and the Inkatha Freedom Party, a political party
that grew out of a 1920s cultural organization established for Zulus. Between
1985 and 1989, 5,000 civilians were killed during in-fighting between the two
parties. Massacres of each other’s supporters include the Shell House massacre
and the Boipatong massacre.
As the years
progressed, ANC attacks, coupled with international pressure and internal
dissent, increased in South Africa. The ANC received financial and tactical
support from the USSR, which orchestrated military involvement with surrogate
Cuban forces via Angola. However, the fall of the USSR in 1991 brought an
end to funding and changed the attitude of some Western governments that previously supported the apartheid regime as an ally against communism. The
South African government found itself under increasing internal and external
pressure, and this, together with a more conciliatory tone from the ANC,
resulted in a change in the political landscape. State President F.W. de Klerk
unbanned the ANC and other banned organizations on February 2, 1990, and began
peace talks for a negotiated settlement to end apartheid.
37.3.3: Nelson Mandela and the African National Congress
Nelson Mandela was a central figure in the negotiation process that led to South Africa’s transition from apartheid minority rule to a multicultural democracy.
Learning Objective
Describe the role played by Mandela in repealing apartheid
Key Points
- Apartheid was a system of racial discrimination and segregation in the South African government, ended through a series of negotiations between 1990 and 1993.
- When de Klerk became President in 1989, he built on previous secret negotiations with the imprisoned Mandela. The first significant steps toward formal negotiations took place in February 1990 when de Klerk announced the unbanning of the ANC and other organizations and the release of ANC leader Nelson Mandela after 27 years in prison.
- In May 1990, Mandela led a multiracial ANC delegation into preliminary negotiations with a government delegation of 11 Afrikaner men, which led to the Groot Schuur Minute in which the government lifted the state of emergency.
- In August 1990, Mandela—recognizing the ANC’s severe military disadvantage—offered a ceasefire, the Pretoria Minute, for which he was widely criticized by MK activists.
- The Convention for a Democratic South Africa (CODESA) began in December 1991. Mandela remained a key figure, taking the stage to denounce de Klerk as the “head of an illegitimate, discredited minority regime.”
- CODESA 2 was held in May 1992, at which de Klerk insisted that post-apartheid South Africa must use a federal system with a rotating presidency to ensure the protection of ethnic minorities. Mandela opposed this, demanding a unitary system governed by majority rule.
- In September 1992, Mandela and de Klerk resumed negotiations and agreed to a multiracial general election, which would result in a five-year coalition government and a constitutional assembly. The ANC conceded to safeguarding the jobs of white civil servants. The duo also agreed on an interim constitution based on a liberal democratic model, dividing the country into nine provinces each with its own premier and civil service, a compromise between federalism and Mandela’s desire for a unitary government.
- The National Assembly elected during the 1994 general election in turn elected Mandela as South Africa’s first black chief executive.
- Presiding over the transition from apartheid minority rule to a multicultural democracy, Mandela saw national reconciliation as the primary task of his presidency.
Key Term
- apartheid: A system of institutionalized racial segregation and discrimination in South Africa between 1948 and 1991.
The apartheid system in South Africa was ended through a series of negotiations between 1990 and 1993 and through unilateral steps by the de Klerk government. These negotiations took place between the governing National Party, the African National Congress (ANC), and a wide variety of other political organizations. The negotiations were accompanied by political violence, including allegations of a state-sponsored third force destabilizing the country. The negotiations resulted in South Africa’s first non-racial election, which was won by the ANC.
Background
Apartheid was a system of racial discrimination and segregation in the South African government. It was formalized in 1948, forming a framework for political and economic dominance by the white population and severely restricting the political rights of the black majority. Between 1960 and 1990, the ANC and other mainly black opposition political organizations were banned. As the National Party cracked down on black opposition to apartheid, most leaders of the ANC and other opposition organizations were either imprisoned or went into exile, including Nelson Mandela, who was imprisoned from 1962 until 1990. However, increasing local and international pressure on the government and the realization that apartheid could neither be maintained by force forever nor overthrown by the opposition without considerable suffering, eventually led both sides to the negotiating table.
Early Contact
The first meetings between the South African government and Nelson Mandela were driven by the National Intelligence Service (NIS) under the leadership of Niel Barnard and his Deputy Director General, Mike Louw. These secret meetings were designed to understand if there was sufficient common ground for future peace talks. As these meetings evolved, a level of trust developed between the key actors (Barnard, Louw, and Mandela). To facilitate future talks while preserving secrecy needed to protect the process, Barnard arranged for Mandela to be moved off Robben Island to Pollsmoor Prison in 1982. This provided him with more comfortable lodgings, but also gave easier access in a way that could not be compromised. Barnard therefore brokered an initial agreement in principle about what became known as “talks about talks”. It was at this stage that the process was elevated from a secret engagement to a more public engagement.
As the secret talks bore fruit and the political engagement began, NIS withdrew from center stage in the process and moved to a new phase of operational support work. This was designed to test public opinion about a negotiated solution. A key initiative was known in Security Force circles as the Dakar Safari, which saw a number of prominent Afrikaner opinion-makers engage with the African National Congress in Dakar, Senegal, and Leverkusen, Germany at events organized by the Institute for a Democratic Alternative for South Africa. The operational objective of this meeting was not to understand the opinions of the actors themselves—that was well-known at this stage within strategic management circles—but rather to gauge public opinion about a movement away from the previous security posture of confrontation and repression to one based on engagement and accommodation.
Unbanning and Mandela’s Release, 1990-91
When F.W. de Klerk became President in 1989, he built on the previous secret negotiations with the imprisoned Mandela. The first significant steps towards formal negotiations took place in February 1990 when in his speech at the opening of Parliament, de Klerk announced the unbanning of the ANC and other banned organizations and the release of ANC leader Nelson Mandela after 27 years in prison. Mandela proceeded on an African tour, meeting supporters and politicians in Zambia, Zimbabwe, Namibia, Libya, and Algeria. Then he continued to Sweden, where he was reunited with exiled ANC leader Oliver Tambo, and London, where he appeared at the Nelson Mandela: An International Tribute for a Free South Africa concert at Wembley Stadium in Wembley Park. In France, Mandela was welcomed by President François Mitterrand; in Vatican City by Pope John Paul II; and in the United Kingdom by Thatcher. In the United States, he met President George H.W. Bush, addressed both Houses of Congress, and visited eight cities, with particular popularity among the African-American community. In Cuba, he became friends with President Fidel Castro, whom he had long admired. He met President R. Venkataraman in India, President Suharto in Indonesia, Prime Minister Mahathir Mohamad in Malaysia, and Prime Minister Bob Hawke in Australia. He visited Japan, but not the USSR, a longtime ANC supporter. All the while, Mandela encouraged foreign countries to support sanctions against the apartheid government.
In May 1990, Mandela led a multiracial ANC delegation into preliminary negotiations with a government delegation of 11 Afrikaner men. Mandela impressed them with his discussions of Afrikaner history, and the negotiations led to the Groot Schuur Minute, in which the government lifted the state of emergency. In August, Mandela—recognizing the ANC’s severe military disadvantage—offered a ceasefire, the Pretoria Minute, for which he was widely criticized by Umkhonto we Sizwe (MK) activists. He spent much time trying to unify and build the ANC, appearing at a Johannesburg conference in December attended by 1,600 delegates, many of whom found him more moderate than expected. At the ANC’s July 1991 national conference in Durban, Mandela admitted the party’s faults and announced his aim to build a “strong and well-oiled task force” for securing majority rule. At the conference, he was elected ANC President, replacing the ailing Tambo, and a 50-strong multiracial, mixed-gendered national executive was elected.
Mandela was given an office in the newly purchased ANC headquarters at Shell House, Johannesburg, and moved into his wife Winnie Madikizela’s house in Soweto. Their marriage was increasingly strained as he learned of her affair with Dali Mpofu, but he supported her during her trial for kidnapping and assault. He gained funding for her defense from the International Defence and Aid Fund for Southern Africa and from Libyan leader Muammar Gaddafi. However, in June 1991, she was found guilty and sentenced to six years in prison, reduced to two on appeal. On April 13, 1992, Mandela publicly announced his separation from Winnie. The ANC forced her to step down from the national executive for misappropriating ANC funds and Mandela moved into the mostly white Johannesburg suburb of Houghton.
Mandela’s prospects for a peaceful transition were further damaged by an increase in “black-on-black” violence, particularly between ANC and Inkatha supporters in KwaZulu-Natal, which resulted in thousands of deaths. Mandela met with Inkatha leader Buthelezi, but the ANC prevented further negotiations on the issue. Mandela argued that there was a “third force” within the state intelligence services, fueling the violence. Mandela openly blamed de Klerk – whom he increasingly distrusted – for the Sebokeng massacre. In September 1991, a national peace conference was held in Johannesburg at which Mandela, Buthelezi, and de Klerk signed a peace accord, though the violence continued.
CODESA Talks: 1991-92
The Convention for a Democratic South Africa (CODESA) began in December 1991 at the Johannesburg World Trade Center, attended by 228 delegates from 19 political parties. Although Cyril Ramaphosa led the ANC’s delegation, Mandela remained a key figure, and after de Klerk used the closing speech to condemn the ANC’s violence, Mandela denounced de Klerk as the “head of an illegitimate, discredited minority regime.” Dominated by the National Party and ANC, little negotiation was achieved.
At CODESA 2 in May 1992, de Klerk insisted that post-apartheid South Africa must use a federal system with a rotating presidency to ensure the protection of ethnic minorities. Mandela opposed this, demanding a unitary system governed by majority rule. Following the Boipatong massacre of ANC activists by government-aided Inkatha militants, Mandela called off the negotiations before attending a meeting of the Organisation of African Unity in Senegal, at which he called for a special session of the UN Security Council and proposed that a UN peacekeeping force be stationed in South Africa to prevent state terrorism. Calling for domestic mass action, in August the ANC organized the largest-ever strike in South African history, and supporters marched on Pretoria.
Following the Bisho massacre, in which 28 ANC supporters and one soldier were shot dead by the Ciskei Defence Force during a protest march, Mandela realized that mass action was leading to further violence and resumed negotiations in September. He agreed to do so on the conditions that all political prisoners be released, Zulu traditional weapons be banned, and Zulu hostels fenced off. The latter two measures were intended to prevent further Inkatha attacks. de Klerk reluctantly agreed to these terms. The negotiations agreed that a multiracial general election would be held, resulting in a five-year coalition government of national unity and a constitutional assembly that gave the National Party continuing influence. The ANC also conceded to safeguarding the jobs of white civil servants. Such concessions brought fierce internal criticism. The duo also agreed on an interim constitution based on a liberal democratic model, guaranteeing separation of powers, creating a constitutional court, and including a U.S.-style bill of rights. The constitution also divided the country into nine provinces, each with its own premier and civil service, a compromise between de Klerk’s desire for federalism and Mandela’s desire for a unitary South African government.
The democratic process was threatened by the Concerned South Africans Group (COSAG), an alliance of far-right Afrikaner parties and black ethnic-secessionist groups like the Inkatha. In June 1993, the white supremacist Afrikaner Weerstandsbeweging (AWB) attacked the Kempton Park World Trade Center. Following the murder of ANC activist Chris Hani, Mandela gave a speech to calm rioting soon after appearing at a mass funeral in Soweto for Tambo, who had died of a stroke. In July 1993, both Mandela and de Klerk visited the U.S., independently meeting with President Bill Clinton and each receiving the Liberty Medal. Soon after, Mandela and de Klerk were jointly awarded the Nobel Peace Prize in Norway. Influenced by Thabo Mbeki, Mandela began meeting with big business figures and played down his support for nationalization, fearing that he would scare away much needed foreign investment. Although criticized by socialist ANC members, he had been encouraged to embrace private enterprise by members of the Chinese and Vietnamese Communist parties at the January 1992 World Economic Forum in Switzerland.
General Election: 1994
With the election set for April 27, 1994, the ANC began campaigning, opening 100 election offices and orchestrating People’s Forums across the country at which Mandela could appear. The ANC campaigned on a Reconstruction and Development Program (RDP) to build a million houses in five years, introduce universal free education, and extend access to water and electricity. The party’s slogan was “a better life for all,” although it was not explained how this development would be funded. With the exception of the Weekly Mail and the New Nation, South Africa’s press opposed Mandela’s election, fearing continued ethnic strife. Mandela devoted much time to fundraising for the ANC, touring North America, Europe, and Asia to meet wealthy donors, including former supporters of the apartheid regime. He also urged a reduction in the voting age from 18 to 14, which was ultimately rejected by the ANC.
Concerned that COSAG would undermine the election, particularly in the wake of the conflict in Bophuthatswana and the Shell House Massacre—incidents of violence involving the AWB and Inkatha, respectively—Mandela met with Afrikaner politicians and generals, including P.W. Botha, Pik Botha, and Constand Viljoen, persuading many to work within the democratic system. With de Klerk, he also convinced Inkatha’s Buthelezi to enter the elections rather than launch a war of secession. As leaders of the two major parties, de Klerk and Mandela appeared on a televised debate. Although de Klerk was widely considered the better speaker at the event, Mandela’s offer to shake his hand surprised him, leading some commentators to deem it a victory for Mandela. The election went ahead with little violence, although an AWB cell killed 20 with car bombs. As widely expected, the ANC won a sweeping victory, taking 63% of the vote, just short of the two-thirds majority needed to unilaterally change the constitution. The ANC was also victorious in seven provinces, with Inkatha and the National Party each taking another.
Presidency of Nelson Mandela
The newly elected National Assembly’s first act was to formally elect Mandela as South Africa’s first black chief executive. His inauguration took place in Pretoria on May 10, 1994, televised to a billion viewers globally. The event was attended by 4,000 guests, including world leaders from a wide range of geographic and ideological backgrounds. Mandela headed a Government of National Unity dominated by the ANC—which had no experience of governing by itself—but containing representatives from the National Party and Inkatha. Under the interim constitution, Inkatha and the National Party were entitled to seats in the government by virtue of winning at least 20 seats in the election. In keeping with earlier agreements, both de Klerk and Thabo Mbeki were given the position of Deputy President. Although Mbeki had not been his first choice for the job, Mandela grew to rely heavily on him throughout his presidency, allowing him to shape policy details. Although he dismantled press censorship and spoke out in favor of freedom of the press, Mandela was critical of much of the country’s media, noting that it was overwhelmingly owned and run by middle-class whites and believing that it focused too heavily on scaremongering about crime.
National Reconciliation
Presiding over the transition from apartheid minority rule to a multicultural democracy, Mandela saw national reconciliation as the primary task of his presidency. Having seen other post-colonial African economies damaged by the departure of white elites, Mandela worked to reassure South Africa’s white population that they were protected and represented in “the Rainbow Nation”. Although his National Unity government would be dominated by the ANC, he attempted to create a broad coalition by appointing de Klerk as Deputy President and other National Party officials as ministers for Agriculture, Energy, Environment, and Minerals and Energy, as well as naming Buthelezi as Minister for Home Affairs. The other cabinet positions were taken by ANC members, many of whom—like Joe Modise, Alfred Nzo, Joe Slovo, Mac Maharaj, and Dullah Omar—had long been comrades. Mandela’s relationship with de Klerk was strained because he believed de Klerk was intentionally provocative. Likewise, de Klerk felt that he was being intentionally humiliated by the president. In January 1995, Mandela heavily chastised him for awarding amnesty to 3,500 police officers just before the election, and later criticized him for defending former Minister of Defence Magnus Malan when the latter was charged with murder.
Mandela personally met with senior figures of the apartheid regime, including Hendrik Verwoerd’s widow, Betsie Schoombie, and lawyer Percy Yutar. He also laid a wreath by the statue of Afrikaner hero Daniel Theron. Emphasizing personal forgiveness and reconciliation, Mandela announced that “courageous people do not fear forgiving, for the sake of peace”. He encouraged black South Africans to get behind the previously hated national rugby team, the Springboks, as South Africa hosted the 1995 Rugby World Cup. Mandela wore a Springbok shirt at the final against New Zealand, and after the Springboks won the match, Mandela presented the trophy to captain Francois Pienaar, an Afrikaner. This was widely seen as a major step in the reconciliation of white and black South Africans. Mandela’s efforts at reconciliation assuaged the fears of whites, but also drew criticism from more militant blacks. Among the latter was his estranged wife, Winnie, who accused the ANC of being more interested in appeasing the white community than in helping the black majority.
Mandela oversaw the formation of a Truth and Reconciliation Commission to investigate crimes committed under apartheid by both the government and the ANC, appointing Desmond Tutu as its chair. To prevent the creation of martyrs, the Commission granted individual amnesties in exchange for testimony of crimes committed during the apartheid era. Dedicated in February 1996, it held two years of hearings detailing rapes, torture, bombings, and assassinations before issuing its final report in October 1998. Both de Klerk and Mbeki appealed to have parts of the report suppressed, though only de Klerk’s appeal was successful. Mandela praised the Commission’s work, stating that it “had helped us move away from the past to concentrate on the present and the future.”
37.4: The Rwandan Genocide
37.4.1: Composition of the Rwandan Population
The Rwandan population comprises three main ethnic groups: the Hutus, the Tutsis, and the Twa.
Learning Objective
Describe the ethnic subgroups that make up the
Rwandan population
Key Points
- The largest
ethnic groups in Rwanda are the Hutus, the Tutsis, and the Twa.
- When Europeans first explored the region around the African Great Lakes that has since become Rwanda, they described the people in the region as having descended from three racially distinct tribes and coexisting in a complex social order.
- A contrasting
picture of human cultural diversity was recorded in the early Rwandan oral
histories, ritual texts, and biographies, in which the terms Tutsi, Hutu, and
Twa were rarely used and the boundary between Tutsi and Hutu was somewhat open
to social mobility.
- Elites in
pre-colonial Rwanda propagated an origin myth of the three groups to justify
the hierarchical relationship of sociopolitical inequality between them in
sacred, religious terms.
- Despite sociopolitical
stratification, Rwanda was a unified society. Inhabitants all considered
themselves part of the same nation, spoke the same language, practiced the same
cultural traditions, and worshiped the same God.
- European
colonizers would later exploit group divisions as a means of securing control.
Key Terms
- serfs
-
The status of many peasants within feudal systems: an individual who occupies a plot of land and is required to work for the owner of that land in return for protection and the right to exploit certain fields on the property to maintain their own subsistence.
- pygmies
-
A member of an ethnic group whose
average height is unusually short. Anthropologists define this as any group where adult men are on average less than 4 feet 11 inches tall.
The largest ethnic groups in
Rwanda are the Hutus, the Tutsis, and the Twa. Starting with the rule of the Tutsi feudal monarchy in the 10th century, the Hutus were a subjugated social group.
It was not until Belgian colonization that the tensions between the Hutus and Tutsis
became focused on race, with the Belgians propagating the myth that Tutsis were
the superior ethnicity. The resulting tensions would eventually culminate in the slaughter of Tutsis in the Rwandan genocide. Since then, government policy
has changed to recognize one main ethnicity: “Rwandan.”
Pre-Colonial Rwanda
When Europeans first
explored the region around the African Great Lakes that has since become
Rwanda, they described the people found in the region as descending from
three racially distinct tribes and coexisting in a complex social order: the
Tutsis, Hutus, and Twa. The Tutsis, an elite minority of about 24% of the
population, were tall, slim pastoralists. The Hutu majority, about 75% of the
population, were stocky, strong farmers. The Twa were a marginalized minority
of 1% of the population: a tribe of pygmies, dwelling in the forests as hunters
and gatherers. Although these groups were distinct and stratified in relation
to one another, the boundary between Tutsi and Hutu was somewhat open to social
mobility. The Tutsi elite were defined by their exclusive ownership of land and
cattle. Hutus, though disenfranchised socially and politically, could shed
Hutuness, or kwihutura, by accumulating wealth and thereby rising through the
social hierarchy to the status of Tutsi.
A contrasting picture of
human cultural diversity was recorded in the early Rwandan oral histories,
ritual texts, and biographies, in which the terms Tutsi, Hutu, and Twa were
rarely used and had meanings different from those conceived by the Europeans. In
these oral histories, the term Tutsi was equivalent to the phrase “wealthy
noble”; Hutu meant “farmer”; and Twa was used to refer to people
skilled in hunting, use of fire, pottery-making, guarding, and other disciplines. In contrast to
the European conception, rural farmers were often described as wealthy and well-connected.
Kings sometimes looked down on them but still married individuals from this group and frequently conferred on them titles, land, herds, armies, servitors, and ritual functions.
Origin Myths
Elites in pre-colonial
Rwanda propagated an origin myth of the three groups to justify the
hierarchical relationship of sociopolitical inequality in sacred,
religious terms. According to this myth, Kigwa, a deity who fell from heaven,
had three sons: Gatwa, Gahutu, and Gatutsi. He chose an heir by giving each son
the responsibility of watching over a pot of milk during the night. Gatwa drank
the milk, Gahutu fell asleep and carelessly spilled his pot, and Gatutsi
kept watch, keeping his milk safe. Therefore, Kigwa appointed Gatutsi to be his
successor and Gahutu to be his brother’s servant, while Gatwa was to be
resigned to the status of an outsider. Gatutsi would possess cattle and power,
and Gahutu would only be allowed to acquire cattle through service to Gatutsi,
whereas Gatwa was condemned to the fringe of society. This myth was the basis
of the hierarchical relationship that placed the Tutsi at the apex of the
social pyramid. The prevalence of this myth became the basis for the social and
political stratification of Rwanda.
From the 15th century
when the Tutsi arrived in what is now Rwanda as migrant pastoralists to the
onset of colonization, Rwanda was a feudal monarchy. A Tutsi monarch ruled,
distributing land and political authority through hereditary chiefs whose power
was manifest in their land and cattle ownership. Most of these chiefs were
Tutsis. The land was farmed under an imposed system of patronage in which Tutsi
chiefs demanded manual labor in return for the rights of Hutus to occupy their
land. This system left Hutus with the status of serfs. Additionally, when
Rwanda conquered the peoples on its borders, their ethnic identities were cast
aside and they were simply labeled “Hutu.” Therefore, “Hutu” became an identity
that was not necessarily ethnic, but rather associated with subjugation.
Stratified Social Hierarchy
This social system was based
on five fundamental assumptions, as reinforced through group interactions and
influenced by cultural myths:
- Fundamental natural differences existed between the groups.
- The origin of the Tutsis was celestial.
- The civilization that Tutsis brought to Rwanda was superior.
- The kingship of the Tutsi Mwami was divinely ordained.
- Divine sanctions would occur if the monarchy was usurped by any other group.
Despite the stratification promulgated by these ideas, Rwanda was still
very much a unified society. Notwithstanding association with different groups
in the sociopolitical hierarchy, the inhabitants all considered themselves part
of the same nation, the Banyarwanda, which means “people of Rwanda.” They spoke
the same language, practiced the same cultural traditions, and worshiped the
same God. However, European colonizers would later exploit these group divisions as a means of securing control. The modern conception of Tutsi and
Hutu as distinct ethnic groups in no way reflects the pre-colonial relationship
between them. Tutsi and Hutu were simply groups occupying different places in
the Rwandan social hierarchy, the division between which was exacerbated by
slight differences in appearance propagated by occupation and pedigree.
37.4.2: Imperialism and Racial Divisions
European imperialists used
power disparities and pseudo-science to perpetuate the myth of divergent Tutsi
and Hutu racial identities.
Learning Objective
Explain how European imperialists encouraged
categorizing Rwandans on the basis of ethnicity
Key Points
Key Term
- Mwami
-
A chiefly title usually translated as “king.”
Unlike much of the rest of Africa,
Rwanda and the Great Lakes region were not divided up during the 1884 Berlin
Conference. Instead, the region was divided in an 1890 conference in Brussels. Rwanda and Burundi were given to the German Empire as colonial
spheres of interest in exchange for Germany renouncing all claims on Uganda.
The poor-quality maps referenced in these agreements left Belgium with a claim
on the western half of the country, and after several border skirmishes, the
final borders of the colony were not established until 1900. These borders
contained the kingdom of Rwanda as well as a group of smaller kingdoms on the
shore of Lake Victoria.
German and Belgian
Colonization
Germany
The construction of
divergent ethnic “Tutsi” and “Hutu” identities occurred during the era of
European colonization from the late 1880s to the 1950s. German colonialism did
little to alter the existing stratified social system. The Germans were not
interested in disrupting social affairs – their sole concern was the efficient
extraction of natural resources and trade of profitable cash crops. Colonial
bureaucrats relied heavily on native Tutsi chiefs to maintain order over the
Hutu lower classes and collect taxes. Thus, the German affirmation of the
stratified social structure was utilized by the Tutsi aristocracy as
justification for minority rule over the lower-class Hutu masses.
The German presence had
mixed effects on the authority of Rwandan governing powers. The Germans helped
the Mwami increase their control over Rwandan affairs, but Tutsi power weakened
with the introduction of capitalist forces and via increased integration with
outside markets and economies. Money came to be seen by many Hutus as a replacement
for cattle, in terms of both economic prosperity and social standing. Germany also weakened Tutsi power by introducing a head-tax on all Rwandans. As some Tutsis feared, the tax made the Hutus feel less bonded to their Tutsi patrons and
more dependent on European foreigners. The head-tax also implied equality among
those counted. Thus, despite Germany’s attempt to uphold traditional
Tutsi domination of the Hutus, the Hutu began to shift their ideas surrounding
this concept.
Belgium
Germany’s defeat in World
War I allowed Belgian forces to conquer Rwanda. Belgian involvement in the
region was far more intrusive than German administration. In an era of Social
Darwinism, European anthropologists claimed to identify a distinct “Hamitic
race” that was superior to native “Negroid” populations. Influenced by
racialized attitudes, Belgian social scientists declared that the Tutsis, who
wielded political control in Rwanda, must be descendants of the Hamites, who shared
a purported closer bloodline to Europeans. The Belgians concluded that the
Tutsis and Hutus composed two fundamentally different ethno-racial groups.
Thus, the Belgians viewed the Tutsis as more civilized, superior, and most
importantly, more European than the Hutus.
This perspective justified placement of societal control in the hands of the Tutsis at the expense of the
Hutus, establishing a comprehensive race theory that would dictate Rwandan
society until independence: Tutsi racial superiority and Hutu oppression. The
institutionalization of Tutsi and Hutu ethnic divergence was accomplished
through administrative, political, economic, and educational means. Initially,
Belgian administrators used an expedient method of classification based on the
number of cattle a person owned – anyone with ten or more cattle was considered
a member of the aristocratic Tutsi class. However, the presence of wealthy Hutu
was problematic. Then in 1933, the colonial administration institutionalized a
more rigid ethnic classification by issuing ethnic identification cards,
officially branding every Rwandan as Tutsi, Hutu, or Twa.
Tutsis began to believe the
myth of their superior racial status and exploited their power over the Hutu
majority. A history of Rwanda that justified the existence of these racial
distinctions was written. No historical, archaeological, or
linguistic traces have been found to date that confirm this official history.
The observed differences between the Tutsis and the Hutus are about the same as
those evident between the different French social classes in the 1950s. The way
people nourished themselves explains a large part of the differences observed; for
instance, the Tutsis, who raised cattle, traditionally drank more milk
than the Hutu, who were farmers.
Post-Colonial Framework
As Belgium’s era of colonial
dominance over Rwanda drew to a close during the 1950s, Hutu and Tutsi
racial identities had become firmly institutionalized. Manipulative racial
engineering by the Belgians and the despotic practices of the Tutsi chieftains
they empowered helped to drive together the disparate Rwandan sub-classes under
the “Hutu” moniker. When the Belgians finally left Rwanda in the early
1960s, the politics of racial and ethnic division remained. In the decades that
followed, regimes under both Hutu ultra-nationalists and moderate conciliators
would demonstrate how the labels of Hutu and Tutsi could be molded to fit political expediency.
37.4.3: 100 Days of Violence
The Rwandan genocide was a
mass slaughter of Tutsi people in Rwanda by members of the Hutu majority
government.
Learning Objective
Recall the key events of the
100 Days of Violence
Key Points
- The army began
training Hutu youth in combat and arming civilians in 1990 as part of an official
program of civil defense against the Rwandan Patriotic Front (RPF).
- In March 1993,
Hutu Power groups began compiling lists of “traitors” whom they planned to kill,
possibly including President Juvenal Habyarimana.
- In October 1993,
the President of Burundi, Melchior Ndadaye, who had been elected in June as the
country’s first ever Hutu president, was assassinated by extremist Tutsi army
officers.
- On January 11, 1994, General Romeo Dallaire, commander of United Nations Assistance Mission for Rwanda (UNAMIR), sent the infamous “Genocide Fax” to UN Headquarters, stating that an informant told him of plans to distribute weapons to Hutu militias to kill Belgian members of UNAMIR and guarantee Belgian withdrawal from the country.
- On April 6,
1994, the airplane carrying President Habyarimana and Cyprien Ntaryamira, the
Hutu president of Burundi, was shot down as it prepared to land in Kigali,
killing everyone on board.
- Following Habyarimana’s
death, a crisis committee was formed, which would remain the de facto source of
power in the country as well as one of the driving sources of the genocide.
- Within hours of
Habyarimana’s death, the genocide began. For the remainder of April and early
May, the Presidential Guard, gendarmerie,
and youth militias, aided by local populations, continued killing at very high
rates.
- The RPF made
slow but steady gains in the north and east of the country, ending killings in
each area they occupied.
- At the end of July, Kagame’s RPF forces held the whole of Rwanda, except for the zone in the southwest that was occupied by Operation Turquoise, effectively ending the genocide.
Key Terms
- interahamwe
-
A Hutu paramilitary organization that enjoyed
the backing of the Hutu-led government leading up to and during the Rwandan
genocide. Since the genocide, they have been driven out of Rwanda, mainly to
Zaire (present-day Democratic Republic of the Congo).
- UN Charter article 2(4)
-
“All members shall refrain in their
international relations from the threat or use of force against the territorial
integrity or political independence of any state, or in any other manner
inconsistent with the purposes of the United Nations.” Although some
commentators interpret Article 2(4) as banning only the use of force directed
at the territorial integrity or political independence of a state, the more
widely held opinion is that these are merely intensifiers, and that the article
constitutes a general prohibition subject only to the exceptions stated in the
Charter (i.e., self-defense and Chapter VII action by the Security Council).
The Rwandan genocide, also
known as the genocide against the Tutsi, was a mass slaughter of Tutsi people
in Rwanda by members of the Hutu majority government. An estimated 500,000 to
one million Rwandans were killed during the 100-day period from April 7 to
mid-July 1994, constituting as many as 70% of the Tutsi population and 20% of
Rwanda’s overall population.
Prelude
Preparation for Genocide
Historians do not agree on a
precise date on which the idea of a “final solution” to kill
every Tutsi in Rwanda was introduced. The army began training Hutu youth in
combat and arming civilians with weapons such as machetes in 1990, as part of
an official program of civil defense against the Rwandan Patriotic Front (RPF),
which largely consisted of Tutsi refugees whose families had fled to Uganda
after the 1959 Hutu revolt against colonial rule. Rwanda also purchased large
numbers of grenades and munitions starting in late 1990. In one deal, future UN
Secretary-General Boutros Boutros-Ghali, in his role as Egyptian foreign
minister, facilitated a large sale of arms from Egypt. The Rwandan Armed Forces
(FAR) also expanded rapidly during this time, growing from fewer than 10,000
troops to almost 30,000 in one year. New recruits were often poorly
disciplined, however, and a divide grew between them and the more elite,
experienced units.
In March 1993, the Hutu
Power groups began compiling lists of “traitors” whom they planned to kill, and
it is possible that President Juvenal Habyarimana’s name was on these lists.
The far-right Hutu Power political party Coalition for the Defense of the
Republic (CDR) was actively and openly accusing the president of treason, and
many Power groups believed that the national radio station, Radio Rwanda, had become too liberal and supportive of the opposition. In response, they founded a new radio station, Radio Television Libre des Mille Collines (RTLM), which
broadcast racist propaganda, obscene jokes, and music, and quickly became
popular throughout the country. Throughout 1993, hardliners imported machetes
on a scale far larger than required for agriculture, as well as other
tools that could be used as weapons, such as razor blades, saws, and scissors.
These tools were distributed around the country, ostensibly as part of the
civil defense network.
In October 1993, the
President of Burundi, Melchior Ndadaye, who had been elected in June as the
country’s first ever Hutu president, was assassinated by extremist Tutsi army
officers. The assassination caused shock waves throughout the country,
reinforcing the notion among Hutus that the Tutsi were their enemy and could
not be trusted. The CDR and Power wings of other parties quickly realized they
could use the situation to their advantage. The idea of a Tutsi “final
solution”, which had been floating around as a fringe political viewpoint, now
occupied the top of Hutu party agendas and was actively planned. The Hutu Power
groups were confident of persuading the Hutu population to carry out killings
given the public anger at Ndadaye’s murder, the persuasiveness of RTLM
propaganda, and the traditional obedience of Rwandans to authority. Power
leaders began arming the interahamwe
and other militia groups with AK-47s and other weapons, whereas previously they
possessed only machetes and traditional hand weapons.
On January 11, 1994, General
Romeo Dallaire, commander of United Nations Assistance Mission for Rwanda
(UNAMIR) sent the infamous “Genocide Fax” to UN Headquarters. The fax stated
that Dallaire was in contact with a high-level informant who told him of plans
to distribute weapons to Hutu militias to kill Belgian members of
UNAMIR and guarantee Belgian withdrawal from the country. The informant, a local
politician, had been ordered to register all Tutsis in Kigali. Dallaire
requested permission for the protection of his informant and the informant’s
family, but Kofi Annan, then the UN Under-Secretary-General for Peacekeeping Operations, repeatedly forbade any such operations, despite having the authority to approve them, and instructed Dallaire to take no action until guidance was received from headquarters, citing UN Charter article 2(4).
Assassination of Habyarimana
On April 6, 1994, the airplane
carrying President Habyarimana and Cyprien Ntaryamira, the Hutu president of
Burundi, was shot down as it prepared to land in Kigali, killing everyone on
board. Responsibility for the attack was disputed, with both the RPF and Hutu
extremists blamed. A later investigation by the Rwandan government blamed
Hutu extremists in the Rwandan army. Despite disagreements about the
perpetrators, the attack and deaths of the two Hutu presidents served as the
catalyst for the genocide.
Following Habyarimana’s
death, on the evening of April 6, a crisis committee was formed of
Major General Augustin Ndindiliyimana, Colonel Theoneste Bagosora, and a number
of other senior army staff officers. The committee was headed by Bagosora,
despite the presence of the more senior Ndindiliyimana. Prime Minister Agathe
Uwilingiyimana was legally next in the line of political succession, but the
committee refused to recognize her authority. Dallaire met with the committee
that night and insisted that Uwilingiyimana be placed in charge, but Bagosora
refused, saying Uwilingiyimana did not “enjoy the confidence of the Rwandan
people” and was “incapable of governing the nation”. Bagosora sought to
convince UNAMIR and the RPF that the committee was acting to contain the
Presidential Guard, which he described as “out of control,” and that it would
abide by the Arusha agreement, which had ended the three-year Rwandan civil
war.
Killings of Moderate Leaders
UNAMIR sent an escort of ten
Belgian soldiers to bring Prime Minister Uwilingiyimana to the Radio Rwanda offices to address the nation. The plan
was cancelled, however, because the Presidential Guard took over the radio
station shortly afterwards and would not permit Uwilingiyimana to speak on air.
Later that morning, a number of soldiers and a crowd of civilians overwhelmed
the Belgians guarding Uwilingiyimana, forcing them to surrender their weapons.
Uwilingiyimana and her husband were killed, but their children survived by
hiding behind furniture and were rescued by Senegalese UNAMIR officer Mbaye
Diagne. The ten Belgian soldiers were taken to the Camp Kigali military base
where they were tortured and killed.
In addition to assassinating
Uwilingiyimana, the extremists spent the night of April 6 moving around Kigali with lists of prominent moderate politicians and journalists, seeking to kill them. Fatalities that evening included President of the
Constitutional Court Joseph Kavaruganda, Minister of Agriculture Frederic
Nzamurambaho, Parti Liberal leader Landwald Ndasingwa and his Canadian wife,
and chief Arusha negotiator Boniface Ngulinzira. A few moderates survived,
including prime minister-delegate Faustin Twagiramungu, but the plot was
successful enough that by the morning of April 7, all moderate politicians and
leaders were either dead or in hiding.
Genocide
The genocide itself began
within a few hours of Habyarimana’s death. Military leaders in Gisenyi
province were initially the most organized, convening a large number of interahamwe and civilian Hutu. The
commanders announced the president’s death, blamed the RPF, and then ordered
the crowd to begin killing. The genocide spread to Ruhengeri, Kibuye, Kigali,
Kibungo, Gikongoro, and Cyangugu provinces on April 7. In each case, local
officials, responding to orders from Kigali, spread rumors that the RPF had
killed the president and commanded the population to kill Tutsi in retribution.
The Hutu population, which had been prepared and armed during the preceding
months, carried out the orders without question. There were few killings in
Gitarama and Butare provinces during the early phases of the genocide, due to
the moderation of their governors. Killings began in earnest in
Gitarama on April 9 and in Butare on April 19, following the arrest and murder
of Tutsi governor Jean Baptiste Habyarimana. The genocide did not affect areas
already under RPF control, including parts of Byumba province and eastern
Ruhengeri.
For the remainder of April
and early May, the Presidential Guard, gendarmerie,
and youth militias, aided by local populations, continued killing at very high
rates. Historian Gerard Prunier estimates in his book The Rwanda Crisis that up to 800,000 Rwandans were murdered during
the first six weeks of the genocide, which represents a rate of killing five
times higher than during the Nazi Holocaust. The goal of the
genocide was to kill every Tutsi living in Rwanda, and with the exception of
the advancing RPF army, there was no opposition force to prevent or slow the
killings. Domestic opposition had already been eliminated and UNAMIR was expressly forbidden to use force except in self-defense. In rural areas, where
Tutsi and Hutu lived side-by-side and families knew each other, it was easy for
Hutu to identify and target their Tutsi neighbors. In urban areas, where
residents were more anonymous, identification was facilitated using road blocks
manned by the military and interahamwe.
Each person who encountered a road block was required to show their national
identity card, which included ethnicity, and anyone carrying a Tutsi card was
slaughtered immediately. Many Hutu were also killed for a variety of reasons,
including demonstrating sympathy for moderate opposition parties, being a
journalist, or simply appearing Tutsi.
The RPF made slow and steady
gains in the north and east of the country, ending killings in each area they
occupied. The genocide was effectively ended in April in areas of Ruhengeri,
Byumba, Kibungo, and Kigali provinces. The killings also ceased during April in western Ruhengeri and Gisenyi because almost every Tutsi had
been eliminated. Large numbers of Hutu in RPF-conquered areas fled, fearing
retribution killings. Half a million Kibungo residents fled over the bridge at
Rusumo Falls into Tanzania at the end of April and were accommodated in UN
camps effectively controlled by ousted leaders of the Hutu regime.
In the remaining provinces,
killings continued throughout May and June, although they became increasingly
sporadic. Most Tutsi were already eliminated and the interim government hoped
to rein in the growing anarchy and engage the population in fighting the
encroaching RPF. On June 23, approximately 2,500 soldiers entered southwestern
Rwanda as part of the French-led UN Operation Turquoise, intended as
a humanitarian mission, although the soldiers were unable to save significant numbers of
lives. The genocidal authorities were overtly welcoming of the
French, displaying the French flag on their own vehicles, but slaughtering
Tutsi who came out of hiding seeking protection.
Planning and Organization
The crisis committee, headed
by Bagosora, took power following Habyarimana’s death and was the principal
authority coordinating the genocide. Bagosora immediately began issuing orders
to kill Tutsi, addressing groups of interahamwe
in person in Kigali and making telephone calls to leaders in the provinces.
Other leading national organizers included defense minister Augustin
Bizimana; commander of the paratroopers, Aloys Ntabakuze; and head of the
Presidential Guard, Protais Mpiranya. Businessman Felicien Kabuga funded the
RTLM and the interahamwe, while
Pascal Musabe and Joseph Nzirorera were responsible for coordinating militia
activities nationally. In Kigali, the genocide was led by the Presidential
Guard. They were assisted by militias, who in turn set up road blocks
throughout the capital. Militias also initiated house searches within the city,
slaughtering Tutsi and looting their property. Kigali governor Tharcisse
Renzaho played a leading role, touring road blocks to ensure their effectiveness
and using his position at the top of the Kigali provincial government to
disseminate orders and dismiss officials who were not sufficiently active in
perpetuating murder.
In rural areas, the local
government hierarchy was also in most cases the chain of command for execution
of the genocide. The governor of each province, acting on orders from Kigali,
disseminated instructions to the district leaders who in turn issued directions
to the leaders of the sectors, cells, and villages of their districts. The majority
of actual killings in the countryside were carried out by ordinary civilians
under orders from their leaders. A combination of historical Hutu repression by
the Tutsi minority, a culture of obedience to authority, and duress due to the
belief that lack of participation would lead to violent retribution, all
contributed to the willingness of ordinary citizens to commit violent acts
against their neighbors.
The crisis committee
appointed an interim government on April 8. Using the terms of the 1991
constitution instead of the Arusha Accords, the committee designated Theodore
Sindikubwabo as interim president and Jean Kambanda as the new prime minister.
All political parties were represented in the government, but most members were
from the Hutu Power wings of their respective parties. The interim government
was sworn in on April 9, and immediately relocated their headquarters from
Kigali to Gitarama in order to avoid fighting between the RPF and Rwandan army
in the capital. The crisis committee was officially dissolved, but Bagosora and
some senior officers remained de facto rulers of the country. The government
played some part in mobilizing the population, providing the regime an air of
legitimacy, but it was in reality a puppet regime with no ability to halt the
army or interahamwe’s activities.
Impact
Given the chaotic nature of
the situation, there is no consensus on the number of people killed during the
genocide. Unlike the genocides carried out by Nazi Germany or the Khmer Rouge
in Cambodia, authorities made no attempts to document or systematize deaths.
The succeeding RPF government has stated that 1,071,000 were killed in 100 days,
10% of whom were Hutu. Those figures imply that roughly 10,000 people were murdered every day, or about 400 people per hour and seven people every minute. The journalist Philip Gourevitch agrees with an estimate
of one million, while the UN estimates the death toll to be 800,000. It is
estimated that approximately 300,000 Tutsi survived the genocide. Thousands of
widows, many of whom were subjected to rape, are now HIV-positive. The genocide
also created about 400,000 orphans, and nearly 85,000 of them were forced to
become heads of households.
Rwandan Patriotic Front
Military Campaign and Victory
On April 7, as the genocide
began, RPF commander Paul Kagame warned the crisis committee and UNAMIR that he
would resume the civil war if the killing did not stop. The next day, Rwandan
government forces attacked the national parliament building from several
directions, but RPF troops stationed there successfully fought back. The RPF
then began an attack from the north on three fronts, seeking to link up quickly
with the isolated troops in Kigali. Kagame refused to talk to the interim
government, believing that it was just a cover for Bagosora’s rule and not truly
committed to ending the genocide. Over the next few days, the RPF advanced
steadily south, capturing Gabiro and large areas of the countryside to the
north and east of Kigali. They avoided attacking Kigali or Byumba, but
conducted maneuvers designed to encircle the cities and cut off supply routes.
The RPF also allowed Tutsi refugees from Uganda to settle behind the front line
in RPF-controlled areas.
Throughout April, there were
numerous attempts by UNAMIR to establish a ceasefire, but Kagame insisted each
time that the RPF would not stop fighting unless the killings stopped. In late
April, the RPF secured the whole of the Tanzanian border area and began to move
west from Kibungo to the south of Kigali. They encountered little resistance,
except around Kigali and Ruhengeri. By May 16, they cut the road between
Kigali and Gitarama, the temporary home of the interim government, and by June
13, they had taken Gitarama itself following an unsuccessful attempt by the
Rwandan government forces to reopen the road. Subsequently, the interim
government was forced to relocate to Gisenyi in the far northwest. As well as
fighting the war, Kagame was recruiting heavily to expand the RPF. The new
recruits included Tutsi survivors of the genocide and refugees from Burundi,
but they were less well-trained and disciplined than earlier recruits.
Having completed the encirclement of Kigali, the RPF spent the latter half of June fighting for the city itself. The government forces had superior
manpower and weapons, but the RPF steadily gained in territory while conducting
raids to rescue civilians behind enemy lines. Kagame was able to exploit the
government forces’ focus on the genocide and translate that into RPF wins in
the battle for Kigali. The RPF also benefited from the government’s waning
morale as it lost territory. The RPF finally defeated Rwandan government forces
in Kigali on July 4, and on July 18, they took Gisenyi and the rest of the
northwest, forcing the interim government into Zaire, ending the genocide. At
the end of July 1994, Kagame’s forces held the whole of Rwanda, except for the
zone in the southwest occupied by the French-led UN force, Operation
Turquoise.
37.4.4: Aftermath and Reconciliation in Rwanda
Rwandans had recourse to international and
community justice in the aftermath of the genocide.
Learning Objective
Evaluate the methods used to encourage
reconciliation after the genocide
Key Points
- The systematic
destruction of the judicial system during the genocide and civil war was a
major problem for the prospects of reconciliation in Rwanda.
- It was not
until 1996 that Rwandan courts finally began trials for genocide cases with the
enactment of Organic Law N° 08/96 of August 30, 1996.
- In response to
the overwhelming number of potentially culpable individuals and the slow pace
of the traditional judicial system, the government of Rwanda passed Organic Law
N° 40/2000 in 2001, establishing Gacaca Courts at all administrative levels.
- The Gacaca
court system traditionally dealt with conflicts within communities, but it was
adapted to deal with genocide crimes.
- The International
Criminal Tribunal for Rwanda (ICTR) had jurisdiction over high-level members of
the government and armed forces, while the government of Rwanda was responsible
for prosecuting lower-level leaders and local people.
- Following the
RPF victory, approximately two million Hutu fled to refugee camps in neighboring
countries, particularly Zaire, fearing RPF reprisals for the Rwandan Genocide.
- Refugee camps
were set up by the United Nations High Commissioner for Refugees (UNHCR), but
were effectively controlled by the army and government of the former Hutu
regime, who began rearming in a bid to return to power in Rwanda.
- In addition to
dismantling the refugee camps, Kagame began planning a war to remove the long-time dictator of Zaire, who had supported the genocidaires based in the camps and was accused of allowing
attacks on Tutsi people within Zaire.
Key Term
- Gacaca
-
Loosely translated to “justice among the grass,”
a system of community justice inspired by Rwandan tradition. It was adapted in 2001 to fit the needs of Rwanda in
the wake of the 1994 genocide.
Domestic Situation
The infrastructure and
economy of Rwanda suffered greatly during the genocide. Many buildings were
uninhabitable, and the former regime had taken all currency and
movable assets when they fled the country. Human resources were also
severely depleted, with over 40% of the population having been killed or fled.
Many of the remainder were traumatized: most had lost relatives, witnessed
killings, or participated in the genocide. The long-term effects of war rape in
Rwanda for the victims include social isolation, sexually transmitted diseases, and unwanted pregnancies and babies, with some women resorting to self-induced
abortions. The army, led by Paul Kagame, maintained law and order while the
government began the work of rebuilding the country’s structures.
Non-governmental organizations
began to move back into the country, but the international community did not
provide significant assistance to the new regime, and most international aid
was routed to the refugee camps formed in Zaire following the exodus
of Hutu from Rwanda. Kagame strove
to portray the government as inclusive and not Tutsi-dominated. He directed the
removal of ethnicity from citizens’ national identity cards, and the government
began a policy of downplaying the distinctions among Hutu, Tutsi, and Twa.
During the genocide and in
the months following the RPF victory, RPF soldiers killed many people they
accused of participating in or supporting the genocide. Many of these soldiers
were recent Tutsi recruits from within Rwanda who had lost family or friends
and sought revenge. The scale, scope, and source of ultimate responsibility of
these reprisal killings is disputed, although some non-governmental
organizations such as Human Rights Watch alleged that Kagame and the RPF elite
either tolerated or organized the killings. In an interview with journalist
Stephen Kinzer, Kagame acknowledged that killings had occurred, but stated that
they were carried out by rogue soldiers and had been impossible to control.
July 4, 1994, is marked as Liberation
Day in Rwanda and commemorated as a public holiday. The RPF has been the
dominant political party in the country since 1994 and has maintained control
of the presidency and the Parliament in national elections, with the party’s
vote share consistently exceeding 70%. The RPF is seen as a Tutsi-dominated
party but receives support from across ethnic sub-groups. It is credited with
ensuring continued peace, stability, and economic growth; however, some human
rights organizations, such as Freedom House and Amnesty International, claim
that the government suppresses the freedoms of opposition groups.
Justice System
The systematic destruction
of the judicial system during the genocide and civil war was a major problem
for the prospects of reconciliation in Rwanda. After the genocide, over one
million people were potentially culpable for their roles in the genocide, amounting
to nearly one-fifth of the population remaining after the summer of 1994. The
RPF pursued a policy of mass arrests for the genocide, jailing over 100,000 in
the two years after the genocide. The pace of arrests overwhelmed the physical
capacity of the Rwandan prison system, leading to what Amnesty International
deemed “cruel, inhuman, or degrading treatment”. The country’s 19 prisons were
designed to hold about 18,000 inmates, but at their peak in 1998, there were
100,000 people in detention facilities across the country.
Government institutions,
including judicial courts, were destroyed, and many judges, prosecutors, and
employees were murdered. By 1997, Rwanda only had 50 lawyers in its judicial
system. These barriers caused trials of those arrested for
genocide-related crimes to proceed very slowly. Of the 130,000 suspects held in
Rwandan prisons after the genocide, 3,343 cases were handled between
1996 and the end of 2000. Of those defendants, 20% received death sentences,
32% were sentenced to life in prison, and 20% were acquitted. It was
calculated that it would take over 200 years to conduct the trials of the
suspects in prison—not including individuals who remained at large.
It was not until 1996 that
Rwandan courts finally began trials for genocide cases with the enactment of
Organic Law N° 08/96 of August 30, 1996. This law established the
regular domestic courts as the core mechanism for responding to genocide until
it was amended in 2001 to include the Gacaca Courts. The Organic Law
established four categories for those involved in the genocide, specifying the
limits of punishment for members of each category. The first category was
reserved for those who were “planners, organizers, instigators, supervisors and
leaders” of the genocide or who used positions of state authority
to promote the genocide. This category also applied to murderers who
distinguished themselves on the basis of their zeal or cruelty or who engaged
in sexual torture. Members of this first category were eligible for the death
sentence.
While Rwanda had the death
penalty prior to the 1996 Organic Law, no executions had taken
place since 1982. However, following the enactment of the 1996 Organic Law, 22
individuals were executed by firing squad in public executions in April 1997.
After this, Rwanda conducted no further executions, though it did continue to
issue death sentences until 2003. On July 25, 2007, the Organic Law Relating to
the Abolition of the Death Penalty came into law, abolishing the death penalty
and converting all existing death penalty sentences to life in prison under
solitary confinement.
Gacaca Courts
In response to the
overwhelming number of potentially culpable individuals and the slow pace of
the traditional judicial system, the government of Rwanda passed Organic Law N°
40/2000 in 2001. The new law established Gacaca Courts at all
administrative levels of Rwanda and in Kigali. It was mainly created to lessen
the burden on normal courts and accelerate the administration of justice for
those already in prison. The least severe cases, according to the terms
of Organic Law N° 08/96, would be handled by the Gacaca Courts.
With this law, the government began implementing a participatory justice
system, known as Gacaca, to address the enormous backlog of cases.
The Gacaca court system
traditionally dealt with conflicts within communities, but was adapted to
deal with genocide crimes. The following are the objectives of the Gacaca Courts:
- Identifying the truth about what happened during the genocide
- Speeding up genocide trials
- Fighting against a culture of impunity
- Contributing to the national unity and reconciliation process
- Demonstrating the capacity of the Rwandan people to resolve their own problems.
Throughout the years, the
Gacaca court system went through a series of modifications. It is estimated
that it has tried over one million cases to date. Meanwhile,
the UN established the International Criminal Tribunal for Rwanda (ICTR), based
in Arusha, Tanzania. The UN Tribunal had jurisdiction over high-level members
of the government and armed forces, while the government of Rwanda was
responsible for prosecuting lower-level leaders and local people.
Closing of the Courts
On June 18, 2012, the Gacaca
court system was officially closed after facing criticism for favoring members of and parties associated with the RPF-dominated government. Concern persisted that
the judges who presided over the genocide trials were not trained adequately
for serious legal questions or complex proceedings. Further, many judges
resigned after facing accusations of personal participation in the genocide. There was a lack of defense counsel and protections for the
accused, who were denied the right to appeal to ordinary courts. Most trials
were open to the public, but there were issues relating to witness
intimidation.
Since the ICTR was
established as an ad hoc international jurisdiction, the tribunal was
officially closed on December 31, 2015. Initially, the UN Security Council
established the ICTR in 1994 with a mandate of four years without a fixed
deadline. As the years passed, however, it became apparent that the ICTR would
exist long past its original mandate.
Refugees, Insurgency, and
the Congo Wars
Following the RPF victory,
approximately two million Hutu fled to refugee camps in neighboring countries,
particularly Zaire, fearing RPF reprisals for the Rwandan Genocide. Refugee
camps were crowded and squalid, and thousands of refugees died in disease
epidemics, including cholera and dysentery. The camps were set up by the United
Nations High Commissioner for Refugees (UNHCR), but were effectively controlled
by the army and government of the former Hutu regime, including many leaders of
the genocide, who began rearming in a bid to return to power in Rwanda. By late
1996, Hutu militants from the camps were launching regular cross-border
incursions, and the RPF-led Rwandan government launched a counteroffensive.
Rwanda provided troops and military training to the Banyamulenge, a Tutsi group
in the Zairian South Kivu province, helping them to defeat Zairian security
forces. Rwandan forces, the Banyamulenge, and other Zairian Tutsi then
attacked the refugee camps, targeting Hutu militia. These attacks caused
hundreds of thousands of refugees to flee, many returning to Rwanda
despite the presence of the RPF, while others ventured further west into Zaire.
The defeated forces of the
former regime continued a cross-border insurgency campaign, supported initially
by the predominantly Hutu population of Rwanda’s northwestern provinces. By
1999, however, a program of propaganda and Hutu integration into the Rwandan national
army succeeded in bringing the Hutu to the side of the government, and the
insurgency was defeated.
In addition to dismantling
the refugee camps, Kagame began planning a war to remove long-time dictator of
Zaire, President Mobutu Sese Seko, from power. Mobutu supported the genocidaires based in the camps and was
accused of allowing attacks on Tutsi people within Zaire. The Rwandan-backed rebels
quickly took control of North and South Kivu provinces and then advanced west,
gaining territory from the poorly organized and demotivated Zairian army with
little fighting, and controlling the whole country by May 1997. Mobutu
fled into exile and the country was renamed the Democratic Republic of the
Congo (DRC). Rwanda fell out with the new Congolese regime in 1998 and Kagame
supported a fresh rebellion, leading to the Second Congo War. This war lasted
until 2003 and caused millions of deaths and massive damage. A 2010 UN report
accused the Rwandan army of committing widespread human rights violations and
crimes against humanity in the DRC during the two Congo wars, but the charges
were denied by the Rwandan government.
37.4.5: The Lack of International Response
Most international actors during the Rwandan
genocide stood on the sidelines, hoping to avoid their own nationals’ loss of
life and political entanglements.
Learning Objective
Account for the lack of international
intervention during the Rwandan genocide
Key Points
- The United
Nations Assistance Mission for Rwanda (UNAMIR) had been in Rwanda since October
1993, but their mandate was hampered by the UN’s inability to intervene
militarily, President Habyarimana and other Hutu Power hardliners, and the loss
of troops.
- During the first few days of the genocide, France launched Amaryllis, a military operation assisted by the Belgian army and UNAMIR, to evacuate expatriates from Rwanda, but the French and Belgians refused to allow any Tutsi to accompany the evacuations.
- In late June 1994, France launched Opération Turquoise, a UN-mandated mission to create safe humanitarian areas for displaced persons, refugees, and civilians in danger, but as the genocide came to an end and the RPF ascended to a leadership role within the country, many Rwandans interpreted Turquoise as a mission to protect Hutu from the RPF.
- U.S. president
Bill Clinton and his cabinet were aware of a “final solution” for Tutsi people
within Rwanda before the massacre began, but fear of a repeat of the events in
Somalia shaped U.S. failure to intervene.
- Many Catholic and other clergy within Rwanda sacrificed their lives to save others from slaughter; however, there is evidence that others did little to prevent the spread of the genocide, with some even actively participating in crimes.
Key Terms
- Chapter VI mandate
-
The chapter of the United Nations Charter that deals
with peaceful settlement of disputes. It requires countries with disputes that
could lead to war to first seek solutions via peaceful methods. If these
methods of alternative dispute resolution fail, the issue must be referred to
the UN Security Council.
- Françafrique
-
A portmanteau of France and Afrique used to
denote France’s relationship with its former African colonies and sometimes
extended to cover former Belgian colonies as well.
Most of the world stood on
the sidelines during the Rwandan genocide, hoping to avoid the loss of life and
political entanglement that the American debacle in Somalia had created. As
reports of the genocide spread through the media, the Security Council agreed
to supply more than 5,000 troops to Rwanda to combat the genocide. But delays and the denial of key recommendations prevented the force from getting there in a timely fashion; ultimately, it arrived months after the genocide was over.
After the genocide, many government officials in the international community
mourned the loss of thousands of civilians within Rwanda, though they took no action to prevent the slaughter as it was happening.
UNAMIR
The United Nations
Assistance Mission for Rwanda (UNAMIR) had been in Rwanda since October 1993
with a mandate to oversee the implementation of the Arusha Accords following
the Rwandan civil war. UNAMIR commander Romeo Dallaire learned of the Hutu
Power movement during the mission’s deployment, as well as plans for the mass
extermination of Tutsi. Dallaire also learned of growing secret weapons caches,
but his request to raid them was turned down by the UN Department of
Peacekeeping Operations (DPKO). UNAMIR’s effectiveness in peacekeeping was also
hampered by President Habyarimana and Hutu hardliners, and by April 1994, the
Security Council threatened to terminate UNAMIR’s mandate if it did not make
progress with its mission.
Following the death of
Habyarimana and the start of the genocide, Dallaire liaised repeatedly with
both the Crisis Committee and the RPF, attempting to re-establish peace and
prevent the resumption of the civil war. Neither side was interested in a
ceasefire: the government was controlled by backers of the genocide, and the
RPF considered continued fighting necessary to stop the killings. UNAMIR’s
Chapter VI mandate rendered it powerless to intervene militarily, and most of
its Rwandan staff were killed in the early days of the genocide, severely
limiting its ability to operate. On April 12, the Belgian government, one of the largest troop contributors to UNAMIR, lost ten soldiers who were
protecting Prime Minister Uwilingiyimana, and subsequently announced its
withdrawal from the force, reducing UNAMIR’s effectiveness further.
UNAMIR was therefore largely
reduced to a bystander role, and Dallaire later labeled it a failure. Its most
significant contribution was to provide refuge for thousands of Tutsi and
moderate Hutu at its headquarters in Amahoro Stadium as well as other secure
UN sites, and in assisting with the evacuation of foreign nationals. In
mid-May, the UN finally conceded that “acts of genocide may have been
committed” and agreed to reinforcement, which would be referred to as UNAMIR
2. New soldiers did not start arriving until June, however, and following the
end of the genocide in July, the role of UNAMIR 2 was largely confined to
maintaining security and stability until its termination in 1996.
France and Operation
Turquoise
During President
Habyarimana’s years in power, France maintained very close relations with him
as part of its Françafrique policy and assisted Rwanda militarily against the
RPF during the Civil War. France considered the RPF, along with Uganda, to be a
part of a plot to increase Anglophone influence at the expense of French influence. During the first few days of the genocide, France launched Amaryllis,
a military operation assisted by the Belgian army and UNAMIR, to evacuate
expatriates from Rwanda. The French and Belgians refused to allow any Tutsi to
accompany them, and those who boarded the evacuation trucks were forced off at
Rwandan government checkpoints, where they were killed. The French also
separated several expatriates and children from their Tutsi spouses, rescuing
the foreigners but leaving the Rwandans to a likely death. The French did,
however, rescue several high-profile members of Habyarimana’s government, as
well as his wife, Agathe.
In late June 1994, France
launched Opération Turquoise, a UN-mandated mission to create safe humanitarian
areas for displaced persons, refugees, and civilians in danger. The French
entered southwestern Rwanda from bases in the Zairian cities of Goma and Bukavu
and established the zone Turquoise within the Cyangugu-Kibuye-Gikongoro
triangle, an area occupying approximately one-fifth of Rwanda. Radio France
International estimated that Turquoise saved around 15,000 lives, but as the
genocide came to an end and the RPF ascended to a leadership role within the
country, many Rwandans interpreted Turquoise as a mission to protect Hutu from
the RPF, including some Hutu who had participated in the genocide. The French
remained hostile to the RPF and their presence did temporarily stall the RPF’s
advance. A number of inquiries have been made into French involvement in
Rwanda, including the 1998 French Parliamentary Commission on Rwanda, which
accused France of errors of judgement but stopped short of accusing it of
direct responsibility for the genocide itself. A 2008 report by the Rwandan government’s Mucyo Commission, however, did accuse the
French government of knowing about the genocide and helping to train
Hutu militia members.
Other International Actors
Intelligence reports
indicated that U.S. president Bill Clinton and his cabinet were aware of a “final
solution” for Tutsi people within Rwanda before the height of the massacre. However, fear of a repeat of the events in Somalia shaped U.S. policy at
the time, with many commentators identifying the graphic consequences of the
Battle of Mogadishu as the key reason for the U.S. failure to intervene in
later conflicts such as the Rwandan Genocide. After the Battle of Mogadishu,
the bodies of several U.S. casualties were dragged through the streets by crowds
of local civilians and members of Aidid’s Somali National Alliance. As a
result, 80% of the discussion in Washington in the lead-up to the 100 days of
violence in Rwanda concerned the evacuation of American citizens. Later, Bill
Clinton would refer to the failure of the U.S. government to intervene in the
genocide as one of his greatest foreign policy failings while in office.
The Roman Catholic Church affirms that a genocide took place in Rwanda,
but states that those who took part did so without the permission of the
Church. Many Catholic and other clergy sacrificed their lives to save others
from slaughter. However, there is evidence that others contributed to the
mayhem, with some even actively participating in crimes. Though religious
factors were not prominent, Human Rights Watch faulted a number of religious
authorities in Rwanda in a 1999 report on the genocide, including Roman
Catholics, Anglicans, and Protestants, for failing to condemn the genocide.
Some religious authorities were even tried and convicted for their
participation in the genocide by the International Criminal Tribunal for Rwanda
(ICTR). Father Athanase Seromba was sentenced to 15 years imprisonment
(increased on appeal to life imprisonment) by the ICTR for his role in the massacre
of 2,000 Tutsis. The court heard that Seromba lured the Tutsis to a church
where they believed they would find refuge. When they arrived, he ordered
bulldozers to crush the refugees within and Hutu militias to kill any
survivors. Similarly, Bishop Misago was accused of corruption and complicity in
the genocide, but was cleared of all charges in 2000.
37.5: The Yugoslav War
37.5.1: Populations of the Former Yugoslavia
Serbs, Croats, and Bosniaks were the three
largest South Slavic groups that inhabited the Socialist Federal Republic of
Yugoslavia.
Learning Objective
Describe the similarities and differences
between Serbs, Croats, and Bosniaks
Key Points
- Until the 19th
century, the term Bosniak (Bošnjak) referred
to all inhabitants of Bosnia regardless of religious affiliation; over
time, a growing sense of Bosnian nationhood was cherished mainly by Muslim
Bosnians, associating the Bosniak identity with one faith.
- After World War
I, the Kingdom of Serbs, Croats, and Slovenes (later called the Kingdom of
Yugoslavia, or the First Yugoslavia) was formed, recognizing only those three
nationalities in its constitution as Serbian and Croatian nationalists
attempted to absorb Bosniak ethnicities into their
populations.
- Following the liberation
of Yugoslavia, the Communist Party of Yugoslavia reorganized the country into
federal republics: Serbia, Croatia, Bosnia and Herzegovina, Slovenia, Macedonia,
and Montenegro.
- Official state
policy prescribed that Yugoslavia’s peoples were equal groups that would
coexist peacefully within the federation.
- Josip Broz Tito,
the first president of Yugoslavia, expressed his desire for an undivided
Yugoslav ethnicity; however, distinctions among ethnic groups persisted,
reinforced by disparate histories of foreign occupation.
-
In 1964, the Fourth Congress of the Bosnian Party assured Bosniaks the right
to self-determination, prompting the recognition of Bosnian Muslims as a
distinct nation at a meeting of the Bosnian Central Committee in 1968, though
not under the Bosniak or Bosnian name.
Key Term
- South Slavs
-
A subgroup of Slavic peoples
who speak South Slavic languages. They inhabit a contiguous region in the
Balkan Peninsula, southern Pannonian Plain, and eastern Alps, and are
geographically separated from the body of West Slavic and East Slavic people by
the Romanians, Hungarians, and Austrians. The South Slavs include the Bosniaks,
Bulgarians, Croats, Macedonians, Montenegrins, Serbs, and Slovenes.
Following the liberation of Yugoslavia, the
Communist Party of Yugoslavia reorganized the country into federal republics: Serbia,
Croatia, Bosnia and Herzegovina, Slovenia, Macedonia, and Montenegro.
Further, two autonomous provinces were created within the Serbian republic based on the presence of minorities in the region: Vojvodina (inhabited by a Hungarian minority) and Kosovo and Metohija (inhabited by an Albanian minority). This combination of historical and ethnic criteria applied only to Serbia, and not, for example, to Italian-inhabited Istria or Serb-inhabited Krajina. The term “nationality” (narodnost) was used to describe the status of Albanians, Hungarians, and other non-constitutive peoples, distinguishing them from the nations. The word “nation” (nacija, narod) was used to denote the country’s constitutive peoples (konstitutivne nacije), or residents of the federal republics.
Official state policy
prescribed that Yugoslavia’s peoples were equal groups that would coexist
peacefully within the federation. This policy was distilled into a slogan:
“brotherhood and unity” and provided for in the 1974 Yugoslav
constitution.
South Slavs
The concept of Yugoslavia as
a single state for all South Slavic peoples emerged in the late 17th century
and gained prominence through the Illyrian movement of the 19th century. The
name Yugoslavia (sometimes spelled Jugoslavia) is a combination of the Slavic
words jug (south) and sloveni (Slavs). When the term Yugoslav
was first introduced, it was meant to unite a common people of South Slavs.
Josip Broz Tito, the first president of Yugoslavia, expressed his desire for an
undivided Yugoslav ethnicity; however, distinctions among ethnic groups
persisted, reinforced by disparate histories of foreign occupation. As of 1981,
Serbs were the largest ethnic group within Yugoslavia, representing 36.3% of the population. Croats were the second largest group, at 19.7% of the population, and Muslims, or Bosniaks, comprised 8.9% of the population.
Bosniaks
Until the 19th century, the term Bosniak (Bošnjak) referred to all inhabitants of Bosnia regardless of religious affiliation. Terms such as
“Boşnak milleti”, “Boşnak kavmi”, and “Boşnak
taifesi” (all meaning, roughly, “the Bosnian people”) were used
in the Ottoman Empire to describe Bosnians in an ethnic or tribal sense. After
the Austro-Hungarian occupation of Bosnia and Herzegovina in 1878, the Austrian
administration officially endorsed Bošnjaštvo
(“Bosniakhood”) as the basis of a multi-confessional Bosnian nation. The policy
aspired to isolate Bosnia and Herzegovina from its irredentist neighbors
(Orthodox Serbia, Catholic Croatia, and the Muslims of the Ottoman Empire) and
to negate the concept of Croatian and Serbian nationhood, which had already
begun to take ground among Bosnia and Herzegovina’s Catholic and Orthodox
communities, respectively. Nevertheless, a sense of Bosnian nationhood was
cherished mainly by Muslim Bosnians, but fiercely opposed by nationalists from
Serbia and Croatia, who instead sought to claim the Bosnian Muslim
population as their own. After World War I, the Kingdom of Serbs, Croats, and
Slovenes (later called the Kingdom of Yugoslavia) was formed and recognized
only those three nationalities in its constitution.
After World War II, in the
Socialist Federal Republic of Yugoslavia, Bosnian Muslims continued to be
treated as a religious group instead of an ethnic one. In the 1948 census,
Bosnia and Herzegovina’s Muslims had three options for self-identification:
Serb-Muslim, Croat-Muslim, or ethnically undeclared Muslim. In the 1953 census,
the category “Yugoslav, ethnically undeclared” was introduced, and the
overwhelming majority of those who declared themselves as such were Muslim.
Bosniaks were recognized as an ethnic group in 1961, but not as a nationality. Nevertheless,
many Bosniak communist intellectuals argued that the Muslims of Bosnia and
Herzegovina were in fact a distinct native Slavic people that should be
recognized as a nation.
In 1964, the Fourth Congress of the Bosnian Party assured Bosniaks the right
to self-determination, prompting the recognition of Bosnian Muslims as a
distinct nation at a meeting of the Bosnian Central Committee in 1968, though
not under the Bosniak or the Bosnian name. As a compromise, the Constitution of
Yugoslavia was amended to list “Muslims” in a national sense, recognizing
a constitutive nation but not the Bosniak name. The use of “Muslim” as an
ethnic denomination was criticized early on, however. In practice, capitalization distinguished the two senses: “musliman” denoted a practicing Muslim, while “Musliman” denoted a member of the Muslim nation (Serbo-Croatian capitalizes the names of peoples but not the names of religious adherents).
37.5.2: NATO and UN Intervention
Although NATO and UN intervention into the Bosnian conflict was
significant, its outcomes were often controversial.
Learning Objective
Assess the successes and limitations of NATO and UN interventions in the
Bosnian War
Key Term
- Vance-Owen Peace Plan
-
A peace
proposal negotiated with the leaders of Bosnia’s warring factions by UN Special
Envoy Cyrus Vance and EC representative Lord Owen. This plan involved the
division of Bosnia into ten semi-autonomous regions.
The United Nations and Bosnia
The UN repeatedly, but
unsuccessfully, attempted to stop the Bosnian War, and the much-touted
Vance-Owen Peace Plan in the first half of 1993 made little impact. On February
22, 1993, the United Nations Security Council passed Resolution 808, which
decided “that an international tribunal shall be established for the
prosecution of persons responsible for serious violations of international
humanitarian law.” On May 15-16, 96% of Serbs voted to reject the
Vance-Owen peace plan. After the failure of the plan, an armed conflict sprang
up between Bosniaks and Croats over the 30% of Bosnia the latter held. The
peace plan was one of the factors leading to the escalation of the conflict as
Lord Owen avoided moderate Croat authorities (pro-unified Bosnia) and
negotiated directly with more extreme elements who were in favor of
separation.
On May 25, 1993, the
International Criminal Tribunal for the former Yugoslavia (ICTY) was formally
established by Resolution 827 of the United Nations Security Council. In April
1993, the United Nations Security Council issued Resolution 816, calling on
member states to enforce a no-fly zone over Bosnia-Herzegovina. On April 12,
1993, NATO commenced Operation Deny Flight to enforce this no-fly zone. In an
attempt to protect civilians, the United Nations Protection Force (UNPROFOR),
which had been established during the Croatian War of Independence, saw its
role further extended in May 1993 to protect areas declared as “safe
havens” around Sarajevo, Goražde, Srebrenica, Tuzla, Žepa, and Bihać by
Resolution 824. On June 4, 1993, the United Nations Security Council passed
Resolution 836, authorizing the use of force by UNPROFOR for the purpose of
protecting the above-named safe zones.
United Nations Safe Zones
The establishment of the UN
Safe Areas is considered one of the most controversial decisions of the United
Nations. The resolutions establishing the safe areas were unclear about the
procedure by which these areas were to be protected in the war zone that Bosnia
and Herzegovina had become. The resolutions also created a difficult diplomatic
situation for member states that voted in favor of it due to their
unwillingness to take necessary steps to ensure the security of the safe areas.
In 1995, the situation in the UN Safe Areas had deteriorated to the point of diplomatic
crisis, culminating in the Srebrenica massacre, one of the worst atrocities in
Europe since World War II. By the end of the war, every one of the Safe Areas
had been attacked by the Serbs, and Srebrenica and Žepa were overrun.
Srebrenica
From the outset, violations
of the safe area agreement in Srebrenica were abundant. Between 1,000 and 2,000
soldiers from three of the Army of Republika Srpska (VRS) Drina Corps Brigades
were deployed around the enclave, equipped with tanks, armored vehicles,
artillery, and mortars. The 28th Mountain Division of the Army of the Republic
of Bosnia and Herzegovina (ARBiH) that remained in the enclave was neither well-organized nor well-equipped. A firm command structure and communications system were lacking, and some soldiers carried old hunting rifles or no weapons at all.
Few had proper uniforms. Lieutenant-Colonel Thomas Karremans (the Dutchbat
Commander with UNPROFOR) testified to the ICTY that his personnel were
prevented from returning to the enclave by Serb forces and that equipment and
ammunition were also barred. Bosniaks in Srebrenica
complained of attacks by Serb soldiers, while to the Serbs it appeared that
Bosnian government forces in Srebrenica were using the safe area as a
convenient base from which to launch counter-offensives against the VRS, with UNPROFOR
failing to take any preventive action. General Sefer Halilović admitted that
ARBiH helicopters had flown in violation of the no-fly zone and that he had
personally dispatched eight helicopters with ammunition for the 28th Division
within the enclave.
A Security Council mission
led by Diego Arria arrived in Srebrenica on April 25, 1993, and in their
subsequent report to the UN, condemned the Serbs for perpetrating “a slow-motion
process of genocide.” The mission then stated that:
“Serb forces must
withdraw to points from which they cannot attack, harass or terrorise the town.
UNPROFOR should be in a position to determine the related parameters. The
mission believes, as does UNPROFOR, that the actual 4.5 km by 0.5 km decided as
a safe area should be greatly expanded.”
Specific instructions from UN
Headquarters in New York stated that UNPROFOR should not be too zealous in
searching for Bosniak weapons and later, that the Serbs should withdraw their
heavy weapons before the Bosniaks gave up their weapons. The Serbs never did
withdraw their heavy weapons.
By early 1995, fewer and
fewer supply convoys were making it through to the enclave. The situation in
Srebrenica and in other enclaves had deteriorated into lawless violence as
prostitution among young Muslim girls, theft, and black marketeering
proliferated. The already meager resources of the civilian population dwindled
further, and even the UN forces started running dangerously low on food, medicine, ammunition, and fuel, and were eventually forced to patrol the enclave on foot. Dutchbat soldiers who went out of the area on leave were not
allowed to return, and their numbers dropped from 600 to 400 men. In March and
April, the Dutch soldiers noticed a build-up of Serb forces near two of their
observation posts.
In March 1995, Radovan
Karadžić, President of the Republika Srpska (RS), despite pressure from the
international community to end the war and ongoing efforts to negotiate a peace
agreement, issued a directive to the VRS concerning the long-term strategy of
the VRS forces in the enclave. The directive, known as “Directive 7”,
specified that the VRS was to completely separate Srebrenica from Žepa and make
the situation within Srebrenica enclave unbearable by combat means, with the
aim of ending the life of all Srebrenica’s inhabitants. By mid-1995, the
humanitarian situation of the Bosniak civilians and military personnel in the
enclave was catastrophic. In May, following orders, ARBiH Commander Naser Orić
and his staff left the enclave by helicopter to Tuzla, leaving senior officers
in command of the 28th Division. In late June and early July, the 28th Division
issued a series of reports, including urgent pleas for the humanitarian
corridor to the enclave to be reopened. When this failed, Bosniak civilians
began dying from starvation. On July 7, the mayor of Srebrenica reported that
eight residents had died of starvation.
The Serb offensive against
Srebrenica began in earnest the day before, on July 6, 1995. In the following
days, the five UNPROFOR observation posts in the southern part of the enclave
fell one by one in the face of the Bosnian-Serb advance. Some of the Dutch
soldiers retreated into the enclave after their posts were attacked, but the
crews of the other observation posts surrendered into Serb custody.
Simultaneously, the defending Bosnian forces came under heavy fire and were
pushed back towards the town. Once the southern perimeter began to collapse,
about 4,000 Bosniak residents who had been living in a Swedish housing complex
for refugees nearby fled north into the town of Srebrenica. Dutch soldiers
reported that the advancing Serbs were “cleansing” the houses in the
southern part of the enclave.
Late on July 9, 1995,
emboldened by early successes and little resistance from the largely demilitarized
Bosniaks and the absence of any significant reaction from the
international community, Karadžić issued a new order authorizing the
1,500-strong VRS Drina Corps to capture the town of Srebrenica. The following
morning (July 10), Lieutenant-Colonel Karremans made urgent requests for air
support from NATO to defend Srebrenica as crowds filled the streets, some carrying
weapons. VRS tanks were approaching the town, and NATO airstrikes on these
began on the afternoon of July 11, 1995. NATO bombers attempted to attack VRS
artillery locations outside the town, but poor visibility forced NATO to cancel
this operation. Further NATO air attacks were cancelled after the VRS threatened
to bomb the UN’s Potočari compound, kill Dutch and French military hostages,
and attack surrounding locations where 20,000 to 30,000 civilian refugees were
situated. In the days that followed, more than 8,000 Muslim Bosniaks, mainly
men and boys, would be killed by units of the VRS under the command of General
Ratko Mladić.
NATO Military Involvement
NATO became militarily
involved in the conflict on February 28, 1994, when its jets shot down four Serb aircraft that were violating the UN no-fly zone over central Bosnia. On March 12, 1994, UNPROFOR made its first request for NATO air support, but close air support
was not deployed as the approval
process was delayed. On April 10-11, 1994, UNPROFOR called in NATO air strikes to protect
the Goražde safe area, resulting in the bombing of a Serbian military command
outpost near Goražde by two U.S. F-16 jets. This was the first time in NATO’s
history that it had participated in this type of military maneuver. As a
result, 150 UN personnel were taken hostage on April 14, and on April 16, a
British Sea Harrier was shot down over Goražde by Serb forces.
On August 5, at the request
of UNPROFOR, NATO aircraft attacked a target within the Sarajevo Exclusion Zone
after weapons were seized by Bosnian Serbs from a collection site near
Sarajevo. On September 22, 1994, NATO aircraft carried out an air strike
against a Bosnian Serb tank at the request of UNPROFOR.
Operation Deliberate Force
Operation Deliberate Force was a sustained air campaign conducted by NATO
in concert with UNPROFOR ground operations to undermine the military capability
of the VRS, which had threatened and attacked UN-designated safe areas in
Bosnia and Herzegovina during the Bosnian War. Events such as the Srebrenica
and Markale massacres precipitated intervention. The operation was carried out
between August 30 and September 20, 1995, involving 400 aircraft and
5,000 personnel from 15 nations. Commanded by Admiral Leighton W. Smith, the
campaign struck 338 Bosnian Serb targets, many of which were destroyed.
Overall, 1,026 bombs were dropped during the operation, 708 of which were
precision-guided. The air campaign was key in pressuring the Federal Republic
of Yugoslavia to take part in negotiations that resulted in the Dayton
Agreement reached in November 1995.
37.5.3: The Bosnian War
The Bosnian War was an
international armed conflict that took place between the Republic of Bosnia and
Herzegovina and Bosnian Serb and Bosnian Croat entities within Bosnia and
Herzegovina, Republika Srpska, and Herzeg-Bosnia.
Learning Objective
Explain the events of the Bosnian War
Key Points
- Following Slovenian
and Croatian secession from the Socialist Federal Republic in 1991, the
multi-ethnic Republic of Bosnia and Herzegovina passed a referendum for
independence on February 29, 1992.
- On March 18,
1992, representatives from the three major ethnic groups signed the Lisbon
Agreement, agreeing to an ethnic division of Bosnia: Alija Izetbegović for the
Bosniaks, Radovan Karadžić for the Serbs, and Mate Boban for the Croats.
However, on March 28, 1992, Izetbegović withdrew his signature and declared his
opposition to any such division of the country.
- Serb forces
attacked Bosnian Muslim civilian populations, following the same pattern once
areas were under their control: houses and apartments were systematically
ransacked or burnt down, civilians were rounded up or captured, and many were
beaten or killed in the process.
- A number of
genocidal massacres perpetrated against the Bosniak population were reported
during the war, including Srebrenica, Bijeljina, Tuzla, and two incidents at Markale.
-
The Siege of
Sarajevo started in early April 1992 and lasted 44 months, with suffering inflicted on the largely Bosniak civilian population to force Bosnian
authorities to accept Serb demands.
- The Graz
agreement was signed between Bosnian-Serb and Bosnian-Croat leaders in early
May 1992, causing deep divisions within the Croat community and strengthening Croat
separatist factions, which led to their conflict with the Bosniaks.
-
Numerous
ceasefire agreements were signed and breached as advantages were gained and
lost across sides. The UN repeatedly attempted to stop the war, but the
much-touted Vance-Owen Peace Plan made little impact.
- On May 25, 1993,
the International Criminal Tribunal for the former Yugoslavia (ICTY) was
formally established by Resolution 827 of the United Nations Security Council.
- The
Croat-Bosniak war officially ended on February 23, 1994, when the commander of the
Croat Defense Council (HVO), General Ante Roso, and commander of the Bosnian
Army, General Rasim Delić, signed a ceasefire agreement in Zagreb, leading to
the Washington Agreement being finalized shortly thereafter.
-
On September 26, 1995, an agreement of further basic principles for a
peace accord was reached in New York City between the foreign ministers of
Bosnia and Herzegovina, Croatia, and the Former Republic of Yugoslavia. A
60-day ceasefire came into effect on October 12, and on November 1, peace talks
began in Dayton, Ohio.
Key Terms
- Vance-Owen peace plan
-
A peace proposal negotiated between the leaders
of Bosnia’s warring factions in early January 1993, facilitated by UN Special
Envoy Cyrus Vance and European Community representative Lord Owen. The proposal
involved the division of Bosnia into ten semi-autonomous regions and received
the backing of the UN.
- Split Agreement
-
The Split Agreement was a mutual defense
agreement between Croatia, the Republic of Bosnia and Herzegovina, and the
Federation of Bosnia and Herzegovina, signed in Split, Croatia, on July 22,
1995. It called on the Croatian Army to intervene militarily in Bosnia and
Herzegovina.
The Bosnian War was an
international armed conflict that took place in Bosnia and Herzegovina between
1992 and 1995. Following a number of violent incidents in early 1992, the war
started in earnest on April 6, 1992, and ended on December
14, 1995. The main belligerents were the forces of the Republic of Bosnia and
Herzegovina and those of the self-proclaimed Bosnian Serb and Bosnian Croat
entities within Bosnia and Herzegovina, Republika Srpska, and Herzeg-Bosnia,
who were led and supplied by Serbia and Croatia respectively. The war was part
of the dissolution of the Socialist Federal Republic of Yugoslavia.
Following the Slovenian and
Croatian secession from the Socialist Federal Republic in 1991, the
multi-ethnic Socialist Republic of Bosnia and Herzegovina – which was inhabited
by mainly Muslim Bosniaks (44%), as well as Orthodox Serbs (32.5%) and Catholic
Croats (17%) – passed a referendum for independence on February 29, 1992. The
turnout to the referendum was reported as 63.7%, with 92.7% in
favor of independence (implying that Bosnian Serbs, who made up roughly a third of the population, largely boycotted the referendum). Independence was
formally declared by the Bosnian parliament on March 3, 1992. On March 18,
1992, representatives from the three major ethnic groups signed the Lisbon
Agreement: Alija Izetbegović for the Bosniaks, Radovan Karadžić for the Serbs,
and Mate Boban for the Croats. However, on March 28, 1992, after meeting with then-U.S. ambassador to Yugoslavia Warren Zimmermann in Sarajevo, Izetbegović withdrew his signature and declared his opposition to any type of
ethnic division of Bosnia.
In late March 1992, fighting between Serbs and combined Croat and Bosniak forces in and near
Bosanski Brod resulted in the killing of Serb villagers in Sijekovac. Serb
paramilitaries committed the Bijeljina massacre on April 1-2, 1992; most of its victims were Bosniaks.
Course of the War
At the outset of the Bosnian
war, Serb forces attacked the Bosnian Muslim civilian population in eastern
Bosnia. Once towns and villages were securely in their hands, the Serb forces,
including military, police, paramilitaries, and sometimes even Serb villagers,
followed the same pattern: houses and apartments were systematically ransacked
or burnt down, civilians were rounded up or captured, and many were beaten or
killed in the process. Men and women were separated when captured, with many
men massacred or detained in camps. Women and children were kept in
detention centers that were intolerably unhygienic. Many were
mistreated and raped repeatedly. The Serbs had the upper hand due to their
possession of heavier weaponry (although they had less manpower than the
Bosnians). They were supplied by the Yugoslav People’s Army and usually
established control over areas where Serbs were already in the majority.
The Siege of Sarajevo
started in early April 1992. The capital Sarajevo was mostly held by Bosniaks. In
the 44 months of the siege, terror against Sarajevo residents varied in
intensity, but the purpose remained the same: inflict suffering on civilians to
force the Bosnian authorities to accept Serb demands. The Army of Republika
Srpska (VRS) surrounded the city for nearly four years, deploying troops and
artillery in the surrounding hills in what would become the longest siege in
the history of modern warfare.
The Graz agreement was
signed between the Bosnian-Serb and Bosnian-Croat leaders in early May 1992,
causing deep divisions within the Croat community and strengthening separatist
factions, which led to conflict with the Bosniaks. One of the primary pro-union
Croat leaders was Blaž Kraljević, leader of the Croatian Defence Forces (HOS),
which had a Croatian nationalist agenda but unlike the Croat Defense Council
(HVO), fully supported cooperation with the Bosniaks. In June 1992, focus
switched to the towns of Novi Travnik and Gornji Vakuf, where HVO efforts to
gain control were resisted. On June 18, 1992, the Bosnian Territorial Defence
in Novi Travnik received an ultimatum from HVO that included demands to abolish
existing Bosnia and Herzegovina institutions within the town and submit to the
authority of HVO and the Croatian Community of Herzeg-Bosnia, as well as expel all
Muslim refugees. These demands were to be met within 24 hours. The next day,
as demands were not met, an attack was launched. The town’s elementary
school and post office were attacked and damaged.
Vastly under-equipped Bosnian
forces fighting on two fronts were able to repel Croats and gain territory. Bosnia was surrounded by Croat and Serb forces from all sides with no way
to import weapons or food. What saved Bosnia at this time was its vast heavy industrial
complex, which was able to switch to military hardware production. Numerous
ceasefire agreements were signed and breached as advantages were gained and
lost across sides. The UN repeatedly but unsuccessfully attempted to stop the
war, and the much-touted Vance-Owen Peace Plan in the first half of 1993 made
little impact.
On February 22, 1993, the
United Nations Security Council passed Resolution 808, which decided “that
an international tribunal shall be established for the prosecution of persons
responsible for serious violations of international humanitarian law.” On
May 15-16, 96% of Serbs voted to reject the Vance-Owen peace plan. After the
failure of the plan, an armed conflict sprang up between Bosniaks and Croats
over the 30 percent of Bosnia the latter held. The peace plan was one of the
factors leading to the escalation of the conflict as Lord Owen avoided moderate
Croat authorities (pro-unified Bosnia) and negotiated directly with more extreme
elements who were in favor of separation.
On May 25, 1993, the
International Criminal Tribunal for the former Yugoslavia (ICTY) was formally
established by Resolution 827 of the United Nations Security Council. In April
1993, the United Nations Security Council issued Resolution 816, calling on
member states to enforce a no-fly zone over Bosnia-Herzegovina. On April 12,
1993, NATO commenced Operation Deny Flight to enforce this no-fly zone. In an
attempt to protect civilians, the United Nations Protection Force (UNPROFOR),
established during the Croatian War of Independence, saw its
role further extended in May 1993 to protect areas declared as “safe
havens” around Sarajevo, Goražde, Srebrenica, Tuzla, Žepa, and Bihać by
Resolution 824. On June 4, 1993, the United Nations Security Council passed
Resolution 836, authorizing the use of force by UNPROFOR to protect the above-named safe zones.
The Croat-Bosniak war
officially ended on February 23, 1994, when the commander of HVO, General Ante
Roso, and commander of the Bosnian Army, General Rasim Delić, signed a
ceasefire agreement in Zagreb. On March 18, 1994, a peace agreement — the
Washington Agreement — was mediated by the U.S. between the warring Croats
(represented by the Republic of Croatia) and the Republic of Bosnia and
Herzegovina. It was signed in both Washington D.C. and Vienna. The Washington
Agreement ended the war between Croats and Bosniaks and divided the combined
territory held by Croat and Bosnian government forces into ten autonomous
cantons, establishing the Federation of Bosnia and Herzegovina. This reduced
the warring parties to the Federation of Bosnia and Herzegovina, militarily
composed of the Army of the Republic of Bosnia and Herzegovina (ARBiH) and the HVO, and Republika Srpska, composed militarily of the VRS.
The war continued until
November 1995. In July 1995, VRS forces under General Ratko Mladić occupied the
UN safe area of Srebrenica in eastern Bosnia. The resulting Srebrenica
massacre led to the murder of more than
8,000 Muslim Bosniaks, mainly men and boys, in and around the town of
Srebrenica. UNPROFOR, represented on the ground at Srebrenica by a 400-strong
contingent of Dutch peacekeepers, failed to prevent the town’s capture and the
subsequent massacre. The ICTY ruled this event a genocide in the Krstić case.
In line with the
Croat-Bosniak Split Agreement, Croatian forces operated in western Bosnia under
Operation Summer ’95 and in early August launched Operation Storm, aimed at taking
over the Republic of Serb Krajina in Croatia. With this, the Bosniak-Croat
alliance gained winning momentum in the war, taking much of western Bosnia from
the VRS in several operations, including Operation Mistral 2 and Operation
Sana. VRS forces committed several major massacres during 1995: the Tuzla
massacre on May 25, the Srebrenica massacre, and the second Markale massacre on
August 28 (the first Markale massacre occurred on February 5, 1994, when a 120-millimeter mortar shell landed in the center of a marketplace in Sarajevo). On
August 30, the Secretary General of NATO announced the start of Operation Deliberate
Force, which consisted of widespread airstrikes against Bosnian Serb positions
supported by UNPROFOR rapid reaction force artillery attacks. On September 14, 1995,
NATO air strikes were suspended to allow the implementation of an agreement
with Bosnian Serbs for the withdrawal of heavy weapons from around Sarajevo.
Twelve days later, on September 26, an agreement of further basic
principles for a peace accord was reached in New York City between the foreign
ministers of Bosnia and Herzegovina, Croatia, and the Former Republic of Yugoslavia.
A 60-day ceasefire came into effect on October 12, and on November 1, peace
talks began in Dayton, Ohio. The war ended with the Dayton Peace Agreement
signed on November 21, 1995; the final version of the peace agreement was
signed December 14, 1995, in Paris.
37.5.4: Prosecution in the International Criminal Tribunal for the former Yugoslavia
A number of Serbs,
Croats, and Bosniaks were prosecuted following the Bosnian
War, and some trials are still ongoing.
Learning Objective
Detail the cases brought before the ICTY for crimes perpetrated during the
Bosnian War
Key Term
- joint criminal enterprise
-
A legal doctrine used
by the ICTY to prosecute political and military leaders for mass war crimes,
including genocide, committed during the Yugoslav Wars.
The
International Criminal Tribunal for the former Yugoslavia (ICTY) was
established in 1993 as a body within the UN tasked with prosecuting war crimes
committed during the wars in the former Yugoslavia. The tribunal is an ad hoc court located in The Hague, Netherlands.
Both Serbs and Croats were indicted and convicted of systematic war crimes
under the principle of joint criminal enterprise, while Bosniaks were indicted
and convicted of individual ones. Most of the Bosnian-Serb wartime leadership,
such as Biljana Plavšić, Momčilo Krajišnik, Radoslav Brđanin, and Duško Tadić,
were indicted and judged guilty for war crimes and ethnic cleansing.
Major
ICTY Cases
The
former president of Republika Srpska Radovan Karadžić was found guilty of
genocide, war crimes, and crimes against humanity, and sentenced to 40 years
imprisonment on March 24, 2016. He was found guilty of genocide for the
Srebrenica massacre, which aimed to kill “every able-bodied male” and systematically exterminate the Bosnian Muslim community. He was
also convicted of persecution, extermination, deportation, forcible transfer
(ethnic cleansing), and murder in connection with his campaign to drive Bosnian
Muslims and Croats out of villages claimed by Serb forces. Ratko Mladić, the
top military general with command responsibility in the Army of Republika
Srpska, is currently on trial in the ICTY, charged with crimes in connection
with the siege of Sarajevo and the Srebrenica massacre, following a long period
in hiding as he attempted to evade arrest. The closing arguments for his
case were conducted in December 2016 and a verdict is forthcoming. Prosecutors
have argued for nothing less than a life sentence, citing the dissatisfaction
of Bosnians when Karadžić was only given a 40-year sentence.
The
Serbian President Slobodan Milošević was charged with war crimes in connection
with the war in Bosnia, including grave breaches of the Geneva Conventions,
crimes against humanity, and genocide; however, he died in 2006 before his
trial could finish. Milošević was arrested by Yugoslav federal authorities on March
31, 2001, on suspicion of corruption, abuse of power, and embezzlement
following his resignation from the Yugoslav presidency after a disputed presidential
election. The initial investigation into Milošević faltered for lack of
evidence, prompting the Serbian Prime Minister Zoran Đinđić to extradite him to
the ICTY to stand trial for charges of war crimes instead. At the outset of the
trial, Milošević denounced the Tribunal as illegal because it had not been
established with the consent of the UN General Assembly. As a result, he
refused to appoint counsel for his defense and chose to defend himself in the
five years that the trial progressed prior to his death.
Paramilitary
leader Vojislav Šešelj was acquitted by the ICTY in a first-instance verdict on March 31, 2016, on all counts of an alleged joint criminal enterprise to ethnically cleanse large areas of Bosnia-Herzegovina of non-Serbs. The acquittal
was appealed by prosecutors from the Mechanism for International Criminal
Tribunals (MICT), a United Nations Security Council agency that functions as an
overseer and successor to the ICTY. Subsequently, Šešelj led the Serbian
Radical Party in the 2016 elections, and his party won 23 seats in the
parliament.
The ICTY revealed that Alija Izetbegović, President of the
Republic of Bosnia and Herzegovina during the Bosnian War, was also under
investigation for war crimes, although the prosecutor did not find sufficient
evidence over the course of Izetbegović’s lifetime to issue an indictment. Other
Bosniaks convicted of or on trial for war crimes include Rasim
Delić, chief of staff of the Army of Bosnia and Herzegovina, sentenced
to three years’ imprisonment on September 15, 2008, for his failure to prevent the
Bosnian mujahideen, members of the Bosnian army, from committing crimes,
including murder, rape, and torture, against captured civilians and enemy combatants.
Enver Hadžihasanović, a general of the Army of the Republic of Bosnia and
Herzegovina, was sentenced to 3.5 years for his command responsibility over acts of murder and
wanton destruction in Central Bosnia. Hazim Delić was the Bosniak Deputy
Commander of the Čelebići prison camp, which detained Serb civilians. He was
sentenced to 18 years by the ICTY Appeals Chamber on April 8, 2003, for murder
and torture of the prisoners and the rape of two Serbian women.
Many Serbs have accused
Sarajevo authorities of practicing selective justice in the active prosecution
of Serbs for war crimes, while similar acts carried out by Bosniaks have been
ignored or downplayed. Genocide at Srebrenica is the most serious war crime
that any Serbs have been convicted of at the ICTY. Crimes against humanity
(e.g., ethnic cleansing), a charge second in gravity only to genocide, are the most serious war crimes that any Croat has been convicted of. The most serious
war crime a Bosniak has been charged with at the Tribunal is breach of
the Geneva Conventions.
37.6: Globalization
37.6.1: The Development of the Internet
The Internet has evolved
from a government tool used for research to a pervasive social medium.
Learning Objective
Describe the
changes brought on by the advent of the Internet
Key Terms
- World Wide Web
-
An information
space where documents and other web resources are identified by URIs,
interlinked by hypertext links, and accessed via the Internet using a
web browser and web-based applications.
- Digital Revolution
-
The
change from mechanical and analog electronic technology to digital
electronics with the adoption and proliferation of digital computers and
information.
The change from mechanical and
analog electronic technology to digital electronics with the adoption and
proliferation of digital computers and information is known as the Digital Revolution. Implicitly, the term refers to the sweeping changes brought about by digital computing and communication
technology during and after the latter half of the 20th century. Analogous to
the Agricultural Revolution and Industrial Revolution, the Digital Revolution
marked the beginning of the Information Age.
Rise of the Global Internet and
the World Wide Web
Initially, as with its
predecessor networks, the system that would evolve into the Internet was
primarily for government use. However, interest in
commercial use of the Internet quickly grew.
Although commercial use was initially forbidden, the exact definition of commercial use
was unclear and subjective. As a result, during the late 1980s, the first
Internet service provider (ISP) companies were formed. The first commercial
dial-up ISP in the United States was The World, which opened in 1989.
The World Wide Web (sometimes
abbreviated “www” or “W3”) is an information space where
documents and other web resources are identified by URIs, interlinked by
hypertext links, and accessed via the Internet using a web browser and, more
recently, web-based applications. It has become known simply as “the
Web.” As of the 2010s, the World Wide Web is the primary tool billions use
to interact on the Internet, and it has changed people’s lives immeasurably. Tim
Berners-Lee is credited with inventing the World Wide Web in 1989 and
developing in 1990 both the first web server and the first web browser, called
WorldWideWeb (no spaces) and later renamed Nexus. Many others were soon
developed, with Marc Andreessen’s 1993 Mosaic browser (whose team later created Netscape) often credited with sparking the Internet boom of the 1990s.
A boost in web users was
triggered in September 1993 by NCSA Mosaic, a graphic browser that eventually
ran on several popular office and home computers. It was the first web browser
aimed at bringing multimedia content to non-technical users, and included
images and text on the same page, unlike previous browser designs. Andreessen went on to head the company that released Netscape Navigator in 1994, which set off one of the early browser wars with Microsoft’s Internet Explorer (a competition Netscape Navigator
eventually lost). When commercial use restrictions were lifted in 1995, online
service America Online (AOL) offered users connection to the Internet via
their own internal browser.
Web 1.0: 1990s to Early 2000s
In terms of providing context for
this period, mobile cellular devices, which today provide near-universal access, were used for
business but were not a routine household item. Modern social media did not exist, laptops were bulky, and most households
did not have computers. Data rates were slow, and most people lacked the means to record or digitize video, so websites such as YouTube did not yet exist. Media
storage was transitioning slowly from analog tape to digital optical discs (DVD)
and from floppy disc to CD. Technologies that would enable and simplify web development, such as PHP,
modern JavaScript and Java, AJAX, HTML 4 (and its emphasis
on CSS), and various software frameworks, awaited invention and widespread adoption.
From 1997 to 2001,
the first speculative investment bubble related to the Internet took place, in
which “dot-com” companies (referring to the “.com” top
level domain used by businesses) were propelled to exceedingly high valuations
as investors rapidly stoked stock values. This dot-com bubble was followed by a
market crash; however, this only temporarily slowed growth.
The changes that would propel the
Internet into its place as a social system took place during a relatively short
period of about five years, starting from around 2004. They included:
- Accelerating adoption of and familiarity with the necessary hardware (such as
computers).
-
Accelerating storage technology
and data access speeds. Hard drives emerged, eclipsed smaller,
slower floppy discs, and grew from megabytes to gigabytes (and by around 2010,
terabytes). Typical system RAM grew from hundreds of kilobytes to gigabytes. Ethernet, the enabling technology for TCP/IP, moved from common
speeds of kilobits to tens of megabits per second to gigabits per second.
-
High speed Internet and wider
coverage of data connections at lower prices allowed for larger traffic
rates, more reliable traffic, and traffic from more locations.
-
The gradually accelerating
perception of the ability of computers to create new means and approaches to
communication, the emergence of social media and websites such as Twitter and
Facebook, and global collaborations such as Wikipedia (which existed before but
gained prominence as a result).
Web 2.0
The term “Web 2.0”
describes websites that emphasize user-generated content (including
user-to-user interaction), usability, and interoperability. It first appeared
in a January 1999 article called “Fragmented Future” written by Darcy
DiNucci, a consultant on electronic information design. The term resurfaced
around 2002 to 2004 and gained prominence following the first Web 2.0
Conference. In their opening remarks, John Battelle and Tim O’Reilly outlined
their definition of the “Web as Platform”, where software
applications are built upon the Web as opposed to the desktop. The unique
aspect of this migration, they argued, is that “customers are building
your business for you.” They argued that the activities of users
generating content (in the form of ideas, text, videos, or pictures) could be
harnessed to create value.
Web 2.0 does not refer to an
update to any technical specification, but rather to cumulative changes in the
way webpages are made and used. Web 2.0 describes an approach in which sites
focus substantially on user interaction and collaboration in a social media
dialogue. Customers create content in a virtual community, in
contrast to websites where people are limited to the passive viewing of
content. Examples of Web 2.0 include social networking sites, blogs, wikis,
folksonomies, video sharing sites, hosted services, Web applications, and
mashups. This era saw several household names gain prominence through their
community-oriented operation, including YouTube, Twitter, Facebook, Reddit, and
Wikipedia.
The Mobile Revolution
The process of change generally described as
“Web 2.0” was itself greatly accelerated and transformed by the rapid growth in mobile devices. This mobile revolution
meant that computers in the form of smartphones became ubiquitous. People now bring devices everywhere and use them to communicate, shop, seek information, and take and instantly share photos and video. Location-based services and crowd-sourcing became common, with posts tagged by location and websites and services
becoming location aware. Mobile-targeted websites (such as
“m.website.com”) are increasingly designed especially for these new
devices. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips
capable of running at nearly the power of desktops while consuming far less power enabled this stage of Internet development. The term “app” (short for “application program”) came into common use, as did the “app store”.
37.6.2: Ease of Movement
One benefit of globalization and the accompanying improvements in transportation technology is the
ease of travel.
Learning Objective
Explain how travel has become easier and more
universal in the modern age
Key Points
- As
transportation technology improved, travel time and costs decreased
dramatically between the 18th and early 20th centuries.
-
The developments
in technology and transport infrastructure, such as jumbo jets, low-cost
airlines, and more accessible airports, have made many types of tourism more
affordable.
- As of 2014,
there were an estimated 232 million international migrants in the world, and
approximately half were estimated to be economically active. International
movement of labor is often considered important to economic development.
- More
students are seeking higher education in foreign countries and many
international students now consider overseas study a stepping stone to
permanent residency within a country.
- As more people have ties to networks of people and places across the globe
rather than to a current geographic location, people are increasingly marrying
across national boundaries.
-
Because globalization has brought people into greater contact with
foreign cultures, some have come to view one or more globalizing
processes as detrimental to social well-being on a global or local scale, with
xenophobia an issue in many modern societies.
Key Terms
- immigration
-
The international movement of people into a destination country where they do not possess citizenship to settle or reside there, especially as permanent residents or naturalized citizens. People also immigrate to take up employment as migrant workers or temporarily as foreign workers.
- tourism
-
The act of traveling for pleasure, or the theory and practice of touring, the business of attracting, accommodating, and entertaining tourists, and the business of operating tours.
- xenophobia
-
The fear of that which is perceived to be foreign or strange.
An essential aspect of
globalization is increased ease of travel. As transportation technology
improved, travel time and costs decreased dramatically between the 18th and
early 20th centuries. For example, travel across the Atlantic Ocean took up to five weeks in the 18th century, but by the turn of the 20th century it took a mere eight days. Today, modern aviation has made long-distance
transportation quick and affordable.
Tourism
Tourism is travel for
pleasure. Developments in technology and transport infrastructure, such as
jumbo jets, low-cost airlines, and more accessible airports, have made many
types of tourism more affordable. International tourist arrivals surpassed the
milestone of 1 billion for the first time in 2012.
A visa is
a conditional authorization granted by a country to a foreigner allowing them
to enter and temporarily remain within or leave that country. Some countries –
such as those in the Schengen Area – have agreements with other countries
allowing citizens to travel between them without visas. The World
Tourism Organization announced that the number of tourists who required a visa
before traveling was at its lowest level ever in 2015.
Immigration
Immigration is the
international movement of people into a destination country of which they are
not natives or where they do not possess citizenship to settle or
reside there, especially as permanent residents or naturalized citizens, or to
take up employment as a migrant worker or temporary foreign worker. According
to the International Labor Organization, as of 2014, there were an estimated
232 million international migrants in the world (defined as persons outside
their country of origin for 12 months or more), and approximately half were estimated to be employed or seeking
employment. International movement of labor is often seen as important to
economic development. For example, freedom of movement for workers in the
European Union means that people can move freely between member states to live,
work, study, or retire in another country.
Globalization
Globalization is associated
with a dramatic rise in international education. More and more students are
seeking higher education in foreign countries and many international students
now consider overseas study a stepping stone to permanent residency within a
country. The contributions that foreign students make to host nation economies,
both culturally and financially, have encouraged the implementation of further
initiatives to facilitate the arrival and integration of overseas students, including
substantial amendments to immigration and visa policies and procedures.
Transnational Marriage
Transnational marriage is a
marriage between two people from different countries and a by-product of the
movement and migration of people. A variety of special issues arise in
marriages between people from different countries, including those related to
citizenship and culture, which add complexity and challenges to these kinds of
relationships. In an age of increasing globalization, where a growing number of
people have ties to networks of people and places across the globe rather than
to a current geographic location, people are increasingly marrying across
national boundaries.
Reactions
Because globalization has brought people into greater contact with
foreign peoples and cultures, some have come to view one or more globalizing
processes as detrimental to social well-being on a global or local scale.
Xenophobia is the fear of that which is perceived to be foreign or strange.
Xenophobia can manifest itself in the relations and
perceptions of an in-group towards an out-group, including the fear of losing one’s
identity, suspicion of another group’s activities, aggression, and a desire to
eliminate its presence to secure a presumed purity.
37.6.3: International Trade
International trade has become more entrenched
in the domestic policy of states and everyday life of citizens as globalization
increases.
Learning Objective
Identify how trade has changed since the 1990s
Key Term
- International trade
-
The exchange of capital, goods, and services
across international borders or territories.
International trade is the
exchange of capital, goods, and services across international borders or
territories. In most countries, such trade represents a significant share of
gross domestic product (GDP). While international trade has existed throughout
history (for example, Uttarapatha, Silk Road, Amber Road, and salt roads), its
economic, social, and political importance has been on the rise in recent
centuries. Trading globally gives consumers and countries the opportunity to be
exposed to new markets and products. Almost every kind of product can be found
on the international market: food, clothes, spare parts, oil, jewelry, wine,
stocks, currencies, and water. Services are also traded: tourism, banking,
consulting, and transportation. A product sold to the global market is
an export, and a product bought from the global market is an import.
Imports and exports are accounted for in a country’s current account in the
balance of payments.
Industrialization, advanced
technology, globalization, multinational corporations, and outsourcing are all
having a major impact on the international trade system. Increasing
international trade is crucial to the continuance of globalization. Without
international trade, nations would be limited to the goods and services
produced within their own borders. International trade is in principle not
different from domestic trade, as the motivation and behavior of parties
involved in a trade do not change fundamentally regardless of whether trade is
across a border. The main difference is that international trade is
typically more costly than domestic trade, since a border
imposes tariffs, time costs due to border
delays, and costs associated with country differences such as language, the
legal system, or culture.
Another difference between
domestic and international trade is that factors of production such as capital
and labor are typically more mobile within a country than across countries.
Thus, international trade is mostly restricted to goods and services,
and only to a lesser extent to capital, labor, or other factors of
production. Trade in goods and services can serve as a substitute for trade in
factors of production. Instead of importing a factor of production, a country
can import goods that make intensive use of that factor of production and thus
embody it. An example is the import of labor-intensive goods by the United
States from China. Instead of importing Chinese labor, the United States
imports goods produced with Chinese labor.
Supply Chains
The global supply chain
consists of complex interconnected networks that allow companies to produce,
handle, and distribute goods and services to the public worldwide. A
supply chain is a system of organizations, people, activities, information, and
resources involved in moving a product or service from supplier to customer.
Supply chain activities involve the transformation of natural resources, raw
materials, and components into a finished product that is delivered to the end
customer. Corporations manage supply chains to take advantage of cheaper
costs of production. As the world has become more interconnected, resources,
labor, and processes along the chain may be spread across many locations, with the finished product reaching an end point separate from all of them.
E-commerce is the act of
buying or selling online. Electronic commerce draws on technologies such as
mobile commerce, electronic funds transfer, supply chain management, Internet
marketing, online transaction processing, electronic data interchange (EDI),
inventory management systems, and automated data collection systems. Modern
electronic commerce typically uses the World Wide Web for at least one part of
the transaction’s life cycle, although it may also use other technologies such
as email. E-commerce has become an important tool for businesses of all sizes
worldwide, not only to sell to customers, but also to engage with them.
Offshore Outsourcing
Offshore outsourcing is the
practice of hiring an external organization to perform some business functions
(“outsourcing”) in a country other than the one where the products or
services are actually developed or manufactured (“offshore”). It can
be contrasted with offshoring, in which a company moves itself entirely to
another country or functions are performed in a foreign country by a
foreign subsidiary. The widespread use and availability of the Internet has
enabled individuals and small businesses to contract freelancers from all over
the world to get projects done at a lower cost. Crowd-sourcing systems such as
Mechanical Turk and CrowdFlower have added the element of scalability, allowing
businesses to outsource information tasks across the Internet to thousands of
workers. Opponents point out that the practice of sending work overseas by countries
with higher wages reduces their own domestic employment and domestic
investment. Many customer service jobs as well as jobs in the information
technology sectors (data processing, computer programming, and technical
support) in countries such as the United States and the United Kingdom have
been or will potentially be affected.
There are different views on the impact of offshore outsourcing, encapsulated
in the debates over protectionism versus free trade. Some see offshore
outsourcing as a potential threat to the domestic job market in the developed
world and ask for their home governments to enact protective measures or at
least to scrutinize existing trade practices. Others, particularly the
countries who receive work due to offshore outsourcing, see it as an
opportunity. Free-trade advocates suggest economies as a whole will obtain a
net benefit from labor offshoring, but it is unclear if those whose jobs are
displaced receive a net benefit.
37.6.4: Globalization and Democracy
At the turn of the 21st century,
globalization seemingly goes hand-in-hand with political liberalization.
Learning Objective
Give examples of how democratic ideas can be
spread thanks to globalization
Key Points
- Cultural
globalization refers to the transmission of ideas, meanings, and values around
the world to extend and intensify social relations.
- One way in which
shared norms have reshaped the global landscape around the turn of the 21st
century is the liberalization of global society via the spread
of democratic norms.
- With the
ascendancy of the United States as sole global superpower in the aftermath of
the Cold War, liberal democratic norms were spread throughout the world
via U.S. ability to attract and co-opt other countries using soft power.
- Democratic peace
theory posits that democracies are hesitant to engage in armed conflict with
other identified democracies, thus making the liberalization of global society
in the aftermath of the Cold War a positive trend towards worldwide pacifism.
-
Capitalist peace
theory posits that once states reach certain criteria for capitalist economic
development, they are less likely to engage in war with each other and rarely
enter into even low-level disputes.
-
In Thomas L. Friedman’s 1999 book The
Lexus and the Olive Tree, Friedman observed that no two countries with
established McDonald’s franchises had fought a war against each other since
those franchises were established in both countries. In a later interview, he
admitted his theory was somewhat tongue-in-cheek.
Key Term
- soft power
-
A concept developed by Joseph Nye of Harvard
University to describe the ability to attract and co-opt using means of
persuasion other than forceful coercion. The currency of soft power is culture,
political values, and foreign policies.
Cultural globalization
refers to the transmission of ideas, meanings, and values around the world to extend and intensify social relations. This process is marked
by the common consumption of cultures that have been diffused by the Internet,
popular culture media, and international travel. This has added to processes of
commodity exchange and colonization, which have a longer history of carrying
cultural meaning around the globe. The circulation of cultures enables
individuals to partake in extended social relations that cross national and
regional borders. The creation and expansion of such social relations is not
merely observed on a material level. Cultural globalization involves the
formation of shared norms and knowledge with which people associate their
individual and collective cultural identities. It brings increasing interconnection
among different populations and cultures.
Historical Background
One way in which shared
norms have reshaped the global landscape around the turn of the 21st
century is the liberalization of global society via the spread
of democratic norms. This trend began in the 1980s as economic malaise and
resentment of Soviet oppression contributed to the collapse of the Soviet
Union, paving the way for democratization across the Iron Curtain. The most
successful of the new democracies were those geographically and culturally
closest to western Europe, many of which are now members or candidate members
of the European Union. The liberal trend spread to some nations in Africa in the
1990s, most prominently in South Africa. Some recent examples of attempts at
liberalization include the Indonesian Revolution of 1998, the Bulldozer
Revolution in Yugoslavia, the Rose Revolution in Georgia, the Orange Revolution
in Ukraine, the Cedar Revolution in Lebanon, the Tulip Revolution in
Kyrgyzstan, and the Jasmine Revolution in Tunisia.
Additionally, with the
ascendancy of the United States of America as sole global superpower in the
aftermath of the Cold War, liberal democratic norms were spread further
throughout the world via U.S. ability to attract and co-opt other countries
using soft power. Both Europe and the U.S. have promoted human rights and
international law throughout the world based on the strength of their
international reputations, influence, and culture. For example, the U.S. is one
of the most popular destinations for international students, who in turn transmit ideas about, and enthusiasm for, liberal democracy back to their home countries. Additionally, American films, among other pieces of easily
transmittable culture, have contributed to the Americanization of other cultures
around the world. The information age has also led to the rise of soft power
resources for non-state actors and advocacy groups. Through the use of global
media, and to a greater extent the Internet, non-state actors have been able to
increase their soft power and put pressure on governments that can ultimately
affect policy outcomes.
Democratic and Capitalist
Peace Theories
Democratic peace theory
posits that democracies are hesitant to engage in armed conflict with other
identified democracies, thus making the liberalization of global society in the
aftermath of the Cold War a positive trend towards worldwide pacifism. The state
of peace is not considered to be singularly associated with democratic states,
although there is recognition that it is more easily sustained between
democratic nations. Among proponents of the democratic peace theory, several
factors are held as motivating peace between democratic states:
- Democratic leaders are
forced to accept culpability for war losses to a voting public;
-
Publicly accountable
states are inclined to establish diplomatic institutions for resolving
international tensions;
-
Democracies are not inclined
to view countries with similar policies and governing doctrines as hostile;
-
Democracies tend to possess
greater public wealth than other states, and therefore eschew war to preserve
infrastructure and resources.
Those who dispute this theory
often do so on grounds that it conflates correlation with causation, and that
the academic definitions of democracy and war can be manipulated so as to
manufacture an artificial trend.
Capitalist peace theory was
developed in response to criticisms of democratic peace theory. The capitalist
peace theory posits that once states reach certain criteria for capitalist
economic development, they are less likely to engage in war with each other
and rarely enter into even low-level disputes. There are five primary theories
that have attempted to explain the capitalist peace.
- Trade interdependence:
Capitalist countries that have deeply interconnected trade networks with one
another are hesitant to engage in hostilities that might threaten the health of
the existing trade relationship and thereby threaten benefits derived from that
relationship.
-
Economic norms theory: In
contract-intensive societies, individuals have a loyalty towards the state that
enforces the contracts between strangers. As a consequence, individuals in
these societies expect that their states enforce contracts reliably and
impartially, protect individual rights, and make efforts to enhance the general
welfare. Moreover, with the assumption of bounded rationality, individuals
routinely dependent on trusting strangers in contracts will develop the habits
of trusting strangers and preferring universal rights, impartial law, and
liberal democratic government. In contrast, individuals in contract-poor
societies will develop the habits of abiding by the commands of group leaders
and distrusting those from out-groups. As a result, theorists link causation of
peace with liberal economies rather than liberal political systems, with the proliferation
of democratic norms occurring only secondarily to the establishment of
contract-intensive economies.
- Free capital markets/capital
openness: This theory, originally introduced by Erik Gartzke, Quan Li, and
Charles Boehmer, argues that nations with a high level of capital openness are
able to avoid conflict with each other and maintain lasting peace. In
particular, nations with freer capital markets are more dependent on
international investors because those investors are likely to withdraw if the
country is engaged in a war or interstate conflict. As a result, leaders of
states give greater credibility to threats made by countries with higher levels
of capital openness, causing the aforementioned countries to be more peaceful
than others by avoiding the possibility of misrepresentation of information.
-
Size of government: This explanation of capitalist peace relies
on a definition of capitalism that assumes capitalist states will also have
limited governments, and in turn, large private sectors. Given this definition,
the idea is that smaller governments are more dependent than larger or
socialist governments on raising taxes for fighting wars. This makes the
commitments of nations with smaller governments more credible than those with
larger ones, allowing for nations with smaller governments, and thus “capitalist”
economies, to be better positioned for avoiding conflicts.
-
Ruling others by force: This theory argues that if men want to oppose war, they must
oppose statism. So long as they hold the tribal notion that the individual is sacrificial
fodder for the collective, that some men have the right to rule others by
force, and that some (any) alleged “good” can justify it, there can be
no peace within a nation and no peace among nations. Most definitions of
capitalism are opposed to the strictures of statism and therefore, capitalist
societies must tend towards peace.
Golden Arches Theory
In Thomas L. Friedman’s 1999
book The Lexus and the Olive Tree,
Friedman observed that no two countries with established McDonald’s franchises
had fought a war against each other since those franchises were established in
both countries. He framed that observation as a theory, arguing that when a country reaches a level of economic development at which it has a middle class strong enough to support a McDonald’s network, it becomes a “McDonald’s country” and is no longer interested in fighting wars. Shortly after the
book was published, NATO bombed Yugoslavia. On the first day of the bombing,
McDonald’s restaurants in Belgrade were demolished by angry protesters and were
rebuilt only after the bombing ended. In the 2000 edition of the book, Friedman
argued that this exception proved the rule because the war ended quickly as a
result of the Serbian people’s desire not to lose their place in a global system
“symbolized by McDonald’s” (Friedman 2000: 252–253).
Critics have pointed to two
other conflicts fought before 2000 as counterexamples, depending on what one
considers a war:
- The 1989 United States
invasion of Panama; and
-
The 1999 Indian-Pakistani
war over Kashmir, known as the Kargil War. Both countries had (and continue to
have) McDonald’s restaurants. Although the war was not fought in all possible
theaters (such as Rajasthan and Punjab borders), both countries mobilized their
military along common borders and both countries made threats involving their
nuclear capabilities.
In a 2005 interview with The
Guardian, Friedman said that he framed his theory “with tongue slightly in cheek.”
36.1: Mexico
36.1.1: The Porfiriato
Jose de la Cruz Porfirio Diaz Mori strengthened his regime
to create the internal order necessary to foster economic development;
however, his authoritarian grasp on the presidency sparked the Mexican
Revolution.
Learning Objective
Describe the Porfiriato regime
Key Points
- Jose de la Cruz Porfirio Diaz Mori was a Mexican
soldier and politician. As president, he served seven terms in office for
a total of 35 years (1876 to 1911).
- Diaz initially served only one term in office in
light of his past resistance to Lerdo’s reelection policy. During his second
term, Diaz amended the constitution twice, initially allowing for two terms in
office, then removing all restrictions on re-election.
- As a popular military hero and astute
politician, Diaz determined that his main goal as president was to create the
internal order necessary to foster economic development throughout the country.
His eventual establishment of peace, termed the Pax Porfiriana, became one of
his crowning achievements.
- Diaz developed many pragmatic and personalist
approaches to the political conflicts that occurred during his first term in
office and was skilled at playing interest groups against each other to
create the illusion of democracy and quell rebellions before unrest began.
- Diaz’s massive display of electoral fraud during the election of 1910 sparked the Mexican Revolution.
Key Terms
- Porfiriato
-
The
period during which Jose de la Cruz Porfirio Diaz Mori and his allies ruled
Mexico, from 1876 to 1911.
- Plan de la Noria
-
A
revolutionary call to arms with the intent of ousting Mexican President Benito
Juarez.
Jose de la Cruz Porfirio Diaz Mori was a Mexican soldier and
politician, a veteran of the Reform War and the French intervention in
Mexico. As president, he served seven terms in office for a total of 35 years (1876 to 1911). The period during which he and his allies
ruled the country became known as the Porfiriato.
The Campaign of “No Re-election”
In 1871, Diaz ran for president against President Juarez and Vice President Lerdo de Tejada. Juarez won in July and was confirmed by Congress in October, but Diaz claimed the election was fraudulent. On November 8, 1871, Diaz launched the Plan de la Noria, a revolutionary call to arms with the intent of ousting Mexican President Benito Juarez. The plan was supported by a number of local rebellions throughout the country, but ultimately failed. Juarez died while in office in
1872, and when Vice President Lerdo succeeded him to the presidency, he offered
amnesty to the rebels, which Diaz accepted. Subsequently, Diaz took up
residency in Veracruz and served as the region’s representative in the
legislature.
Over time, opposition to Lerdo’s presidency grew as
anticlerical sentiment and labor unrest increased, and Diaz saw an
opportunity to plot a more successful rebellion. As a result, he left Mexico in
1875 for New Orleans and Brownsville, Texas, with his political ally Manuel
Gonzalez. A year later, he issued the Plan of Tuxtepec as a call to arms
against Lerdo, who was running for another presidential term. Lerdo was
re-elected in July 1876, but continued rebellion and political unrest before
and after the election forced him out of office. In November, Diaz occupied
Mexico City and Lerdo was exiled to New York. General Juan Mendez was named
provisional president, but Diaz was elected to the office in the beginning of
1877. One of the first actions of Diaz’s government was to amend the 1857 liberal constitution to prevent re-election to the presidency.
Diaz initially served only one term in office in light of
his past resistance to Lerdo’s re-election policy. In order to side-step the
convention, he handpicked his successor, Manuel Gonzalez, with the intention of
maintaining his power in everything but name. During the four-year period of
Gonzalez’s rule, corruption and official incompetence abounded, so when Diaz
ran for office again in 1884, he was greeted with open arms by the public.
At that point, very few people remembered the “no re-election” promise
that had characterized his previous campaign, though some underground political
papers reversed his previous slogan, “Sufragio Efectivo, No Reeleccion”, to
“Sufragio Efectivo No, Reeleccion”. During his second term, Diaz amended the
constitution twice, initially allowing for two terms in office, then
removing all restrictions on re-election.
Political Career
As a popular military hero and astute politician, Diaz determined
that his main goal as president was to create the internal order necessary to
foster economic development throughout the country. His eventual establishment
of peace, termed the Pax Porfiriana, became one of his crowning achievements. To achieve this goal, Diaz created a systematic and methodical regime
with a staunch military mindset. He dissolved all local and federal-level
authorities that had once existed in order to ensure that all leadership
stemmed from his office. Legislative authorities that remained within Mexico
were stacked almost entirely with his closest and most loyal allies. Diaz also
suppressed the media and controlled the Mexican court system.
Diaz developed many pragmatic and personalist approaches to
the political conflicts that occurred during his first term in office. Although
known for standing with radical liberals, he made sure not to come across as a
liberal ideologue while in office and maintained control of his political
allies via generous systems of patronage. He was skilled at catering to interest groups and playing them off one another to create the illusion of democracy and quell rebellions before unrest began. He maintained the structure of elections so that a facade of liberal democracy remained during his rule, but his administration became famous for its suppression of civil society and public revolts. He also paid the US $300,000
in settlement claims to secure recognition of his regime and met with
Ulysses S. Grant in 1878 while the latter visited Mexico.
Collapse
On February 17, 1908, Diaz gave an interview to an
American journalist, James Creelman of Pearson’s
Magazine, in which he stated that Mexico was ready for democracy and
elections. Diaz also stated that he would retire and allow other candidates to
compete for the presidency. Immediately, opposition groups began the search for
suitable candidates. As candidates began to campaign, Diaz decided he was not
going to retire, but instead run against a candidate he deemed appropriate. He
chose Francisco Madero, an aristocratic but democratically leaning reformer.
Madero was a landowner and very similar ideologically to Diaz, but hoped for
other Mexican elites to rule alongside the president. Ultimately, Diaz had Madero
jailed during the election.
Despite this, Madero gained substantial popular
support. However, when the results were announced, Diaz was proclaimed
re-elected almost unanimously in a massive display of electoral fraud, arousing
widespread anger throughout the country. Madero called for revolt against Diaz
and the Mexican Revolution began. Diaz was forced from office and fled the country for Spain on May 31, 1911.
36.1.2: The Mexican Revolution
The
Mexican Revolution took place over the course of a decade and radically
transformed Mexican culture and government.
Learning Objective
Outline
the events of the Mexican Revolution
Key Points
- The outbreak of the Mexican Revolution is
attributed to Porfirio Diaz’s failure in resolving the problem of presidential
succession. In the short term, events were precipitated by the results of the
1910 presidential election in which Diaz committed massive electoral fraud and
declared himself the winner against his then-jailed opponent, Francisco Madero.
- Despite Madero’s lack of political experience,
his election as president in October 1911 raised high expectations for positive
change. These expectations were tempered by the Treaty of Ciudad Juarez, which stipulated
that certain essential elements of the Diaz regime, such as the federal army,
remain in place.
- New institutional freedoms under Madero’s
regime and his failure to reward the revolutionary leaders who
brought him to power led to his resignation and the beginning of the
Huerta dictatorship.
- Although Huerta’s regime attempted to legitimize
his hold on power and demonstrate its legality by pursuing reformist policies
in the first several months of his presidency, after October 1913, he
dropped all attempts to rule within a legal framework and murdered political
opponents while battling revolutionary forces that had united against his
regime.
- On October 26, 1913, Huerta dispensed with the
Mexican legislature, surrounding the building with his army and arresting
congressmen he perceived to be hostile to his regime. Following a number of
military defeats, Huerta stepped down from the presidency and fled the country
less than a year later.
- Huerta’s resignation marked the dissolution of
the federal army and the beginning of an era of civil war among the
revolutionary factions that united to oppose Huerta’s regime.
- Venustiano Carranza and Pancho Villa’s forces
fought each other at the Battle of Celaya on April 6-15, 1915, which ended in victory for the Constitutionalists and established Carranza as Mexico’s political leader.
-
As
revolutionary violence subsided in 1916, the leaders of Mexico met to draw up a
new, strongly nationalist constitution. Though Carranza was able to enact many
reforms, his regime remained vulnerable to Zapata in the south and Villa in the
north.
Key Terms
- Plan de Ayala
-
A
document drafted by revolutionary Emiliano Zapata during November 1911,
denouncing President Madero for his perceived betrayal of revolutionary ideals
and setting out a vision of future land reform.
- Treaty of Ciudad Juarez
-
A
peace treaty signed between then-President of Mexico Porfirio Diaz and
revolutionary Francisco Madero on May 21, 1911, ending the fighting
between their respective forces and ending the initial phase of the Mexican
Revolution.
The Mexican Revolution was a major armed struggle from 1910 through 1920 that radically transformed Mexican culture and
government. Its outbreak is attributed to Porfirio Diaz’s failure to resolve the problem of presidential succession. In the short term, events were
precipitated by the results of the 1910 presidential election in which Diaz
committed massive electoral fraud and declared himself the winner against his
then-jailed opponent, Francisco Madero. Armed conflict ousted Diaz from power
and a new election was held in 1911, in which Madero won the presidency.
The Madero Presidency, 1911-1913
Despite Madero’s lack of political experience, his election
as president in October 1911 raised high expectations for positive change.
However, these expectations were tempered by the Treaty of Ciudad Juarez,
signed on May 21, 1911, between Diaz and Madero, which put an end to fighting
between the two factions but also stipulated that certain essential elements of
the Diaz regime, such as the federal army, stay in place. Madero called for the
rebels who had brought him to power to return to civilian life. In their place,
Madero increasingly relied upon the federal army to deal with armed rebellions
that broke out in Mexico from 1911 to 1912.
The press, newly unencumbered under Madero’s less authoritarian regime, embraced its newfound freedom by making the president himself the object of criticism. Organized labor likewise exercised its new freedoms under the Madero regime by staging strikes, which foreign entrepreneurs found threatening
to their business concerns. A rise in anti-American sentiment
accompanied these developments. The anarcho-syndicalist Casa del Obrero Mundial
was founded in September 1912 and served primarily as a center of agitation and
propaganda rather than exclusively as a labor union. A number of
political parties also proliferated across the country, including the National
Catholic Party, which was particularly strong in a number of regions.
Madero, unlike Diaz, failed to reward those who had brought
him to power, though many revolutionary leaders expected personal rewards or major
reforms in return for their service. Emiliano Zapata, in particular, long
worked for land reform in Mexico and expected Madero to make some major
changes. However, during a personal meeting with the guerrilla leader, Madero
told Zapata that the agrarian question needed careful study, giving rise to the
belief that Madero, a member of a rich northern landholding family, was
unlikely to implement comprehensive agrarian reform. In response, Zapata
drafted the Plan de Ayala in November 1911, declaring himself in
rebellion against Madero. Zapata renewed guerrilla warfare in the state of
Morelos and Madero was forced to send the federal army to deal, unsuccessfully,
with his forces.
Likewise, the northern revolutionary general Pascual Orozco
felt slighted after being put in charge of large forces of rurales in Chihuahua
instead of being chosen as governor of the same region. After being passed over
and witnessing Madero’s refusal to agree to social reforms calling for better
working hours, pay, and conditions, Orozco assembled his own army to rebel
against the president, aggravating U.S. businessmen and other foreign investors
in the northern region. For many, these upheavals signaled Madero’s inability
to maintain the order that had underpinned Diaz’s 35-year long regime. Madero
dispatched General Victoriano Huerta of the federal army to put down Orozco’s
revolt in April 1912. Ultimately, Huerta was successful in ending the
rebellion, leading many conservative forces to tout him as a powerful
counter-force to Madero’s regime.
A number of other rebellions occurred during a period known
as the Ten Tragic Days. During this time, U.S. Ambassador Henry Lane Wilson
brokered the Pact of the Embassy, formalizing an alliance between Huerta and Felix
Diaz, a nephew of the former president and rebel leader. The treaty ensured
that Huerta would become provisional president of Mexico following the
resignations of Madero and his vice president. However, rather than being sent
into exile, the two were murdered during transport to prison, which, though shocking, did not prevent recognition of Huerta’s regime by most world governments. Following Huerta’s assumption of the presidency, the former revolutionaries had no formally organized opposition to the established government.
The Huerta Dictatorship, 1913-1914
Although Huerta’s regime attempted to legitimize his hold on
power and demonstrate its legality by pursuing reformist policies in the first
several months of his presidency, after October 1913 he dropped all
attempts to rule within a legal framework and murdered political opponents
while battling revolutionary forces that had united against his regime. For
these reasons, Huerta’s presidency is usually characterized as a dictatorship.
Huerta’s regime was supported initially by foreign and domestic business
interests, landed elites, the Roman Catholic Church, and the German and British
governments, and Mexico was militarized to a greater extent than ever before. Within
a month of the coup that brought Huerta to power, several rebellions broke out
across the country. The Northern revolutionaries fought under the name of the
Constitutionalist Army and Zapata continued his rebellion in Morelos under the
Plan de Ayala, despite Huerta’s interest in land reform as an issue. Huerta
offered peace to Zapata, but Zapata rejected it.
Incoming U.S. President Woodrow Wilson refused to recognize
Huerta’s government despite the urging of Ambassador Wilson, who played a
key role in the regime change. In the summer of 1913, President Wilson recalled
Ambassador Wilson and sent his own personal representative John Lind to
continue U.S.-Mexican diplomatic relations. Lind was a progressive who
sympathized with the Mexican revolutionaries and urged other European powers to
join America in non-recognition of the Huerta regime. He also urged Huerta to
call elections and not stand as a candidate, using economic and military threats to back up his demands. Mexican conservatives were also seeking an
elected civilian alternative to Huerta’s regime and brought together a number
of candidates in a National Unifying Junta. The fragmentation of the
conservative political landscape reinforced Huerta’s belief that he would not
be removed from power, whereas the proliferation of political parties and
presidential candidates proved to the country’s conservative elite that there
was a growing disillusionment with Huerta and his regime.
On October 26, 1913, Huerta dispensed with the Mexican
legislature, surrounding the building with his army and arresting congressmen
he perceived to be hostile to his regime. Congressional elections went ahead,
but the fervor of opposition candidates decreased. The October 1913 elections
ended any pretension of constitutional rule within Mexico and civilian
political activities were banned. Additionally, many prominent Catholics were
arrested and Catholic periodicals were suppressed. Huerta’s position continued
to deteriorate and his army suffered several defeats during this time. Finally,
in mid-July 1914, he stepped down and fled the country. He died six months into his exile, after being arrested by US authorities and held at Fort Bliss, Texas. Huerta’s resignation also marked the dissolution of the
federal army and the beginning of an era of civil war among the revolutionary
factions that united to oppose Huerta’s regime.
War of the Winners, 1914-1915
The revolutionary factions that remained in Mexico gathered
at the Convention of Aguascalientes in October 1914. During this time, there
was a brief break in revolutionary violence. Rather than facilitate a
reconciliation among the different factions, however, Venustiano Carranza and
Pancho Villa engaged in a power struggle, leading to a definitive break between
the two revolutionaries. Carranza expected to be named First Chief of the
revolutionary forces, but his supporters were overpowered during the convention
by Zapata and Villa’s supporters, who called on Carranza to resign executive
power. Carranza agreed to do so only if Villa and Zapata also resigned and went into exile. He also
stipulated that there be a pre-constitutionalist government to carry out the necessary political and social reforms the country needed
before a fully constitutional government was reestablished. As a result of these conditions, the
convention declared Carranza in rebellion and civil war resumed.
Northern general Villa formed an alliance with the southern
leader Zapata. The resultant combined forces were called the Army of the
Convention. In December 1914, their forces moved on Mexico City and captured
it, Carranza’s forces having fled shortly beforehand. In practice, however, the
Army of the Convention did not survive as an alliance beyond this initial
victory against the Constitutionalists. Shortly thereafter, Zapata returned to
his southern stronghold and Villa resumed fighting against Carranza’s forces in
the north. In the meantime, the United States sided with Carranza, who was based in American-occupied Veracruz. The United States timed its exit
from Veracruz to benefit Carranza, sending his forces munitions and
formally recognizing his government in 1915.
Villa’s forces met with those of Carranza’s allies at the
Battle of Celaya on April 6-15, 1915, which ended in a decisive
Constitutionalist victory due to their superior military tactics. As a result,
Carranza emerged as Mexico’s political leader with support from the army.
Constitutionalism Under Carranza, 1915-1920
As revolutionary violence subsided in 1916, the leaders of
Mexico met to draw up a new constitution. The Mexican Constitution of 1917 that
resulted was strongly nationalist. Article 27 provided the government with the
right to expropriate natural resources from foreign interests, enabling land
reform. There were also provisions to protect organized labor and articles extending state power over the Roman Catholic Church within Mexico.
Carranza also pushed for women’s rights and equality during his presidency, which
helped to transform women’s legal status within the country.
Carranza, though able to enact many reforms, was
still vulnerable to revolutionary unrest. Zapata remained active in Morelos,
which due to its proximity to Mexico City remained a vulnerability for the
Carranza government. The Constitutionalist Army, renamed the Mexican National
Army, was dispatched to fight Zapata’s Liberating Army of the South, and government
agents assassinated Zapata in 1919. Carranza also sent generals to track down
Villa in the north, but they were only able to capture some of his men. Due to
the legacy of Diaz’s “no re-election” policy, it was politically untenable for
Carranza to seek re-election after his first term, so instead he endorsed political
unknown Ignacio Bonillas when his term in office was nearly finished. However,
some existing northern revolutionary leaders found the prospect of a civilian
Carranza puppet candidate untenable and hatched a revolt against Carranza
called the Plan of Agua Prieta. As a result, Carranza attempted to flee Mexico,
but died on his way to the Gulf Coast.
36.1.3: The National Revolutionary Party
The National Revolutionary Party held power
consistently from 1929 to 2000 by settling disputes among different
political interest groups within the framework of a single party machine.
Learning Objective
Describe the platform and political dominance of
the National Revolutionary Party
Key Points
- Political
unrest, including continued violence after the armed phase of the Mexican
Revolution, led to the foundation of the National Revolutionary Party, or PNR.
- The PNR would
undergo name changes over the years it remained in power. In 1938 its name was
changed to Partido de la Revolucion Mexicana (PRM), and in 1946, the party was
renamed Partido Revolucionario Institucional (PRI).
- The party was split
functionally into mass organizations that represented various interest
groups. Settling disputes within the framework of a single political party
helped prevent legislative gridlock and militarized rebellions, but only
provided an illusion of democracy to its constituents.
- Over time, the party became synonymous with political corruption and
voter suppression, and the growth of opposition parties led to the PRI’s loss of
the presidency in 2000.
Key Terms
- National Revolutionary Party
-
The Mexican political party founded in 1929 that
held executive power within the country for an uninterrupted 71 years. It
underwent two name changes during its time in power: once in 1938, to Partido
de la Revolucion Mexicana (PRM), and again in 1946, to Partido Revolucionario
Institucional (PRI).
- Democratic Current
-
A movement within the PRI founded in 1986 that
criticized the federal government for reducing spending on social programs to
increase payments on foreign debt. PRI members who participated in the
Democratic Current were expelled from the party and formed the National
Democratic Front (FDN).
History
Although the armed phase of
the Mexican Revolution ended in 1920, Mexico continued to experience political
unrest in the years that followed. In 1928, president-elect Alvaro Obregon was
assassinated, giving rise to a political crisis. This led to the founding
of the National Revolutionary Party (in Spanish, Partido Nacional
Revolucionario, or PNR) the following year by sitting president Plutarco Elias
Calles. Calles’ intention in founding the PNR was to end the violent
power struggles taking place between factions of the Mexican
Revolution and guarantee the peaceful transmission of power across presidential
administrations. In its first years, the PNR was the country’s only effective political machine. In fact, from 1929 until 1982, the party won every presidential election by well over 70 percent of the vote.
In 1938, Lazaro Cardenas,
the president of Mexico at the time, renamed the PNR to Partido de la
Revolucion Mexicana, or PRM. The PRM’s revised aim was to establish a socialist
democracy of workers. In practice, however, this was never achieved, and the
PRM was split functionally into many mass organizations that
represented different interest groups. Settling disputes within the framework
of a single political party helped to prevent legislative gridlock and
militarized rebellions, which were common during the Mexican
Revolution. For these reasons, its supporters maintained that the party itself
was crucial to the modernization and stability of Mexico as a whole. In fact,
the first four decades of PRM rule were dubbed the “Mexican Miracle” due to the
economic growth that occurred as a result of import substitution, low
inflation, and the implementation of successful national development plans. Between
1940 and 1970, Mexican GDP increased sixfold and peso-dollar parity was
maintained. Party detractors, however, pointed to the lack of transparency and
democratic processes, which ultimately made the lower levels of administration
subordinate to the whims of the party machine.
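As a rough consistency check (an illustrative calculation, not a figure from the source), a sixfold increase in GDP over the thirty years from 1940 to 1970 implies an average annual growth rate of roughly six percent:

$$(1 + g)^{30} = 6 \quad\Longrightarrow\quad g = 6^{1/30} - 1 \approx 0.062$$

That is, about 6.2 percent per year, broadly in line with the six-to-seven percent annual growth figures cited for this period in the following section.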
Corruption and Opposing
Political Parties
As in previous regimes, the
PRM retained its hold over the electorate through massive electoral fraud.
Toward the end of every president’s term, consultations with party leaders
would take place and the PRM’s next candidate would be selected. In other
words, the incumbent president would pick his successor. To support the party’s
dominance in the executive branch of government, the PRM sought dominance at
other levels as well. It held an overwhelming majority in the Chamber of
Deputies as well as every seat in the Senate and every state governorship.
As a result, the PRM became a symbol over time of corruption, including
voter suppression and violence. In 1986, Cuauhtemoc Cardenas, the former
Governor of Michoacan and son of the former president Lazaro Cardenas, formed
the Democratic Current, which criticized the federal government for reducing
spending on social programs to increase payments on foreign debt. Members of
the Democratic Current were expelled from the party, and in 1987, they formed
the National Democratic Front, or Frente Democratico Nacional (FDN). In 1989, the
left wing of the party, by then renamed the Partido Revolucionario Institucional, or PRI, went on to form its own party, the Party of the Democratic Revolution. The
conservative National Action Party, likewise, grew after 1976 when it obtained
support from the business sector in light of recurring economic crises. The
growth of both these opposition parties resulted in the PRI losing the
presidency in 2000.
36.1.4: The Mexican Economic Miracle
The Mexican Economic Miracle refers to the
country’s inward-focused development strategy, which produced sustained
economic growth from the 1940s until the 1970s.
Learning Objective
Explain the Mexican
Economic Miracle
Key Points
- The reduction of
political turmoil that accompanied national elections during and
immediately after the Mexican Revolution was an important factor in laying the
groundwork for economic growth.
- During the
presidency of Lazaro Cardenas, significant policies were enacted in the social
and political spheres that had major impacts on the economic policies of the
country as a whole.
- The Mexican
government promoted industrial expansion through public investment in
agricultural, energy, and transportation infrastructure.
- Growth was
sustained by Mexico’s increasing commitment to provide quality education
options for its general population.
- Mexico benefited
substantially from World War II due to its participation supplying labor and
materials to the Allies.
- In the years following World War II, President Miguel Aleman Valdes
(1946-52) instituted a full-scale import-substitution program that stimulated
output by boosting internal demand.
Key Terms
- Bracero Program
-
A series of laws and diplomatic agreements
initiated on August 4, 1942, that guaranteed basic human rights and a minimum
wage of 30 cents an hour to temporary contract laborers traveling from Mexico
to the United States.
- import substitution industrialization
-
A trade and economic policy that advocates
replacing foreign imports with domestic production.
The Mexican Economic Miracle
refers to the country’s inward-focused development strategy, which produced
sustained economic growth of 3-4 percent with modest 3 percent inflation
annually from the 1940s until the 1970s.
Creating the Conditions for
Growth
The reduction of political
turmoil that accompanied national elections during and immediately after
the Mexican Revolution was an important factor in laying the groundwork for
economic growth. This was achieved by the establishment of a single,
dominant political party that subsumed clashes between various interest groups
within the framework of a unified party machine. During the presidency of
Lazaro Cardenas, significant policies were enacted in the social and political
spheres that had major impacts on the economic policies of the country. For instance, Cardenas nationalized oil concerns in 1938. He also
nationalized Mexico’s railways and initiated far-reaching land reform.
Some of these policies were
carried on, albeit more moderately, by Manuel Avila Camacho, who succeeded him
to the presidency. Camacho initiated a program of industrialization in early
1941 with the Law of Manufacturing Industries, famous for beginning
the process of import-substitution within Mexico. Then in 1946, President Miguel
Aleman Valdes passed the Law for Development of New and Necessary Industries,
continuing the trend of inward-focused development strategies.
Growth was sustained by
Mexico’s increasing commitment to primary education for its general population.
The primary school enrollment rate increased threefold from the late 1920s to the 1940s, making the workforce more productive by the 1940s.
Mexico also made investments in higher education during this period, which
encouraged a generation of scientists and engineers to enable new levels of
industrial innovation. For instance, in 1936 the Instituto Politecnico Nacional
was founded in the northern part of Mexico City. Also in northern Mexico, the
Monterrey Institute of Technology and Higher Education was founded in 1942.
World War II
Mexico benefited substantially from World War II by supplying labor and materials to the
Allies. The Bracero Program was a series of laws and diplomatic agreements
initiated on August 4, 1942, that guaranteed basic human rights and a minimum
wage of 30 cents an hour to temporary contract laborers who came to the United
States from Mexico. Braceros, meaning manual laborers (literally, “those who work using their arms”), were intended to fill the agricultural labor shortage caused by wartime conscription. The program outlasted the war and offered employment
contracts to 5 million braceros in 24 U.S. states, making it the
largest foreign worker program in U.S. history. Mexico also received cash
payments for its contributions of materials useful to the war effort, which
infused its treasury with reserves. With these robust resources building up
after the war concluded, Mexico was able to embark on large infrastructure
projects.
Camacho used part of the
accumulated savings from the war to pay off foreign debts, which improved
Mexico’s credit substantially and increased investors’ confidence in the
government. The government was also in a better position to more widely distribute material
benefits from the Revolution given the robust revenues from the war
effort. Camacho used funds to subsidize food imports, which primarily benefited urban workers. Mexican workers also received high salaries during the
war, but due to the lack of consumer goods, spending did not increase
substantially. The national development bank, Nacional Financiera, was
founded under Camacho’s administration and funded the expansion of the
industrial sector.
Import-Substitution and
Infrastructure Projects
In the years following World
War II, President Miguel Aleman Valdes (1946-52) instituted a full-scale
import-substitution program that stimulated output by boosting internal demand.
The economic stability of the country, high credit rating, increasingly
educated work force, and savings from the war provided excellent conditions
under which to begin a program of import substitution industrialization. The
government raised import controls on consumer goods but relaxed them on capital
goods such as machinery. Capital goods were then purchased using international
reserves accumulated during the war and used to produce consumer goods
domestically. The share of imports subject to licensing requirements rose from
28 percent in 1956 to more than 60 percent on average during the 1960s and
approximately 70 percent during the 1970s. Industry accounted for 22 percent of
total output in 1950, 24 percent in 1960, and 29 percent in 1970. One industry
that was particularly successful was textile production. Mexico became a
desirable location for foreign transnational companies like Coca-Cola,
Pepsi-Cola, and Sears to establish manufacturing branches during this period. Meanwhile,
the share of total output arising from agriculture and other primary activities
declined during the same period.
The Mexican government
promoted industrial expansion through public investment in agricultural,
energy, and transportation infrastructure. Cities grew rapidly after 1940,
reflecting the shift of employment towards industrial and service centers
rather than agriculture. To sustain these population changes, the government
invested in major dam projects to produce hydroelectric power, supply drinking
water to cities and irrigation water to agriculture, and control flooding. By
1950, Mexico’s road network had also expanded to 21,000 kilometers, some 13,600
of which were paved.
Mexico’s strong economic performance continued into the 1960s when GDP
growth averaged around seven percent overall and approximately three percent
per capita. Consumer price inflation also only averaged about three percent
annually. Manufacturing remained the country’s dominant growth sector,
expanding seven percent annually and attracting considerable foreign
investment. By 1970, Mexico diversified its export base and became largely
self-sufficient in food crops, steel, and most consumer goods. Although imports
remained high, most were capital goods used to expand domestic production.
36.1.5: Art and Culture in 20th-Century Mexico
The Mexican Modernist School used large-scale
murals to reinforce political messages, especially those that emphasized
Mexican rather than European themes.
Learning Objective
Give examples of major works of art in Mexico
during the 20th century
Key Points
- The Mexican
Revolution had a dramatic effect on Mexican art, and the Mexican government commissioned
murals for public buildings to reinforce political messages, especially those
that emphasized Mexican rather than European themes.
- The Mexican muralist
movement reached its height in the 1930s with four main artists: Diego
Rivera, David Alfaro Siqueiros, Jose Clemente Orozco, and Fernando Leal. It is
now the most studied aspect of Mexico’s art history.
- Diego Rivera’s murals
were greatly influenced by his leftist political leanings, dealing with Mexican
society and reflecting the country’s 1910 Revolution.
- Frida Kahlo de Rivera was a Mexican painter known for her
self-portraits. Though she painted canvases instead of murals, she is still
considered part of the Mexican Modernist School due to the emphasis of Mexican
folk culture and use of color in her works.
Key Terms
- Mexican Modernist School
-
The artistic movement within Mexico that was especially
prolific in the 1930s, glorifying the
Mexican Revolution and redefining the Mexican people vis-à-vis their
indigenous and colonial past. Large-scale murals were its preferred
medium.
- surrealist
-
A cultural and artistic movement that mixed dream
and reality into one composition.
Mexican Muralism and
Revolutionary Art
The Mexican Revolution had a
dramatic effect on Mexican art. The government allied itself with
intellectuals and artists in Mexico City and commissioned murals for public
buildings to reinforce political messages, especially those that emphasized
Mexican rather than European themes. The production of art in conjunction with
government propaganda is known as the Mexican Modernist School, or the Mexican
Muralist Movement. Many such works glorified the Mexican Revolution or
redefined the Mexican people vis-à-vis their indigenous and colonial past. The
first of these commissioned works was done by Fernando Leal, Fermin Revueltas,
David Alfaro Siqueiros, and Diego Rivera at San Ildefonso, a prestigious Jesuit
boarding school.
The muralist movement
reached its height in the 1930s with four main artists: Diego Rivera,
David Alfaro Siqueiros, Jose Clemente Orozco, and Fernando Leal. It is now the
most studied aspect of Mexico’s art history. These four artists were
trained in classical European techniques and many of their early works were
imitations of then-fashionable European painting styles. Many Mexican government buildings featured murals glorifying Mexico’s pre-Hispanic past and incorporating it into the
definition of Mexican identity. Many of these muralists also revived the fresco
technique in their mural work, although some like Siqueiros moved to industrial
techniques and materials such as the application of pyroxilin, a commercial
enamel used for airplanes and automobiles.
Diego Rivera
Rivera painted his first
significant mural, Creation, in the
Bolivar Auditorium of the National Preparatory School in Mexico City in January
1922 while guarding himself with a pistol against right-wing students. In the
autumn of 1922, Rivera participated in the founding of the Revolutionary Union
of Technical Workers, Painters and Sculptors, and later that year he joined the
Mexican Communist Party. His murals were greatly influenced by his leftist political
leanings, dealing with Mexican society and reflecting the country’s 1910
Revolution. He developed his own native style based on large, simplified
figures and bold colors. A strong Aztec influence was present in his works, and much of his art emulated the Mayan steles
of the classical era.
Frida Kahlo
Frida Kahlo de Rivera was a Mexican painter known for her
self-portraits. While she painted canvases instead of murals, she is still
considered part of the Mexican Modernist School due to the emphasis of Mexican
folk culture and use of color in her works. She was married to muralist Diego
Rivera and like Rivera was an active communist. Kahlo was influenced by
indigenous Mexican culture as demonstrated by her use of bright colors,
dramatic symbolism, and primitive style. She often included monkeys in her
works; while this is usually a symbol of lust in Mexican mythology, Kahlo’s portrayal was tender and protective. Christian and Jewish themes
were often depicted in Kahlo’s work. She combined elements of classic
religious Mexican traditions with surrealist components in her paintings.
36.2: Argentina
36.2.1: Argentina Before the Great Depression
Argentina, a non-industrialized country,
experienced recession after World War I and before the global depression hit,
but unlike neighboring countries, maintained relatively healthy growth rates
throughout the 1920s.
Learning Objective
Describe Argentina’s economic status before the
global depression hit.
Key Points
- Argentina was
not an industrialized country in the lead-up to the Great Depression and lacked the energy sources necessary to make it so.
- One of Argentina’s
most lucrative industries was agriculture, and its exports of frozen beef,
especially to Great Britain, proved highly profitable.
- Argentina, like
many other countries, entered a recession after the beginning of World War
I as the international flow of goods, capital, and labor declined.
- Foreign
investment in Argentina came to a complete standstill from which the country
never fully recovered.
- Nonetheless, Argentina maintained relatively healthy growth
throughout the 1920s, unlike neighboring countries, because it was relatively unaffected
by the worldwide collapse in commodity prices. However, the global depression
would eventually halt economic expansion within the country.
Key Term
- Southern
Cone
-
A geographic region composed of the southernmost areas of South America, south of and around the Tropic of Capricorn.
Traditionally, it comprises Argentina, Chile, and Uruguay.
It is bounded on the west by the Pacific Ocean and to the south by the junction
between the Pacific and Atlantic Oceans.
Argentina was not an
industrialized country by the standards of Britain, Germany, or the United
States in the lead-up to the Great Depression, and lacked energy sources
such as coal or hydropower to make it so. Experiments in oil extraction during
the early 20th century had poor results. Yacimientos Petroliferos
Fiscales (YPF), the first state-owned oil company in Latin America, was founded
in 1922 as a public company responsible for 51% of oil production, with the
remaining 49% in the hands of private concerns. Moreover, one of Argentina’s
most lucrative industries was agriculture, and its exports of frozen beef,
especially to Great Britain, proved highly profitable following the invention
of refrigerated ships in the 1870s.
Argentina, like many other
countries, entered a recession following the beginning of World War I as the
international flow of goods, capital, and labor declined. Additionally,
following the opening of the Panama Canal in 1914, Argentina and other Southern
Cone economies declined as investors turned their sights to Asia and the
Caribbean. Even beef exports took a hit as Britain imposed new restrictions on
meat imports in the late 1920s. Argentinian ranchers responded by switching
from pastoral to arable production, but lasting damage had already been done to
the Argentine economy. The United States viewed Argentina, and to a lesser
extent Brazil, as a potential rival on the world markets, making collaboration
less likely between the two countries. In light of the United States’ emergence
from WWI as a political and financial superpower, this would prove particularly
harmful to Argentina.
Meanwhile, foreign
investment into Argentina came to a complete standstill from which the country
never fully recovered. As a result, investable funds became concentrated over
time at a single institution: the Banco de la Nacion Argentina (BNA). This made
Argentina’s financial system vulnerable to rent-seeking. Re-discounting and
non-performing loans grew steadily at the BNA after 1914 as a result of crony
loans to other banks and the private sector, polluting the bank’s balance
sheet. The state bank allowed private banks to shed their risks, using money
not backed by collateral as security, then lent the private banks cash at a
rate of 4.5%, below the rate the BNA offered to its customers on certificates
of deposit. Ultimately, neither the Buenos Aires Stock Exchange nor any of the
private domestic banks within the country would develop rapidly enough to
fully replace the loss of foreign capital, the bulk of which was sourced
from now heavily indebted Great Britain.
Nonetheless, Argentina maintained relatively healthy growth
throughout the 1920s, unlike neighboring countries like Brazil and Chile,
because it was relatively unaffected by the worldwide collapse in commodity prices. Similarly, unlike many European countries, Argentina had not abandoned the gold standard during this time, which contributed to the country’s relative financial stability. Automobile ownership in the country in 1929 was the highest in the Southern hemisphere, an indicator of the
healthy purchasing power of the middle class on the eve of the Great
Depression. However, the economic downturn would eventually
halt economic expansion within the country.
36.2.2: The Infamous Decade
The 1930s in Argentina is referred to as the
Infamous Decade due to rampant electoral fraud, persecution of political
opposition parties, and generalized government corruption.
Learning Objective
Explain why the 1930s were referred to as the
Infamous Decade
Key Points
- Argentina’s
Infamous Decade refers to the period of time that began in 1930 with Jose Felix
Uriburu’s coup d’etat against standing President Hipolito Yrigoyen and ended
with Juan Peron’s rise to power after the military coup of 1943.
- Lieutenant
General Uriburu’s regime was strongly supported by rightist intellectuals and
his government adopted severe measures to prevent reprisals and
counter-revolutionary tactics by friends of the ousted regime.
- Agustin Pedro
Justo Rolon’s administration was tarnished by constant rumors of corruption and is best remembered for the outstanding diplomatic work of
his Foreign Minister.
- One of the most
controversial successes of Justo’s presidency was the signing of the
Roca-Runciman Treaty in 1933.
- Justo’s first
minister of the Treasury, Alberto Hueyo, imposed highly restrictive economic measures. Hueyo was eventually replaced by Federico Pinedo, whose plan for government intervention in the economy was even more significant.
- Pinedo began
Argentinian industrial development via a policy of import substitution and
created Argentina’s Central Bank.
- Roberto
Marcelino Ortiz was fraudulently elected president and assumed his office in
February 1938. He attempted to clean up the country’s corruption problem and
cancelled fraudulent elections won by conservative Alberto
Barcelo.
- In June 1942,
Ortiz resigned the presidency due to sickness and died a month later. He was
replaced by Vice President Ramon S. Castillo.
- On June 4, 1943,
a nationalist secret society within the army called the Grupo de Oficiales
Unidos (GOU) overthrew Castillo in a coup.
Key Terms
- import substitution
-
A trade and economic policy that advocates
replacing foreign imports with domestic production.
- Infamous Decade
-
The period of time in Argentina beginning in
1930 characterized by electoral fraud, the persecution of politica