Human Geography

Licensing Information

This text was adapted by OpenCourseWare under an Attribution 4.0 International (CC BY 4.0) license.

Chapter 1: Introduction to Human Geography

  • Geography: The Science of Where, How, and Why
  • Scientific Inquiry
  • Geographic Perspective
  • Map Interpretation
  • Geospatial Technology

Chapter 2: Population and Migration

  • Population
  • Demographic Transition Model
  • Overpopulation
  • Migration

Chapter 3: Cultural Patterns and Processes

  • Understanding Race and Ethnicity
  • Understanding Culture
  • Geography of World Languages
  • Geography of World Religions

Chapter 4: Political Borders, Boundaries, and Governments

  • Defining Nation-States
  • Political Identities
  • International Relations
  • Challenges to Nation-States

Chapter 5: Sustainable Development

  • The Industrial Revolution
  • Economic Geography
  • Human Development Index
  • Social and Economic Inequality
  • Globalization and International Trade
  • Sustainable Development

Chapter 6: Food, Water, and Agriculture

  • The Roots of Agriculture
  • Types of Agriculture
  • Agricultural Regions
  • Population and Food Production
  • Environmental Impact of Agriculture

Chapter 7: Rural and Urban Landscapes

  • Defining Cities and Urban Centers
  • Megacities and Urban Sprawl
  • Cities as Cultural and Economic Centers
  • Cities as Environmental and Sustainable Centers

Chapter 8: Global Environmental Issues

  • Depletion of Natural Resources
  • Environmental Pollution
  • Anthropogenic Climate Change
  • Renewable Resources

Chapter 9: Living with Disasters

  • Natural Hazards and Looming Catastrophes
  • Theory of Plate Tectonics
  • Geologic Hazards
  • Weather Hazards

Chapter 9: Living with Disasters

  • Understand how humans constantly live with natural hazards and looming catastrophes.
  • Describe the basics of the theory of plate tectonics and its influence on earthquakes and volcanoes.
  • Explain the various types of weather hazards that humans have learned to live with.

9.1 Natural Hazards and Looming Catastrophes

Science Behind Natural Hazards

Because of the scientific method, we understand why and where most natural disasters occur. For example, because of the theory of plate tectonics, we now understand why nearly 90 percent of all natural disasters occur around the rim of the Pacific Ocean, in a zone called the Ring of Fire. That same theory has helped to explain why some volcanoes are more explosive than others. We also understand that different tectonic plate boundaries produce different fault lines and thus different types of earthquakes.

Natural hazards also have seasons, especially those controlled by external forces. The United States has more tornadoes than the rest of the world combined, and most of them occur in the spring. Landslides are also most common in the spring, when melting snow over-saturates the ground. Wildfires are common in the middle of summer, when the land is dry and thunderstorms tend to produce lightning without any precipitation. And hurricanes tend to peak in late August and into September, when the ocean is warmest.

Since hazards are predictable in some manner, it becomes important to develop some kind of warning system. A prediction states that an event will occur at a specified time, date, and intensity. It is like saying a major snowstorm will reach Salt Lake City at 4:30 PM for the commute home. A forecast is slightly different: it states the probability of something occurring, such as a 40 percent chance of showers today. Forecasts are much broader than predictions.

One final note I'd like to discuss is the difference between a watch and a warning. A watch is issued when the conditions for a particular event are right. If a thunderstorm is strong enough and rotating, it is possible that a tornado may form; or if an earthquake with a magnitude of 7.5 strikes somewhere in the ocean, a tsunami watch may be issued because the earthquake was strong enough to create one. A watch does not mean the event will occur. But if a tornado is spotted on the ground, or an ocean sensor records an approaching tsunami, then a warning is sent out to the areas that could be impacted.

Natural Hazards are Connected

In order to understand how to prepare for a natural hazard, a risk assessment must be conducted. The risk of a potential hazard is defined as "the product of the probability of that event occurring times the consequence should it occur."

Risk = Probability of Disaster x Consequence of Disaster

It is important to determine the potential risk a location has for any particular disaster in order to know how to prepare for one. Referring back to Salt Lake City, the probability of an earthquake occurring anytime soon is small, but the consequences in lives and destruction would be very high; thus there is a high risk of an earthquake striking Salt Lake City. One of the limiting factors of risk assessment is knowing the probability of a disaster: too often, scientific data is lacking as to how often a disaster occurs at a particular location.
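
To make the risk formula concrete, here is a minimal sketch in Python. The probabilities and dollar consequences below are invented for illustration only; they are not real hazard data for any city.

```python
# A minimal sketch of the risk formula: risk = probability x consequence.
# All numbers below are hypothetical illustrations, not measured hazard data.

def risk(probability_per_year, consequence):
    """Expected loss per year: the probability of the disaster occurring
    in a given year times the consequence should it occur."""
    return probability_per_year * consequence

# Hypothetical hazard A: frequent but mild (say, a small seasonal flood).
flood_risk = risk(probability_per_year=0.20, consequence=1_000_000)

# Hypothetical hazard B: rare but severe (say, a major urban earthquake).
quake_risk = risk(probability_per_year=0.005, consequence=50_000_000_000)

print(f"Flood risk:      ${flood_risk:,.0f} expected loss per year")
print(f"Earthquake risk: ${quake_risk:,.0f} expected loss per year")
# Even with a low probability, the enormous consequence makes the
# earthquake the higher-risk hazard, which is the Salt Lake City situation.
```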

Hazards, Disasters, and Catastrophes

In May 2008, China was rocked by a magnitude 8.0 earthquake that killed over 80,000 people. A week earlier, a cyclone struck Burma, killing 130,000. On January 12, 2010, a magnitude 7.0 earthquake killed nearly 300,000 people and leveled Haiti's capital city of Port-au-Prince. On March 11, 2011, a magnitude 9.0 earthquake off the coast of eastern Japan generated a tsunami, killing around 16,000 people. Are natural disasters getting worse? Not really; rather, more and more humans are living in hazard-prone places. Over the last 70 years, the world's population has tripled to 6.7 billion, and it is expected to keep growing exponentially, reaching 9 billion by 2050. Exponential growth means the world's population does not grow linearly (in a straight line), but rather as a percentage of an ever-larger base. Our increased population size has caused air quality to suffer, reduced the availability of clean drinking water, increased the world's extreme poverty rate, and made us more prone to natural hazards.
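
The difference between linear and exponential growth is easy to see with a short sketch. The 0.7 percent annual growth rate below is a hypothetical figure, chosen only so the compound path lands near the 9 billion the text cites; it is not an official demographic projection.

```python
# Contrast linear growth (a fixed amount each year) with exponential
# growth (a fixed percentage each year). The growth rate is hypothetical.

population = 6.7e9        # starting population, from the text
growth_rate = 0.007       # hypothetical 0.7% annual growth

linear_step = population * growth_rate   # fixed yearly increment

linear = exponential = population
for year in range(2008, 2051):
    linear += linear_step                # straight-line growth
    exponential *= 1 + growth_rate       # percentage (compound) growth

print(f"Linear by 2050:      {linear / 1e9:.1f} billion")
print(f"Exponential by 2050: {exponential / 1e9:.1f} billion")
# Compound growth is computed on an ever-larger base each year,
# which is why it eventually outpaces any straight-line projection.
```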

There is also a relationship between the magnitude of an event (the energy released) and its frequency (the interval between episodes). The more frequently earthquakes occur at a particular location, the weaker they tend to be, because the built-up energy is being released at a fairly constant rate. But if there are long intervals between one earthquake and the next, the energy can build and ultimately produce a stronger earthquake. That is the problem with earthquakes along the Wasatch Front of Utah: the interval between earthquakes tends to be about 1,500 years, so the magnitude tends to be high because of the built-up energy. In a sense, we should want to get this earthquake over with, because the longer the fault waits, the worse it will be.
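
A rough sketch shows why long quiet intervals imply larger earthquakes: strain accumulates at a roughly constant loading rate, so the slip (and energy) available for release scales with the time since the last rupture. The 1 millimeter-per-year loading rate below is a hypothetical round number for illustration; only the 1,500-year interval comes from the text.

```python
# Why long intervals mean big earthquakes: slip deficit = rate x time.
# The loading rate here is a hypothetical value, not a measured one.

mm_per_year = 1.0         # hypothetical long-term loading rate (mm/yr)
interval_years = 1_500    # recurrence interval cited for the Wasatch Front

slip_deficit_m = mm_per_year * interval_years / 1000.0
print(f"Slip stored after {interval_years} years: about {slip_deficit_m:.1f} m")
# Releasing more than a meter of stored slip in a single rupture implies a
# large earthquake; a fault that ruptures every few decades instead
# releases its energy in many smaller events.
```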

There are two types of effects caused by natural disasters: direct and indirect. Direct effects, also called primary effects, include destroyed infrastructure and buildings, injuries, separated families, and even death. Indirect effects, sometimes called secondary effects, are things like contaminated water, disease, and financial losses. In other words, indirect effects are things that happen after the disaster has occurred.

How we choose to build our cities will greatly determine how many lives are saved in a disaster. For example, we should not be building homes in areas that are prone to landslides, liquefaction, or flash floods. Rather, these places should be left as open space such as parks, golf courses, or nature preserves. This is a matter of proper zoning laws, which are controlled by local governments. Other ways we can reduce the impact of natural disasters are evacuation routes, disaster preparedness and education, and building codes so that our buildings do not collapse on people.

So what is the difference between a natural hazard, a disaster, or a catastrophe? Using direct quotes from page 6 of the textbook, the author defines each as follows:

  • A hazard is any natural process that poses a threat to human life or property. The event itself is not a hazard; rather, a process becomes a hazard when it threatens human interests.
  • A disaster is the effect of a hazard on society, usually as an event that occurs over a limited span in a defined geographic area. The term disaster is used when the interaction between humans and a natural process results in property damage, injuries, or loss of life.
  • A catastrophe is a massive disaster with significant deaths, injury, and economic loss.

9.2 Theory of Plate Tectonics

Structure of Earth

The earth consists of three main layers: the core (an inner and an outer core), the mantle, and the crust. The earth's core consists of two parts, a liquid outer core and a solid inner core, both made of iron and nickel from the early make-up of the planet, where temperatures range from 8,600 to 9,600 degrees Fahrenheit. The next and largest layer is called the mantle, which makes up two-thirds of Earth's mass. The mantle is called a plastic solid, which means it has the ability to flow very slowly. Heat from the earth's core causes the mantle to convect, like water over a stove but much more slowly, and it is the mantle's convection that is the driving force of plate tectonics.

The surface layer of the earth is called the crust, and it makes up only 1 percent of Earth's mass. The crust is subdivided into two components: oceanic and continental crust. Oceanic crust is only about 3 miles thick, but it is slightly denser than continental crust. Most of this oceanic rock is basalt, a dark, dense rock. Continental crust is much thicker, averaging between 20 and 25 miles, but it is slightly less dense than oceanic crust; the main type of rock on the continents is granite. So if these two types of crust were to collide, what do you think would happen to the oceanic crust? As a whole, notice that the crust is lighter than the mantle. It is sometimes said that the crust "floats" on the mantle like an iceberg in water, and that is not too far from the truth; the concept is called isostasy. Finally, the crust is the coldest, most rigid, and most brittle layer, with lots of folds and fractures.

Theory of Plate Tectonics

The driving force of earthquakes and volcanoes is described by the theory of plate tectonics. The theory states that the earth's surface is broken into several major tectonic plates along with many smaller ones. Each tectonic plate consists of oceanic and continental crust and moves around the earth's surface like a bumper car because of convection within the mantle.

The theory also explains that the majority of earth's earthquakes and volcanoes occur along the boundaries of these tectonic plates as they either grind past or underneath each other. 

There are three major types of tectonic plate boundaries: convergent, divergent, and transform. Let's first look at convergent plate boundaries, which can be broken down into three subcategories.

Recall that oceanic crust is denser than continental rock like granite. Thus when these two tectonic plates collide, the denser oceanic crust subducts underneath the lighter continental crust. If the subducting rock becomes stuck, vast amounts of energy build up. Once the pressure and energy become too great, the rock ruptures, creating powerful earthquakes. As the subducted material sinks further, it begins to melt under great heat and pressure, becomes less dense as it melts, and rises as magma to form dangerous composite volcanoes. Examples of oceanic-to-continental convergence include the Andes Mountains in South America and the Cascades in the western United States, both part of the Pacific Ring of Fire.

With oceanic-to-oceanic convergence, the denser of the two plates subducts beneath the other. Just like oceanic-to-continental convergence, this plate boundary can generate powerful earthquakes and volcanoes; but instead of volcanoes on land, volcanic islands form, such as Japan, the Aleutian Islands of Alaska, and Indonesia. The great Indonesian earthquake of 2004, which produced the devastating tsunami, was created by this process, as was the 2011 earthquake and tsunami in Japan.

When two continental plates converge, instead of subduction, the two similar tectonic plates buckle upward to create large mountain ranges, like a massive car pile-up. This is called continental-to-continental convergence, and it geologically creates intense folding and faulting rather than volcanic activity. Examples of mountain ranges created by this process are the Himalayas, as India collides with Asia; the Alps in Europe; and the Appalachian Mountains in the United States, formed when the North American plate collided with the African plate as Pangaea was assembling. The Kashmir earthquake of 2005, which killed over 80,000 people, occurred because of this process, as did the 2008 earthquake in China, which killed nearly 85,000 people shortly before the Summer Olympics.

When two tectonic plates move away from each other, or when a tectonic plate tears itself apart, divergent boundaries form. As divergence occurs, shallow earthquakes develop, along with volcanoes along the rift areas. When the process begins, a valley such as the Great Rift Valley in Africa develops. Over time that valley can fill with water, creating linear lakes. If divergence continues, a sea can form, like the Red Sea, and finally an ocean, like the Atlantic. Look at the eastern half of Africa and notice its linear lakes: eastern Africa is tearing apart, from these lakes, to the Great Rift Valley, and up to the Red Sea. Notice how the coastlines of the Red Sea look like they could be fit back together. The ultimate divergent boundary is the Atlantic Ocean, which began forming when Pangaea broke apart.

Transform boundaries occur when two tectonic plates slide (or grind) past each other in parallel. The most famous transform boundary is the San Andreas Fault, where the Pacific plate (which Los Angeles and Hawaii are on) grinds past the North American plate (which San Francisco and the rest of the United States are on) at a rate of about 3 inches a year. Geologists have stated that San Francisco should expect another disastrous earthquake within the next 30 years. Another important transform boundary is the North Anatolian Fault in Turkey. This powerful fault last ruptured in 1999 at Izmit, Turkey, killing 17,000 people in 48 seconds.
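
The 3 inches per year cited for the San Andreas Fault adds up quickly, as a little unit conversion shows; the 150-year time span below is an arbitrary human-scale interval, not a forecast.

```python
# Convert the San Andreas slip rate to metric and project the offset
# that accumulates over an arbitrary 150-year interval.

inches_per_year = 3.0
cm_per_year = inches_per_year * 2.54          # 7.62 cm/yr

years = 150
offset_m = cm_per_year * years / 100.0

print(f"{cm_per_year:.2f} cm/yr x {years} years = {offset_m:.1f} m of offset")
# Roughly 11 m of plate motion accumulates in 150 years, comparable to
# the several meters of sudden slip released in a single great earthquake.
```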

9.3 Geologic Hazards

Earthquakes

An earthquake is a sudden motion or trembling in the earth caused by the abrupt release of slowly accumulated energy. All earthquakes occur along a fault, which is a fracture in the earth's crust where tectonic movement occurs. Where the actual break occurred along the fault is called the focus (also called the hypocenter) and the epicenter is the point on the Earth's surface that lies directly above the focus and is where the strongest shock wave is normally felt.

Recall that all around the planet, tectonic plates are moving because of convection in the mantle. Tectonic plates are composed of two types of crust: oceanic and continental. Oceanic crust, which is made mostly of basalt, is denser than continental crust, which is made of granite. When these tectonic plates come into contact, the denser oceanic crust subducts below the continental crust. Sometimes when two tectonic plates come into contact, they become stuck. As the rocks begin to bend under tectonic forces, large amounts of strain energy build up. When the stress becomes too great for the rocks to hold, segments may suddenly snap, releasing large amounts of energy; this is known as the elastic rebound theory.

There are several types of faults on which earthquakes occur, depending on whether the fault results from convergent, divergent, or transform tectonic forces. Geologists use old mining terms to distinguish between different types of faults. Think of a miner walking down into the earth along a fault line: the ground the miner is walking on is called the foot-wall, and the ceiling where the miner would hang a lantern is called the hanging-wall.

Strike-slip faults occur along transform boundaries, where tectonic plates move horizontally past each other. Rivers, roads, fences, and other features can be deformed where they cross these fault lines. Examples of strike-slip faults are the San Andreas Fault in the United States and the North Anatolian Fault in Turkey.

Normal faults are common along divergent plate boundaries. As extensional forces pull the crust apart, the foot-wall is forced upward while the hanging-wall slides downward. This can create a series of valleys (called grabens) and mountains (called horsts). Examples of mountain ranges and valleys created by normal faulting are the Grand Tetons, the Basin and Range of the western United States, and the Wasatch Front in Utah.

Reverse faults are caused by compressional forces as tectonic plates collide, forcing one plate to rise above another. In mining terminology, movement along a reverse fault causes the hanging-wall to rise and the foot-wall to drop. The angle of a reverse fault is about 45 degrees; if the angle of the fault is shallower than 45 degrees, it is called a thrust fault. When two plates collide, intense folding and faulting can occur. Reverse and thrust faults are found where convergent boundaries are common, such as the northern Rocky Mountains, the Alps, the Himalayas, and the Appalachian Mountains.

Volcanoes

Earth is made up of a series of tectonic plates, each consisting of oceanic basalt and continental granite. Because of convection within the mantle, new oceanic crust is created at divergent boundaries, forming oceanic ridges. Where tectonic plates converge, the denser oceanic basalt subducts below the lighter continental granite. As the oceanic crust subducts deep enough, it begins to melt under great heat and pressure to form molten rock called magma. The molten rock is less dense than the surrounding rock and thus rises toward the surface to create volcanoes. Underground, magma cools into rock much more slowly than at the surface because the heat is trapped. When magma does reach the surface, geologists call it lava.

Shield volcanoes tend to be the largest volcanoes, and even the tallest mountains, in the world. These volcanoes have gentle slopes that arc in the shape of a Roman shield; it is their low-viscosity lava flows that produce the gentle slopes. Eruptions tend to be mild in comparison to other volcanoes, but lava flows can destroy property and vegetation. The low-viscosity magma can flow not only on the surface as lava but also underground in lava tubes. The best-known shield volcanoes are those of Hawaii. There are two types of lava flows: pahoehoe, a ropy lava that flows easily (low viscosity), and aa, a blocky lava with higher viscosity that does not flow well.

Cinder cone volcanoes are the smallest type of volcano, ranging from 300 to 650 feet high. The volcano is built up by eruptions of solid pyroclastic material, specifically tephra, around the volcanic vent. Many people live near cinder cone volcanoes because the weathered pyroclastic material becomes fertile soil for agriculture. These volcanoes kill few people but can destroy property.

Composite volcanoes are some of the most dangerous volcanoes on the planet. They tend to occur along oceanic-to-continental and oceanic-to-oceanic convergent boundaries, which produce highly viscous magma that erupts violently when it reaches the surface. They are also called stratovolcanoes or andesite volcanoes because they erupt pyroclastic material of a volcanic rock called andesite, which builds up the volcano, followed by lava flows that hold the material in place; this creates stratified layers within the volcano. Examples of composite volcanoes include Mount St. Helens, Mount Rainier, and Mount Pinatubo. NASA's Earth Observatory has a great time-lapse of Mount St. Helens covering 1979 to 2013.

Composite and other explosive volcanoes can erupt so violently that they sometimes collapse in on themselves, or actually blow themselves apart, producing calderas. One of the most powerful volcanoes in the world, Yellowstone, is a massive caldera that has collapsed several times. Some calderas fill with water to produce beautiful lakes, such as Crater Lake in Oregon, which occupies the caldera of Mount Mazama.

The theory of plate tectonics alone could never explain why some volcanoes form away from any tectonic plate boundary. These anomalous volcanoes are called hot spots. They tend to form within tectonic plates, in areas where the lithosphere is weak, which allows magma to rise to the surface and create volcanoes. Though convection within the mantle causes tectonic plates to move, the hot spot itself does not: it stays stationary while the tectonic plate moves across it. Examples of hot spots include Hawaii and Yellowstone.

Hawaii is a chain of shield volcanoes built on top of a hot spot: a series of volcanic islands created as the Pacific Plate has moved across it. The Hawaiian volcanoes have gentle slopes and include some of the most active volcanoes in the world.

9.4 Weather Hazards

Atmospheric processes and energy exchanges are driven by Earth's energy balance and linked to climate and weather. Hurricanes, thunderstorms, tornadoes, blizzards, ice storms, dust storms, heat waves, as well as flash flooding resulting from intense precipitation, are all natural processes that are hazardous to people. These severe hazards affect considerable portions of the planet and are responsible for causing significant death and destruction each year.

Flash Floods

Flash floods are the number one weather-related cause of death in the United States. The National Weather Service states that "flash floods are short-term events, occurring within 6 hours of the causative event (heavy rain, dam break, levee failure, rapid snow-melt and ice jams) and often within 2 hours of the start of high intensity rainfall. A flash flood is characterized by a rapid stream rise with depths of water that can reach well above the banks of the creek. Flash flood damage and most fatalities tend to occur in areas immediately adjacent to a stream or arroyo. Additionally, heavy rain falling on steep terrain can weaken soil and cause mud slides, damaging homes, roads and property."

Urbanized areas are susceptible to flash floods because soil and vegetation are removed and replaced by concrete, roads, and buildings. When intense precipitation occurs, the water has nowhere to go. Learn more about flash floods from the National Weather Service.

Tornadoes

One of the most violent and destructive forces of weather is the tornado. The NWS states that "a tornado is a violently rotating (usually counterclockwise in the northern hemisphere) column of air descending from a thunderstorm and in contact with the ground."

Tornadoes range in size from 300 feet to over two miles wide, last from minutes to hours, and travel from a few miles to over 250 miles at speeds of 30 to 65 mph. About 75 percent of all the tornadoes in the world occur in the United States; in fact, the United States has more tornadoes than the rest of the world combined, most of them in a region of the central plains called Tornado Alley.

What makes tornadoes so destructive are the wind speeds within them. Atmospheric pressure within a tornado can be 10 percent lower than the air surrounding the tornado, causing air to flow into the tornado from all directions. As the air flows into and up a tornado, the moisture begins to cool and condense into a cloud allowing the tornado to be seen. Debris picked up by the tornado will also cause it to darken. National Geographic has a great interactive website on tornadoes called Forces of Nature.

November 2013 saw a series of intense storm systems in the United States and the Philippines. One was the massive tornado outbreak of November 17, 2013, in Illinois, where it is believed that an incredible 70 tornadoes struck the region.

Tropical Cyclones

Tropical cyclones are considered some of the most powerful weather systems on the planet because of their size, strength, and potential loss to life and property. Tropical cyclones go by different names depending on geography; in North and Central America they are called hurricanes, in the northwestern portion of the Pacific Ocean near China and Japan they are called typhoons, and in the Indian Ocean and Australia they are named cyclones. They all have winds exceeding 74 mph, can be hundreds of miles wide, and tower over 40,000 feet above sea level.

Tropical cyclones require warm ocean water and humid air, along with a low-pressure system, to generate the most powerful storms on the planet. Scientists are concerned that warming ocean temperatures, currently being recorded by satellites using infrared technology, could lead to more powerful storm systems like hurricanes.

Another very real event in recent disaster history is the destruction of the Philippines by Typhoon Haiyan in November 2013. In general, typhoons are more powerful than hurricanes, and Typhoon Haiyan was among the most powerful typhoons ever recorded. The latest tally of the devastation is that 13 million people were directly affected, over 4 million were displaced by the storm, over 2 million need food assistance, and it is believed that 7,000 are dead.

Summary

Living with potential natural disasters and catastrophes is just part of living on Planet Earth, and as human populations continue to grow and settle in disaster-prone regions, the social and economic risks will continue to rise. Natural disasters also appear to be getting worse; that is partly a function of population growth, but for some disasters it is also a function of human influence. The magnitude 7.0 earthquake that struck Haiti on January 12, 2010 was strong, but the catastrophic death toll of 100,000 to 300,000 (estimates vary) resulted from extreme poverty and a lack of building codes. By contrast, the magnitude 9.0 earthquake in Japan on March 11, 2011 that generated the tsunami was 100 times more powerful in terms of ground shaking, yet the death toll was much lower, at around 16,000. That is still an enormous catastrophe, but not to the extent of Haiti.
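
The "100 times more powerful" comparison follows from the logarithmic design of the magnitude scale: each whole-magnitude step corresponds to roughly 10 times larger ground motion and about 32 times more energy. A quick check:

```python
# Magnitude arithmetic for the Haiti (M 7.0) and Japan (M 9.0) earthquakes.
# Amplitude scales as 10^(delta M); energy scales as ~10^(1.5 * delta M).

haiti_m, japan_m = 7.0, 9.0
dm = japan_m - haiti_m

shaking_ratio = 10 ** dm            # ground-motion amplitude ratio
energy_ratio = 10 ** (1.5 * dm)     # radiated-energy ratio

print(f"Ground shaking: {shaking_ratio:.0f} times larger")   # 100x
print(f"Energy released: {energy_ratio:.0f} times larger")   # ~1000x
```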

It appears that humans may be playing a part in the increase in disasters related to weather and climate. Tornadoes, tropical storms, floods, droughts, and famines have always existed, but humans may be contributing to disruptions of weather patterns, to the ozone hole, and to variations in regional and global climates. Scientific data show that warming oceans are beginning to rise due to glacial melt and thermal expansion, which will likely create more powerful tropical storms like Super Typhoon Haiyan. Overuse of available fresh water is causing many places to dry up and leading to the expansion of deserts, called desertification. This in turn is creating an increase in epic famines, like the one seen in Somalia between 2010 and 2012 that killed nearly 260,000 people. So the era of increased natural disasters is likely here to stay, and it is something humans will have to adapt to.

Chapter 8: Global Environmental Issues

The planet can only support so many people before natural resources become depleted and can no longer meet human needs; this limit is called Earth's carrying capacity for humans. Many geographers and other scientists believe that humans have grown beyond Earth's carrying capacity, a concept called overshooting. In less developed countries, this has occurred because of population growth; in more developed countries, it has to do with our consumption of natural resources. A natural resource is something found within the natural environment that is accessible and economically valuable to humans, including food, water, soil, plants, animals, and minerals. However, most resources are not renewable, and humans are either consuming them faster than the planet can replenish them or, in the case of water and air, polluting them.

8.1 Depletion of Natural Resources

There are primarily two types of resources: energy and minerals. As noted, a natural resource only has "value" as long as humans need it. As it turns out, humans need more and more energy and mineral resources, resulting in increased costs: there has been a steady rise in the cost of petroleum, gold, copper, platinum, and titanium.

Throughout history, most of the world's energy came from animate power: the use of animals such as mules, oxen, and horses. Following the Industrial Revolution, however, most of the energy in Europe and the United States was used for machinery, powered by inanimate sources such as biofuels and fossil fuels. Currently, the most used energy source in less-developed nations is biofuel, such as wood, charcoal, and methane. In more developed and transitioning nations, fossil fuels have become the central source of energy.

Deforestation

The planet’s growing population has increased demands on natural resources, including forest products. Humans have been using trees for firewood, building homes, and making tools for millennia. Trees are a renewable resource, but deforestation occurs when they are removed faster than they can be replenished. Most people in rural areas in developing countries rely on firewood to cook their food. Many of these areas are experiencing a fast decline in the number of trees available. People living in mainly type B climates may not have access to many trees to start with; therefore, when trees are cut down for firewood or building materials, deforestation occurs. In the tropical areas, it is common for hardwood trees to be cut down for lumber to gain income or to clear the land for other agricultural purposes, such as cattle ranching. Countries that lack opportunities and advantages look to exploit their natural resources – in this case, trees – for either subsistence agriculture or economic gain. Deforestation has increased across the globe with a rapid rise in the worldwide population.

During the Industrial Revolution, European countries chopped down their forests at a rapid rate. Much of the British Isles was forested at one point, but today few forests remain there, and they are typically protected. Colonialism brought the Europeans to the Americas. The United States, in its early development, pushed west from the original thirteen colonies, and many old-growth forests were cut down in the process. As railroad tracks were laid and pioneer development pushed west into the Great Plains, where there were few trees, the great cutover occurred: a term indicating the systematic deforestation of the eastern and central forests. Michigan and Wisconsin, for example, saw their trees removed in this systematic way.

Some areas were allowed to grow back, but many other areas were turned into farmland. Few old-growth forests remain in the United States. Today there are conflicts over how the timber industry is handling the forests in places such as the Pacific Northwest region of the United States.

Countries that are better off economically no longer have to cut down their trees, but can afford to substitute other resources or import lumber from other places. Developing regions of the world in Latin America, Africa, and parts of Asia are experiencing severe problems with deforestation. Deforestation is widespread: Residents of Haiti have cut down about 99 percent of the country’s forests; most of the wood has been used as fuel to cook food. People in Afghanistan have cut down about 70 percent of their forests. Nigeria has lost about 80 percent of its old-growth forests since 1990. Ethiopia has lost up to 98 percent of its forested acreage, and the Philippines has lost about 80 percent of its forests.

Brazil’s Amazon basin has undergone many projects that have driven deforestation. For example, about half the state of Rondônia in western Brazil has been deforested since 1990. The countries of Central America have lost about half their original forests, and deforestation continues on a systematic basis. Tropical regions of Southeast Asia and Africa are being exploited for their timber at unsustainable rates, causing deforestation that the next generation will have to address. India, with over a billion people, still has a high demand for firewood and building materials; their forests are declining faster than they can be replanted. China, with its billion-plus population, has been attempting to address its deforestation problems by implementing a massive replanting program and conservation measures. Other countries are starting to adopt similar measures.

Tropical rain forests make up only about 5 percent of the earth's surface but contain up to 50 percent of the earth's biodiversity. These forests are cut down for a variety of reasons. Norman Myers, a British environmentalist, estimated that about 5 percent of deforestation in tropical regions is driven by cattle production, 19 percent by the timber industry, 22 percent by the expansion of plantation agriculture, and 54 percent by slash-and-burn farming. Most tropical rain forests are located in the Amazon basin of South America, in central Africa, and in Southeast Asia. All these areas are looking for advantages and opportunities to boost their economies; unfortunately, they often target their tropical rain forests as a revenue source.

Deforestation causes more than the loss of trees for fuel, building materials, paper products, or manufacturing. Another related issue in the deforestation equation is soil erosion. Without the trees to hold the soil during heavy rains, soils are eroded, leaving the ground in an unproductive state. In tropical areas, soils are often degraded and lack nutrients. Most of the nutrients in the tropical areas rest in decaying material at the base of the trees that supply energy back into the ecosystem. Once the trees are removed, there is little replenishing of this energy supply. Soil erosion in tropical areas makes it hard for forests to grow back once they have been removed. Landslides can be a more severe component of the soil erosion problem. After heavy rainfall, entire hillsides saturated with water can slide downward, causing severe structural damage to buildings, homes, and agricultural plots. Tree roots help hold hillsides together and therefore help prevent landslides.

Forests play an essential role in the water cycle. Trees pull up moisture with their roots from the soil and transpire it through their leaves back into the atmosphere. Moisture in the atmosphere collects into clouds, condenses, and falls back to Earth. Not only do trees store water, but the organic matter at the base of the trees also stores water and makes it available to the broader ecosystem, which may slow down water runoff. Forest canopies disperse water during rainfall and create another layer of moisture in their leaves and branches, which either is used by other organisms or evaporates back into the atmosphere. Deforestation eliminates the role that forests play in the water cycle.

Forest ecosystems provide for a diverse community of organisms. Tropical rain forests are one of the most vibrant ecosystems on the planet. Their abundant biodiversity can provide insight into untapped solutions for the future. Plants and organisms in these habitats may hold the key to medical or biological breakthroughs, but wildlife and vegetation will be lost as deforestation eliminates their habitat and accelerates the extinction of endangered species.

Trees and plants remove carbon dioxide from the atmosphere and store it in the plant structure through the process of photosynthesis. Carbon dioxide is a significant greenhouse gas that is a part of the climate change process. Carbon dioxide and other similar gases reduce the amount of long-wave radiation (heat) that escapes from the earth’s atmosphere, resulting in increased temperatures on the planet. As more carbon dioxide is emitted into the atmosphere, climate change occurs. The removal of trees through deforestation results in less carbon dioxide being removed from the atmosphere, which contributes to climate change. Slash-and-burn farming methods that burn forests release the carbon in the plant life directly into the atmosphere, increasing the climate change effect.

Fossil Fuels

Everything that is or was alive is made out of carbon. Millions of years ago when the planet was a lot warmer, plant life was quite abundant. Over geologic time, these carbon bodies were buried and ultimately converted to fossil fuels (i.e., coal, petroleum, and natural gas). When you fill your car up at the gas station, you are technically putting ancient plant life into your vehicle. When you drive off, that fuel is burned, and the ancient carbon is released into the environment in the form of carbon dioxide.

There are two concerns about fossil fuels. One is that the carbon dioxide released is a greenhouse gas, and the other is that fossil fuels are a finite resource. A natural resource is considered renewable if nature can reproduce it within a human lifetime, so energy sources such as solar, wind, and geothermal power are considered renewable. Fossil fuels are not considered renewable because the earth requires millions of years to replenish them. So ultimately humans will run out of fossil fuels; the question is when. In terms of coal, the world has well over 200 years' worth, but with petroleum the question becomes more complicated.
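
A reserves-to-production (R/P) ratio is the usual back-of-the-envelope way to turn "how much is left" into "how many years remain." The reserve and production figures below are hypothetical round numbers chosen only to illustrate the arithmetic; real estimates shift with consumption, prices, and discoveries.

```python
# Reserves-to-production (R/P) ratio: years remaining at current usage.
# Both figures are hypothetical round numbers for illustration.

reserves_tonnes = 1.0e12            # hypothetical proven coal reserves
production_tonnes_per_year = 5.0e9  # hypothetical annual coal production

years_remaining = reserves_tonnes / production_tonnes_per_year
print(f"R/P ratio: about {years_remaining:.0f} years of coal")
# The answer stretches or shrinks as production and recoverable reserves
# change, which is why statements like "well over 200 years" are
# necessarily approximate.
```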

Currently, there are over a trillion barrels of petroleum that we know of and can reach with current technology, called proven reserves. Potential reserves are petroleum resources not yet discovered. Currently, there is much concern about how many reserves of petroleum are left to discover. Today's technology is allowing the industry to find reserves deeper than ever before and to tap into petroleum in ways never possible before.

Uneven Distribution

Another global problem with fossil fuels is that they are not found uniformly around the planet. Coal forms in tropical regions with abundant vegetation and swamps: as the vegetation falls into oxygen-poor water, it is converted into carbon-based rock over geologic time. Because of plate tectonics, the slow movement of continents around the planet, most of today's mid-latitude countries such as China, Russia, and the United States were located near the equator 250 million years ago, which is why these countries now have abundant coal. Petroleum and natural gas form on the ocean floor under high pressure from overlying water and sediment. Some of these areas are still underwater, such as the Persian Gulf and the Gulf of Mexico; other regions, such as much of the Middle East, are no longer underwater.

Most of the world’s sources of fossil fuels exist in more developed countries, which has much helped in their development. Today the United States and China are the largest consumers of fossil fuels on the planet. In the 21st century, the demand for coal, petroleum, and natural gas will shift to less-developed nations as they move through the demographic transition model.

The majority of the world’s petroleum prices is determined by As noted earlier, mid-latitude countries such as the United States, Russia, and China have the most abundant supply of coal. In terms of petroleum, the mission of the Organization of Petroleum Exporting Countries (OPEC) “is to coordinate and unify the petroleum policies of its Member Countries and ensure the stabilization of oil markets in order to secure an efficient, economic and regular supply of petroleum to consumers, a steady income to producers and a fair return on capital for those investing in the petroleum industry.”

In the 1970s, there was a global energy crisis. It occurred when the Arab countries of OPEC were angered by European and United States support of Israel during the 1973 war with Egypt and Syria. The Arab OPEC members refused to supply oil to the United States, which immediately created a fuel shortage. During the 1980s and 1990s, prices of oil dropped dramatically, stimulating economies all around the world. After the fall of the Soviet Union, Russia struggled to survive as a modern society; however, starting in the late 1990s, Russia began exporting its petroleum and coal resources, and its political, economic, and military power grew substantially. Cheap fuel in the United States spurred the automotive industry to build large SUVs with low miles-per-gallon ratings. The mid-2000s, however, saw a sharp increase in fuel prices, with record prices in the summer of 2008. Afterward, SUV sales plummeted, raising the possibility that Ford and GM could go under.

Nonrenewable Substitutions

With the increase in oil prices in recent years, there has been a desire to find alternatives. There has been a sharp increase in natural gas vehicles because natural gas is cheaper and pollutes less than oil. However, the underlying economics of supply and demand suggest that as natural gas is used more (demand rises), its cost is likely to rise as well.

Since the world has enough coal to last hundreds of years, some have pushed for more coal burning. There are, however, several environmental concerns with coal. First, coal is the "dirtiest" fossil fuel in terms of air pollution. Burning coal releases vast amounts of sulfur, which creates acid rain, and mercury, which damages the nervous system. It also releases more carbon dioxide, a greenhouse gas, than any other fossil fuel. With the current concern over global warming, there has been much talk about carbon sequestration: the idea that if humans can capture carbon dioxide before it is released, we might be able to "lock" it deep within the earth and thus prevent it from contributing to global warming. However, the technology is far from proven.

The third source of nonrenewable energy is nuclear. Since Chernobyl in 1986 in the former Soviet Union and the Three Mile Island incident in the United States, our country has been very apprehensive about building new nuclear power plants. The benefit of nuclear power is that incredible amounts of energy can be generated without polluting the air. There are, however, serious concerns about potential accidents and the radioactive waste it generates. There has been a heated debate in the West over where to store radioactive waste. In Utah, there have been conversations about storing nuclear waste at the Goshute Indian Reservation as a short-term stop on the way to Yucca Mountain in Nevada; however, many in Utah believe that nuclear waste, which takes tens of thousands of years to decay, would never leave the state once stored there, even though Utah does not have a nuclear power plant. In Nevada, there is concern about the safety of storing nuclear waste in a mountain with nearby fault lines. Moreover, after the September 11 terrorist attacks, there is renewed concern about nuclear power plants becoming targets.

8.2 Environmental Pollution

Pollution of the environment occurs when humans contaminate the air, water, or land. Pollution can be broken down into two categories: primary and secondary. Primary pollution occurs when humans directly contaminate the earth in some manner; examples include mercury, sulfur, and even carbon dioxide. Secondary pollution happens when a primary pollutant reacts with another primary pollutant, sunlight, or water to create a different pollutant.

An example is acid rain. Sulfur dioxide is a primary pollutant, but when it reacts with precipitation it becomes a secondary pollutant called acid rain. One of the biggest problems with pollution is that those who pollute are usually not the ones affected by it; instead, those downwind are.

Air Pollution

The atmosphere is made of about 78 percent nitrogen, 21 percent oxygen, and small percentages of other trace components such as ozone, carbon dioxide, water vapor, and aerosols. Air pollution occurs when humans add unnatural substances to the atmosphere. Most industrial air pollution comes from burning coal, while automobiles pump vast amounts of ozone, carbon dioxide, and sulfur into the atmosphere. However, in the 1970s the United States created the Clean Air Act, which has dramatically improved the quality of our nation's air. Check out this video from National Geographic on the world's air quality.

Those who pollute are usually not the ones affected by it. Industrialization in eastern North America and eastern Europe has generated large-scale pollutants such as sulfur oxides and nitrogen oxides through the burning of fossil fuels. When these pollutants react with water, they form acid precipitation. Acid precipitation can cause large-scale damage to aquatic life and forests by sickening and killing vegetation; in forests, this can also lead to disease through pest infestation. Acid precipitation can also damage or destroy buildings and monuments made of marble, such as tombstones.

Ozone Hole

In the 1920s, humans developed a class of chemicals called chlorofluorocarbons (CFCs) for uses such as refrigeration and air conditioning. In the 1970s, however, two American scientists discovered that these CFCs were weakening the ozone layer. What they learned is that when CFCs reach the ozone layer, ultraviolet radiation from the sun breaks the chlorine off; a single chlorine atom can destroy over 100,000 ozone molecules and persist in the upper atmosphere for over 100 years. After much debate, the world came together and signed the Montreal Protocol in 1987 to phase out CFCs. Today, most industrialized countries have eliminated the use of CFCs, but the ozone hole is not expected to heal for another 50 to 100 years. Learn more about what is currently going on with the ozone hole at NASA's Ozone Hole Watch.

Water Pollution

Water is the most valuable resource on the planet, yet humans keep polluting it in various ways. Manufacturers use water to create and process goods and food. Farmers pollute vast amounts of water with fertilizer and with waste from pigs and cows in unhealthy feedlots. Coal power plants use water to extract and wash coal and to cool the steam used to make electricity. All of these processes, along with residential use, have negative impacts on water quality.

Water pollution can significantly harm aquatic life in rivers, lakes, and the ocean. Many of the fertilizers farmers apply, and the cleaners we use at home, can create algae blooms in our local rivers. When the algae die, they can remove the oxygen from the water, killing fish and other aquatic life. The resulting areas are called dead zones, and one of the biggest in the world is forming in the Gulf of Mexico because of pollution carried by the Mississippi River. Just like our air, the nation's water has dramatically improved since the 1970s because of the Clean Water Act.

8.3 Anthropogenic Climate Change

Weather and Climate

When it comes to defining climate, it is often said that "climate is what you expect; weather is what you get." That is to say, climate is the statistically averaged behavior of the weather. In reality, it is a bit more complicated, as climate involves not just the atmosphere but the behavior of the entire climate system: the complex system defined by the coupling of the atmosphere, oceans, ice sheets, and biosphere. Weather, by contrast, is the current condition of the atmosphere at a specific location and time.

Having defined climate, we can begin to define what climate change means. While the notion of climate is based on some statistical average of the behavior of the atmosphere and ocean, this typical behavior can change over time. That is to say, what you “expect” of the weather is not always the same. For example, during El Niño years, we expect it to be wetter in the winter in California and snowier in the southeastern U.S., and we expect fewer tropical storms to form in the Atlantic during the hurricane season. So, the climate itself varies over time.

If the climate is always changing, then is climate change by definition always occurring? Yes and no. A hundred million years ago, during the early part of the Cretaceous period, dinosaurs roamed a world that was almost certainly warmer than today; the geological evidence suggests, for example, that there was no ice even at the North and South poles. Climate change is a naturally occurring process of the planet, following a variety of different cycles. Today, however, something else is causing the planet to warm.

So, the significant climate changes in Earth’s geologic past were closely tied to changes in the greenhouse effect. Those changes were natural. The changes in greenhouse gas concentrations that scientists talk about today are, however, not natural. They are due to human activity.

The scientific consensus demonstrates that climate change in the 21st century is necessarily a human problem. People are causing climate change through their everyday actions and the socioeconomic forces underlying those actions. At the same time, people are feeling the consequences of climate change through various impacts on things they value, and through the responses they are making to address it.

Climate is the average of weather (typically precipitation and temperature) in a particular location over a long period, usually at least 30 years. A location's climate can be described by its air temperature, humidity, wind speed and direction, and the type, quantity, and frequency of precipitation. Climate can change, but only over long periods. The climate of a region depends on its position relative to the equator, the oceans, mountain ranges, and other geographic features.
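
The 30-year convention can be made concrete with a tiny sketch: a "climate normal" is just a long-run average that smooths out year-to-year weather. The temperatures below are synthetic stand-ins for a real station record.

```python
# Climate vs. weather in one calculation: the 30-year mean (climate)
# is stable even though individual years (weather) swing around.

import random

random.seed(42)
# Simulate 30 years of mean annual temperature (deg C) for one station:
# a steady climate near 11 deg C plus year-to-year weather noise.
annual_means = [11.0 + random.gauss(0, 0.8) for _ in range(30)]

climate_normal = sum(annual_means) / len(annual_means)
print(f"Warmest single year (weather): {max(annual_means):.1f} deg C")
print(f"30-year average (climate):     {climate_normal:.1f} deg C")
```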

Scientific Consensus

The scientific consensus is clear: 97 percent of all scientists who directly study climates and climate change believe that the current warming of the planet is anthropogenic (human-caused) in nature. Moreover, all of the scientific evidence and planetary vital signs indicate that more greenhouse gases are trapping Earth's heat, causing average annual global temperatures to rise. While temperatures have risen since the end of the Pleistocene 10,000 years ago, the rate of increase has been more rapid in the past century and has risen even faster since 1990. The nine warmest years on record have all occurred since 1998, and NASA and NOAA reported in 2019 that 2018 was the fourth warmest year ever recorded on the planet. The decade 2010-2020 is predicted to be the warmest yet, followed by 2000-2010.

The United States has long been the largest emitter of greenhouse gases, with about 20 percent of total emissions. As a result of China’s rapid economic growth, its emissions surpassed those of the United States in 2008. However, it is also essential to keep in mind that the United States has only about one-fifth the population of China. What is the significance of this? The average United States citizen produces far more greenhouse gases than the average Chinese person.

Predicted Future Warming

Climate change can be a naturally occurring process and has created environments much warmer than today's, such as during the early Cretaceous period. During this time, life thrived even in regions, such as the interior of Antarctica, that are uninhabitable today.

One misconception is that the threat of climate change has to do with the absolute warmth of the Earth. That is not, in fact, the case. It is, instead, the rate of change that has scientists concerned. Living things, including humans, can adapt to substantial changes in climate as long as the changes take place slowly, over many thousands of years or longer. Adapting to changes that are taking place on timescales of decades is far more challenging, and the planet is warming at such a rate that most species, especially mammals, will struggle to adapt and evolve quickly enough to the coming warmer climates.

The natural increase in atmospheric carbon dioxide that led to the thaw after the last Ice Age was an increase from 180 parts per million (ppm) to about 280 ppm. This was a smaller increase than the present-time increase due to human activities, such as fossil fuel burning, which thus far have raised CO2 levels from the pre-industrial value of 280 ppm to a current level of over 410 ppm – a level which is increasing by 2 ppm every year. So, arguably, if the dawn of industrialization had occurred 18,000 years ago, we may very likely have sent the climate from an ice age into the modern pre-industrial state.
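
The carbon dioxide figures in that paragraph make for a revealing bit of arithmetic, using only the numbers quoted above:

```python
# CO2 arithmetic using the figures from the text (all in ppm).

glacial, preindustrial, current = 180, 280, 410
rate = 2                                   # ppm added per year, from the text

natural_rise = preindustrial - glacial     # 100 ppm, over thousands of years
industrial_rise = current - preindustrial  # 130 ppm, in roughly 150 years

print(f"Ice age to pre-industrial: +{natural_rise} ppm")
print(f"Pre-industrial to today:   +{industrial_rise} ppm")
print(f"Years to add another +{natural_rise} ppm at current rates: "
      f"about {natural_rise // rate}")     # about 50 years
# Industrial emissions have already exceeded, in about 150 years, a rise
# that naturally unfolded over thousands, and could repeat it in ~50 more.
```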

How long it would have taken to melt all of the ice is not precisely known, but it is conceivable it could have happened over a period as short as two centuries. The area ultimately flooded would be considerably larger than that currently projected to flood due to the human-caused rise in carbon dioxide that has taken place so far. Science Insider has a video on what the planet would look like today if all the glaciers melted.

By some measures, human interference with the climate back then, had it been possible, would have been even more disruptive than the current interference with our climate. That interference would merely be raising global mean temperatures from those of the last Ice Age to those that prevailed in modern times before industrialization. What this thought experiment tells us is that the issue is not whether some particular climate is objectively “optimal.” The issue is that human civilization, natural ecosystems, and our environment are heavily adapted to a particular climate — in our case, the current climate. Rapid departures from that climate would likely exceed the adaptive capacity that we and other living things possess, and cause significant consequent disruption in our world.

Carbon dioxide levels will continue to rise in the decades to come, but the impacts will not be evenly distributed across the planet. Some impacts will depend on environmental and climate factors; others will depend on whether a country is developed or developing. Scientists use sophisticated computer models to predict the effects of greenhouse gas increases on climate systems globally and for specific regions of the world.

If nothing is done to control greenhouse gas emissions, and they continue to increase at current rates, the surface temperature of the Earth can be expected to increase between 0.5 degrees C and 2.0 degrees C (0.9 degrees F and 3.6 degrees F) by 2050 and between 2 degrees and 4.5 degrees C (3.5 degrees and 8 degrees F) by 2100, with carbon dioxide levels over 800 parts per million (ppm). On the other hand, if severe limits on carbon dioxide emissions begin soon, temperatures could rise less than 1.1 degrees C (2 degrees F) by 2100.

Whatever the temperature increase, it will not be uniform around the globe. A global average rise of 2.8 degrees C (5 degrees F) might mean a rise of only 0.6 to 1.2 degrees C (1 to 2 degrees F) at the equator, but up to 6.7 degrees C (12 degrees F) at the poles. So far, global warming has affected the North Pole more than the South Pole, but temperatures are increasing in Antarctica as well.

Effects of Anthropogenic Climate Change

There are a variety of possible and likely effects of climate change on human and natural environments. NASA has compiled a list of some of those potential effects. NASA also maintains a website called the Climate Time Machine, which helps visualize Earth’s key climate indicators and how they are changing over time.

Species Mating and Migration

The timing of events for species is changing. Mating and migrations take place earlier in the spring months, and species that are more mobile are migrating uphill. Some regions that were already marginal for agriculture are no longer farmable because they have become too warm or dry.

Melting Snowpack and Glaciers

Decreased snowpacks, shrinking glaciers, and the earlier arrival of spring will all lessen the amount of water available in some regions of the world, including the western United States and much of Asia. Ice will continue to melt, and sea level is predicted to rise 18 to 97 cm (7 to 38 inches) by 2100. An increase this large will gradually flood coastal regions where about one-third of the world’s population lives, forcing millions of people to move inland.


Oceans and Rising Sea Levels

As greenhouse gases increase, changes will be more extreme. Oceans will become slightly more acidic, making it more difficult for creatures with carbonate shells to grow, and that includes coral reefs. A study monitoring ocean acidity in the Pacific Northwest found ocean acidity increasing ten times faster than expected and 10 percent to 20 percent of shellfish (mussels) being replaced by acid-tolerant algae.

Plant and animal species seeking cooler temperatures will need to move poleward 100 to 150 km (60 to 90 miles) or upward 150 m (about 500 feet) for each 1.0 degree C (1.8 degrees F) rise in global temperature. There will be a tremendous loss of biodiversity because forest species cannot migrate that rapidly. Biologists have already documented the extinction of high-altitude species that have nowhere higher to go.
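
Because the required shift scales linearly with warming, the displacement for any scenario is simple to compute. Below is a minimal sketch using the per-degree figures quoted above; the function name and the 3-degree example scenario are our own illustration.

```python
# Illustrative sketch: habitat shift required to track a cooler climate,
# using the per-degree figures quoted in the text (100-150 km poleward or
# about 150 m upslope per 1.0 degree C of warming).

KM_POLEWARD_PER_DEG_C = (100, 150)  # low and high estimates
M_UPSLOPE_PER_DEG_C = 150

def required_shift(warming_deg_c: float) -> tuple:
    """Return (min km poleward, max km poleward, m upslope) for a given warming."""
    lo, hi = KM_POLEWARD_PER_DEG_C
    return (lo * warming_deg_c, hi * warming_deg_c, M_UPSLOPE_PER_DEG_C * warming_deg_c)

# Example: 3 degrees C of warming implies a 300-450 km poleward shift,
# or roughly 450 m upslope, faster than most forest species can migrate.
print(required_shift(3.0))  # (300.0, 450.0, 450.0)
```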

One may notice that the numerical predictions above contain wide ranges. Sea level, for example, is expected to rise somewhere between 18 and 97 centimeters by 2100. The reason for this uncertainty is partly that scientists cannot predict precisely how the Earth will respond to increased levels of greenhouse gases. How quickly greenhouse gases continue to build up in the atmosphere also depends in part on the choices we make.

Extreme Weather

Weather will become more extreme, with more heatwaves and droughts. Some modelers predict that the Midwestern United States will become too dry to support agriculture and that Canada will become the new breadbasket. In all, about 10% to 50% of current cropland worldwide may become unusable if CO2 doubles. Global monitoring systems help track potential droughts that could turn into famines if they occur in politically and socially unstable regions of the world and if appropriate action is not taken in time. One example is the Famine Early Warning Systems Network (FEWS NET), a network of social and environmental scientists using geospatial technology to monitor these situations. However, even with proper monitoring, if nations do not act, catastrophes can occur, as in Somalia from 2010 to 2012.

Although scientists do not all agree, hurricanes are likely to become more severe and possibly more frequent. Tropical and subtropical insects will expand their ranges, resulting in the spread of tropical diseases such as malaria, encephalitis, yellow fever, and dengue fever.

An important question people ask is this: Are the increases in global temperature natural? In other words, can natural variations in temperature account for the increase in temperature that we see? The scientific data show that they cannot. Changes in the Sun’s irradiance, El Niño and La Niña cycles, natural changes in greenhouse gases, plate tectonics, and the Milankovitch cycles cannot account for the increase in temperature that has already occurred in recent decades.

In December 2013 and April 2014, the Intergovernmental Panel on Climate Change (IPCC) released a series of sobering reports covering not only the current scientific knowledge of climate change but also the vulnerability of humans and ecosystems and the impacts upon them.

8.4 Renewable Resources

Humans cannot sustain the path we have been traveling, given our consumption of resources and a global population expected to reach 9 billion by 2050. We need to learn how to live differently without decreasing our quality of life. One possibility is to move toward a renewable energy economy. The following sections describe the major types of renewable energy.

Biomass

Biomass energy comes from burning vegetation as a fuel source. Many argue that this is not a viable option for human energy consumption: burning biomass releases large amounts of carbon dioxide into the atmosphere and can require the destruction of ecosystems through deforestation. There has also been a recent push for ethanol as a “green” source of energy. In the United States, corn has been used and subsidized to make ethanol, and one effect has been a spiraling rise in the cost of corn-based food. Many also argue that humans should not use food for fuel at a time when we are consuming more food than we are producing. In Brazil, sugarcane is used to produce ethanol, and because there is much money to be made in the ethanol industry, parts of the Amazon rainforest are being cut down to grow more sugarcane for the energy economy. So it can be argued that ethanol is not “green” energy if it requires deforesting rainforests and causes food prices to rise.

Hydroelectric Power

Hydroelectric power is also questionable as an energy source, even though it is renewable. It requires building dams so that flowing water can turn turbines to generate electricity, and there are numerous problems with this approach. Creating a reservoir floods usable and often fertile land, and over time the reservoir can fill up as sediment is deposited in it. Dams can also harm aquatic wildlife such as salmon because they prevent the fish from returning to their spawning grounds; several northwestern states in America have dismantled dams because salmon populations are near extinction. However, it must be said that hydroelectric power is “clean” energy in that it does not pollute the air or water.

Wind Power and Geothermal Energy

Windmills have been around for hundreds of years, but only recently have they been used to generate electricity. Until recently, wind power was the fastest-growing energy source in the world, and with the rising costs of fossil fuels, wind power is now often cheaper to produce than energy from fossil fuels. Farmers are getting on board with wind power because power companies will rent space from them to place the windmills, providing a steady income, while the farmers can still grow their crops or graze their cattle and maintain their way of life. Wind generation works much like a dam: the wind turns the blades, which turn a turbine within the windmill to generate electricity.

There are a few concerns with wind power, however. Some do not like how windmills look, because they must stand out in the open, whereas coal power plants are easier to hide behind mountains. There is also concern that windmills can harm migratory birds and bats, although wildlife is more likely to be hurt by a changing climate than by windmills. Europe has been serious about wind power. Some of the turbines along the continental shelf, where the winds are steadily consistent, are so giant that a helicopter can land on them; each blade is over 300 feet long (said another way, each blade is taller than the Statue of Liberty). The United States is still far behind Europe, but that is starting to change. It is now possible to purchase wind power from various energy companies. The most extensive wind power program in the United States is the Blue Skies Program by Rocky Mountain Power.

The Earth’s interior is still extremely hot from Earth’s formation. A technique now being implemented is to use water and the internal heat of the earth to produce steam, which can turn turbines to generate electricity. It requires using existing groundwater, or pumping groundwater into the earth, so the heat can turn the water into steam. Iceland, a volcanic island, operates geothermal plants and plans to use the heat from its volcanoes to power the entire country.

Solar Energy

With the sun still having 5 billion years of life ahead of it, our star is the ultimate renewable energy source. There are two types of solar energy: passive and active. Passive solar energy requires no special devices; rather, south-facing windows and dark surfaces are used to light and heat buildings. This is a very inexpensive alternative, and, surprisingly, it is not used more often. Active solar energy captures heat and generates electricity using photovoltaic cells in solar panels. The panels’ cells are made from silicon, the second most abundant element in Earth’s crust, which becomes sensitive to sunlight when combined with other materials – a phenomenon called the photovoltaic effect. The electrons within the cells move through the silicon and produce an electrical current. In 2008, solar panels surpassed windmills as the fastest-growing energy source in the world.

Recycling

There has also been a steady push to recycle rather than throw products into our landfills. Recycling is not only about saving landfill space; it is about water, natural resources, and energy. It requires less energy, water, and natural resources to re-create something from recycled material than to mine and process raw material. Take a soda can. How long do you keep a soda can once you open it? It may take up to three years for the material to be mined from the mountain, processed, shipped, filled with soda, and shipped to you. This requires far more energy than we typically consider, so learning to recycle products saves much more than landfill space.

There are now a variety of ways to recycle. Many cities around the nation have curbside recycling, and there are also numerous drop-off sites, often found at retail and grocery stores. Buy-back centers are commercial businesses that purchase recyclable goods. However, it is important to note that what can be recycled varies by recycling company, so citizens must learn what products can be recycled in their geographic area.

Chapter 7: Rural and Urban Landscapes

Enduring Understandings

  • The form, function, and size of urban settlements are continually changing.
  • Models help to understand the distribution and size of cities.
  • Models of internal city structure and urban development provide a framework for urban analysis.
  • Built landscapes and social space reflect the attitudes and values of a population.
  • Urban areas face economic, social, political, cultural and environmental challenges.

7.1 Defining Cities and Urban Centers

Cities and Metropolises

Most of us are “city people,” whether we like it or not. Many people say they do not like the city, with its noise, pollution, crowds, and crime, but living outside the city has its challenges as well. Living outside a city is inconvenient because rural areas lack access to the numerous amenities found in cities. The clustering of activities within a small space is called agglomeration, and it reduces the friction of distance for thousands of daily activities. Cities are convenient places for people to live, work, and play. Convenience has economic consequences as well: reduced transportation costs and the ability to share expenses for infrastructure create what are known as economies of agglomeration, which are the fundamental reason for cities. The convenience and economic benefits of city life have led nearly 8 in 10 Americans to live in urban areas. In California, America’s most urban state, almost 95% of its people live in a city. This chapter explores the evolution of cities, why cities are where they are, and how the geography of cities affects the way urbanites live.

Though it seems simple enough, distinguishing cities from rural areas is not always easy. Countries around the world have generated a plethora of definitions based on a variety of urban characteristics, partly because defining what constitutes urban is somewhat arbitrary. Cities are also hard to identify because they look and function quite differently in different parts of the world. Complicating matters is the great variety of terms we use to label a group of people living together. Hamlets are tiny, rural communities. Villages are slightly larger. Towns are larger than villages. Cities are larger than towns. Then there are words like metropolis and even megalopolis to denote huge cities. Some states in the United States have legal definitions for these terms, but most do not. The United States Census Bureau creates the only consistent definition of “city,” and it uses the terms “rural” and “urban” to distinguish cities from non-city regions. This definition has been updated several times since the 1800s, most radically in recent years as the power of GIS has allowed geographers working for the U.S. Census Bureau to consider multiple factors simultaneously. The definition can get involved.

For decades, the U.S. Census Bureau recognized an area as “urban” if it had incorporated itself as a city or a town. Incorporation indicates that a group of residents successfully filed a town charter with their local state government, giving them the right to govern themselves within a specific space within the state. Until recently, the U.S. Census Bureau classified almost any incorporated area with at least 2,500 people as “urban.”

There were problems, though, with that simple definition. Some areas with quite large populations were unincorporated and so failed to meet the old definition of urban. For example, Honolulu, Hawaii, and Arlington, Virginia, are not incorporated; therefore, they were technically labeled “census-designated places” rather than cities. Conversely, some incorporated areas may have very few people. This can happen when a city loses population, or when the boundaries of a city extend far beyond its populated core. Jacksonville, Florida, is the classic example of this problem. Jacksonville annexed so much territory that its city limits extend far into the adjacent countryside, making it the largest city in land area in the United States (874.3 square miles!).

Therefore, the Census Bureau created a complex set of criteria capable of evaluating a variety of conditions that define any location as urban or rural. Among the criteria now used by the Census is a minimum population density of 1,000 people per square mile, regardless of whether the location is incorporated or not. Additionally, territory with non-residential but still urban land uses is included, so areas with factories, businesses, or a large airport that contain few residences are still counted as part of a city. The Census uses a measure of surface imperviousness to help make such decisions, which means that even a parking lot may be a factor in classifying a place as urban. Finally, the Census classifies a location that is reasonably close to an urban region as urban if it has a population density of at least 500 persons per square mile. That way, small breaks in the continuity of built-up areas do not result in the creation of multiple urban areas, but instead form a single, contiguous urban region. Therefore, people in the suburbs within five miles of the border of a larger city are counted by the Census as residents of the urban region associated with that central city.
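
To make the logic of these criteria concrete, here is a simplified sketch of the density rules just described. Only the 1,000 and 500 persons-per-square-mile thresholds come from the text; the function, parameter names, and structure are hypothetical simplifications, not the Census Bureau’s actual multi-factor algorithm.

```python
# Simplified, hypothetical sketch of the density-based logic described above.
# The 1,000 and 500 persons/sq mi thresholds come from the text; everything
# else (names, structure) is illustrative, not the Census Bureau's algorithm.

def classify_area(density_per_sq_mi: float,
                  adjacent_to_urban_core: bool = False,
                  nonresidential_urban_use: bool = False) -> str:
    """Classify a small area as 'urban' or 'rural' under the simplified rules."""
    if density_per_sq_mi >= 1000:
        return "urban"      # dense enough on its own
    if nonresidential_urban_use:
        return "urban"      # e.g., factories, airports, large parking lots
    if adjacent_to_urban_core and density_per_sq_mi >= 500:
        return "urban"      # fills small gaps near an existing urban core
    return "rural"

print(classify_area(1200))                               # urban
print(classify_area(600, adjacent_to_urban_core=True))   # urban
print(classify_area(600))                                # rural
```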

City Push and Pull Factors

Cities began to form many thousands of years ago, but there is little agreement regarding why they formed. The chances are that many different factors are responsible for the rise of cities, with some cities owing their existence to multiple factors and others arising from more specific conditions.

Two underlying causal forces contribute to the rise of cities. Site location factors are elements of a place itself that favor the growth of a city there. Site factors include things like the availability of water, food, good soils, a quality harbor, and characteristics that make a location easy to defend from attack. Situation factors are external elements that favor the growth of a city, such as distance to other cities or a central location. For example, the exceptional distance invading armies have had to travel to reach Moscow, Russia, has helped the city survive many wars. Most large cities have good site and situation factors.

Indeed, the earliest incarnations of cities offered residents a measure of protection against violence from outside groups for thousands of years. Families farming or ranching in isolated rural areas were vulnerable to attack. Small villages could offer limited protection, but larger cities, especially those with moats, high walls, professional soldiers, and advanced weaponry, were safer.

The safest places were cities with quality defensible site locations, and many of Europe’s oldest cities were founded on defensible sites. The European feudal system was built upon an arrangement whereby the local lord, duke, or king supplied protection to local rural peasants in exchange for food and taxes. Paris and Montreal, for example, were founded on defensible island sites. Athens was built upon a defensible hillside, called an acropolis; the Athenian Acropolis is so famous that it is called simply The Acropolis. Moscow, Russia, on the other hand, takes advantage of its remote situation. Both Napoleon and Hitler found out the hard way the challenges associated with attacking Moscow.

In the United States, the Atlantic and Pacific Oceans have primarily functioned as America’s defensive barriers, and therefore few cities are located on defensive sites. Washington, D.C., has no natural defense-related site or situation advantages. On the only occasion the U.S. was invaded, during the War of 1812, the city was overrun by the British, and the White House and the Capitol were burned to the ground. The poor defensibility of the American capital led to numerous calls for its relocation to a more defensible site during the 1800s. This is partly why so many state capitol buildings in the Midwest closely resemble the U.S. Capitol building in Washington, D.C.; many states were trying to lure the seat of the Federal government to their state capital.

San Francisco is the best example of a large American city founded upon the basis of its defensibility. Located on a peninsula between the Pacific Ocean and a large bay, San Francisco was established where it is because of the military advantage provided by that site. San Francisco boasts two kinds of defensible site advantages. It is both a peninsula site and a sheltered harbor site. Cannons positioned on either side of the Golden Gate could fire upon any enemy ships trying to pass into the San Francisco Bay. Armies coming northward up the peninsula would be forced into a handful of narrow passes where the Spanish Army could focus their defenses. These site advantages led the Spanish to establish the fort, El Presidio Real de San Francisco, there in 1776. The U.S. Army took control of the fort in 1846, and it remained a military base until 1994.

People who possess a specific skill set can also become a site factor that significantly affects the location and growth of a city. One specialized skill set was confined to the priestly class, and proximity to religious leaders is another probable reason for the formation of cities. Priests and shamans would have likely gathered the faithful near to them so that, like the armies of the lordly class, they could offer protection and guidance in return for food, shelter, and compensation (like tithes). The priestly class was also the primary vessel of knowledge and of the tools of knowledge, like writing and science (astronomy, planting calendars, and medicine, for example), so a cadre of assistants in those affairs would have been necessary. Mecca and Jerusalem are probably the best examples of holy cities, but others dot the landscape of the world. Rome existed before the Catholic faith, but it assuredly grew and prospered as a result of becoming the headquarters of Christianity for hundreds of years.

Cities may have evolved as small trading posts where local farmers and wandering nomads exchanged agricultural and craft goods. The surplus wealth generated through trade required protection and fortifications, so cities with walls may have been built to protect marketplaces and vendors. Some trace the birth of London to an ancestral trading spot called Kingston upon Thames, a market town founded by the Saxons southwest of London’s present core. The place-names of many ancient towns in England reveal their original function – Market Drayton, Market Harborough, Market Deeping, Market Weighton, Chipping Norton, Chipping Ongar, and Chipping Sodbury. “Chipping” is a derivation of a Saxon word meaning “to buy.”

Throughout history, cities big and small have served market functions for those who live in adjacent hinterlands. Some market cities grow much larger than others because they are more centrally located; central location relative to other competing marketplaces is another example of an ideal situation factor. Large cities have excellent site and situation characteristics. Every major U.S. city, including New York, Chicago, Los Angeles, Atlanta, and Houston, is located ideally for commerce and industry.

Some cities grow large because of specific site location advantages that favor trade or industry. All cities compete against one another to attract industry, but only those with quality site factors, like excellent port facilities and varied transportation options, grow large. Cities ideally located between significant markets for exports and imports have unique situation factor advantages versus other competing cities and will grow most.

Most large cities in the United States emerged where two or more modes of transportation intersect, forming what geographers call a break of bulk point. Breaking bulk happens whenever cargo is unloaded from a ship, truck, barge, or train. Until the 1970s, unloading (and reloading) freight required a vast number of laborers, and therefore any city that had a busy dock, port, or station attracted workers. Los Angeles, Chicago, New Orleans, and Houston all grew very large because each was well served by multiple transportation modes.

New York City is the largest city in the United States, but it was not always that way. It outgrew competitors on the East Coast because of specific transportation advantages. Early on, Boston and Philadelphia were larger, but New York City’s break of bulk advantages helped it immensely. Key among the factors helping New York out-compete its rivals were its additional transportation options. First, it had a port on the Atlantic Ocean. Second, it had the navigable Hudson River, which served inland cities far from the ocean via riverboat and barge. Then, in 1825, the Erie Canal opened, effectively connecting the Atlantic Ocean with Lake Erie and all the markets of the Great Lakes Region via New York City. The canal was a massive advantage. With its opening, agricultural products coming from the Midwest could be transported across the Great Lakes and the Erie Canal to New York City, where they were off-loaded from riverboats onto ocean-going ships headed for Europe. Simultaneously, goods coming from Europe and destined for any location in the Midwest had to be unloaded at the port in New York City. The additional jobs working at docks and warehouses attracted other industries, and a snowball effect was achieved by the mid-1850s that made New York City, for a time, the largest city in the world.

With all of this in mind, it is possible to develop a view of cities based on innovations and diffusions of technology. This is what the geographer John R. Borchert did during the 1960s. Borchert developed a view of the urbanization of the United States based on epochs of technology: as the dominant technologies wax and wane, the urban landscape undergoes dramatic changes.

  • Stage 1: Sail-Wagon Epoch (1790–1830); the only means of international trade was sailing ships. Once goods were on land, they were hauled by wagon to their final destination.
  • Stage 2: Iron Horse Epoch (1830–70); characterized by the impact of steam engine technology, and development of steamboats and regional railroad networks.
  • Stage 3: Steel Rail Epoch (1870–1920); dominated by the development of long-haul railroads and a national railroad network.
  • Stage 4: Auto-Air-Amenity Epoch (1920–70); with growth in the gasoline combustion engine.
  • Stage 5: Satellite-Electronic-Jet Propulsion (1970–?), also called the High-Technology Epoch. This stage has continued to the present day as both transportation and technology improve.

Rivers have also played an essential role in the establishment of cities; most cities were established along rivers of some sort. Rivers provide fresh water for drinking (and irrigation), but the effect navigable rivers have had on urban growth is hard to overstate. Before the age of trains and highways, rivers were by far the most efficient way to transport heavy cargo, especially over long distances. Interestingly, interruptions to river navigation were most often responsible for creating conditions that attracted settlement and favored growth. Waterfalls were for many years a complete nuisance to river traffic, but they are also responsible for several cities. Not only do waterfalls provide a source of power for industry (see fall line cities below), but they also create a special kind of break of bulk point called a head of navigation. At a waterfall, people had to stop, get out of their boats, and carry the boat and their cargo around the falls. Louisville, Kentucky, is an excellent example of a head of navigation site because it arose next to the Falls of the Ohio. There the Ohio River tumbled over a waterfall, forcing all boats to stop and break bulk, again providing jobs at the boat dock and in warehouses, and encouraging manufacturing.

The process of carrying boats and/or cargo between two navigable stretches of a river (or to another river) is called portage. Towns evolved where critical portage zones arose. Indiana, New York, Ohio, Wisconsin, Michigan, and Maine all have municipalities named “Portage,” but the most important portage zone in the United States appeared in Chicago, Illinois. Just southwest of what is now downtown Chicago, near Midway Airport, was a portage zone where the Chicago River, which flows north into Lake Michigan, nearly intersected the Des Plaines River, which flows southward into the Mississippi River system. Around 1850, the people of Chicago built a canal connecting America’s two most significant navigable water systems, and by doing so gave Chicago an enormous transportation advantage over other locations in the Midwest.

Business people value break of bulk points because they offer opportunities for warehousing and manufacturing. Those industries not only attract migrants seeking work, but also additional transportation modes, which in turn create even more jobs. For example, the completion of the Illinois-Michigan Canal in 1848 made Chicago an especially attractive terminus for the multiple railroad companies that sprang up in the 1850s. It took Chicago just over 30 years to grow from the 100th most populous American city to the number two spot. Later still, interstate highways and airline routes also converged on Chicago.

Rivers also create chokepoints for the movement of goods and people traveling by land. Rivers are difficult to cross in many locations because either the water is too deep or the river is too wide. In such places, before bridges were standard, those trying to cross a river would seek out a ford, a shallow place to cross the river without a boat. City names like Stratford, Oxford, and Frankfurt all contain clues that they were once good places to cross a river. These fording sites were often simultaneously ideal locations for bridge construction because engineering a bridge across a shallow part of a wide river is simpler at a ford. Bridges funnel overland traffic to specific points and provide another break of bulk opportunity, especially if the river is navigable.

Sometimes two rivers merge into a single, more massive river at a confluence site, creating yet another unique opportunity to gain an advantage over competitors. Pittsburgh, Pennsylvania, lies at America’s best-known confluence site. The steel industry thrived in Pittsburgh for over 100 years thanks in large part to the industrial advantages created by its location.

Los Angeles (L.A.) is the great metropolis on the west coast of the United States. The Spanish chose a location near what is now downtown L.A. for a pueblo (town) because they found fertile soil and a consistent source of water there alongside a large population of Indians that they hoped would form the core of a vibrant Spanish colony. As the years went by, Los Angeles’ only significant advantage over potential competitors in Southern California was its river. Spanish water law declared all the water in the L.A. River belonged to the people of Los Angeles. This law prevented other towns from forming either upstream or downstream from the original pueblo. People living along the L.A. River and hoping to use its precious waters were forced by Los Angelenos to become part of L.A.

Los Angeles remained a small town until the Santa Fe/Southern Pacific Railroad opened a second transcontinental railroad terminus in L.A. in 1881. Not long afterward, the local port facilities at San Pedro were upgraded, and L.A. began competing with San Francisco for business. With the invention of refrigerated boxcars and the discovery of oil in the region, L.A. grew rapidly. Good weather helped encourage migrants to journey westward to take jobs in the petroleum and citrus industries, and the same great weather helped attract the movie and aeronautical industries decades later. Water resources, though, have remained a problem. The Los Angeles River was never sufficient to serve the needs of a large city, so a series of canals and pipelines have been constructed over the years to bring fresh water from vast distances into the Los Angeles region.

Sanctuary Cities

Sanctuary policies are common in many large cities across the United States. Sanctuary cities are jurisdictions with local policies that prevent or limit cooperation between local law enforcement and federal immigration enforcement officials. According to the Center for Immigration Studies, there were roughly 300 sanctuary cities across the nation. The National Conference of State Legislatures states that 12 states and the District of Columbia have laws allowing undocumented immigrants to obtain a driver’s license.

There is no single definition of what a sanctuary city is, and they are controversial from both liberal and conservative perspectives. Some who support sanctuary cities believe undocumented immigrants provide a workforce that fills employment gaps major cities need. Others believe the immigration system is broken, so sanctuary cities provide places of refuge for law-abiding undocumented immigrants. Those against sanctuary cities, by contrast, see them as locations violating federal law by intentionally employing undocumented immigrants.

Because of the separation of power between local, state, and federal governments, as noted in the 10th Amendment, neither Congress nor the President can stop sanctuary cities. But Congress and the President can block federal funding to these cities.

Understanding Distribution and City Size

Under very unusual circumstances, one might find that among a group of cities, no single city has unique site location advantages over others. This might happen out on a vast plain, like in Kansas, where there are no navigable rivers, waterfalls, or ports. In instances like this, situation advantages come to the fore, and a regular, geometric pattern of cities may emerge. This process was more pronounced when transportation was primitive, and the friction of distance was considerable, but it can still be witnessed by picking up a map of almost any flat region of the earth. Geographer Walter Christaller noticed the pattern and developed the Central Place Theory to explain the pattern and the logic driving it forward.

According to Christaller, if a group of people (like farmers) diffuses evenly across a plain (as settlers did when Kansas opened for homesteaders), a predictable hierarchy of villages, towns, and cities will emerge. The driving force behind this pattern is the basic need everyone has to go shopping for goods and services. Naturally, people prefer to travel less to acquire what they need. The maximum distance people will travel for a good or service is called the range of that good or service. Goods like a hammer have a short range because people will not travel far to buy a hammer. A tractor, because it is an expensive item, has a much greater range: the cost of getting to a tractor dealership is small relative to the value of the tractor itself, so farmers will travel long distances to buy the one they want. Hospital services have even greater ranges. People might travel to the moon if a cure for a deadly disease were available there.

Each merchant and service provider also requires a minimum number of regular customers to stay in business. Christaller called this number the threshold population. A major-league sports franchise has a threshold population of probably around a million people, most of whom must live within that team’s range. There are only 30 Major League Baseball teams in the United States, and the team with the smallest market (the Milwaukee Brewers) serves a population of about 2 million people. An ordinary Wal-Mart store probably has a threshold of about 20,000 people, so such stores are far more numerous. Starbucks coffee shops probably have a threshold of about 5,000 people or less, because there are so many locations.
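
Range and threshold can be combined into a simple viability test: a merchant survives at a location only if the population living within the range of the good meets the threshold. The back-of-the-envelope sketch below assumes customers are spread evenly across a featureless plain; all names and example numbers are our own illustration, not Christaller’s.

```python
import math

# Back-of-the-envelope sketch of the range/threshold logic described above,
# assuming customers are spread evenly across a featureless plain.

def is_viable(threshold: int, range_km: float, density_per_sq_km: float) -> bool:
    """A service survives only if the population within its range meets its threshold."""
    population_in_range = math.pi * range_km**2 * density_per_sq_km
    return population_in_range >= threshold

# On a sparse plain (5 people/sq km), a store needing 20,000 customers
# within 30 km fails; a low-threshold service such as a gas station survives.
print(is_viable(threshold=20_000, range_km=30, density_per_sq_km=5))  # False
print(is_viable(threshold=2_000, range_km=30, density_per_sq_km=5))   # True
```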

When customers and merchants living and working on a featureless plain interact over time, some villages will attract more merchants (and customers) and grow into towns or even cities. Some communities will not be able to attract or retain merchants, and they will not grow. Competition between towns on this plain prevents nearby locations from growing simultaneously. As a result, centrally located villages tend to grow into towns at the expense of their neighbors. A network of centrally located towns will emerge, and among these towns, only a few will grow into cities. One very centrally located city may evolve into a much larger city.

The largest cities will have businesses and functions that require significant thresholds (like major league sports teams or highly specialized boutiques). People in villages and small towns can access only the most essential goods and services locally (like gas stations or convenience stores) and are forced to travel to larger cities to buy higher-order products and services. Goods and services not available in the nearest large city (the regional service center) require customers to travel further still. Some goods and services are available only at the top of the urban hierarchy, the mega-cities. In the United States, a handful of cities (New York, Los Angeles, Chicago, and Dallas) may offer exceptionally high-order goods unavailable in other large cities like Cleveland, Seattle, or Atlanta.

Geographer Mark Jefferson developed the law of the primate city to explain the phenomenon of huge cities that capture a disproportionately large share of a country’s population as well as its economic activity. These primate cities are often, but not always, the capital cities of a country. An excellent example of a primate city is Paris, which truly represents and serves as the focus of France. Primate cities dominate their country in influence and are the national focal point. Their sheer size and activity become a strong pull factor, bringing additional residents to the city and causing the primate city to grow even larger and more disproportionate to the smaller cities in the country. However, not every country has a primate city.

Some scholars define a primate city as one that is larger than the combined populations of the second- and third-ranked cities in a country. This definition does not always capture real primacy, however, since a first-ranked city can meet it without being disproportionately larger than the second.

The law can be applied to smaller regions as well. For example, California’s primate city is Los Angeles, with a metropolitan area population of 16 million, more than double the San Francisco metro area’s 7 million. Even counties can be examined in terms of the Law of the Primate City. The examples below can be checked with the simple ratio test sketched after them.

Examples of Countries with Primate Cities

  • Paris (9.6 million) is the focus of France, while Marseilles has a population of 1.3 million.
  • Similarly, the United Kingdom has London as its primate city (7 million), while the second-largest city, Birmingham, is home to a mere one million people.
  • Mexico City, Mexico (8.6 million) outshines Guadalajara (1.6 million).
  • A considerable dichotomy exists between Bangkok (7.5 million) and Thailand’s second city, Nonthaburi (481,000).

Examples of Countries that Lack Primate Cities

  • India’s most populous city is Mumbai (formerly Bombay) with 16 million; second is Kolkata (formerly Calcutta) with more than 13 million, and the third-ranked city is just under 13 million.
  • China, Canada, Australia, and Brazil are additional examples of non-primate-city countries.
  • Utilizing the metropolitan area population of urban areas in the United States, we find that the U.S. lacks an actual primate city. With the New York City metropolitan area population at approximately 21 million, second-ranked Los Angeles at 16 million, and even third-ranked Chicago at 9 million, America lacks a primate city.
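
These comparisons amount to a simple ratio test: under Jefferson’s original idea, a primate city is at least twice the size of the next-largest city. The sketch below is our own illustration of that test, using the rounded population figures (in millions) listed above.

```python
# Illustrative primacy check using the (rounded) populations listed above,
# in millions. The twice-the-second-city rule follows Jefferson's idea;
# the function name and data structure are our own.

def has_primate_city(largest: float, second: float) -> bool:
    return largest >= 2 * second

examples = {
    "France (Paris vs. Marseilles)": (9.6, 1.3),
    "UK (London vs. Birmingham)": (7.0, 1.0),
    "Mexico (Mexico City vs. Guadalajara)": (8.6, 1.6),
    "Thailand (Bangkok vs. Nonthaburi)": (7.5, 0.481),
    "USA (New York vs. Los Angeles)": (21.0, 16.0),
    "India (Mumbai vs. Kolkata)": (16.0, 13.0),
}

for country, (first, second) in examples.items():
    print(f"{country}: primate = {has_primate_city(first, second)}")
```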

In 1949, George Zipf devised his rank-size rule to explain the size of cities in a country. He posited that the second and subsequently smaller cities should each represent a proportion of the largest city. For example, if the largest city in a country contained one million citizens, Zipf stated that the second city would contain one-half as many as the first, or 500,000. The third would contain one-third, or 333,333; the fourth would be home to one-quarter, or 250,000; and so on, with the rank of the city serving as the denominator in the fraction.
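
Put another way, the expected population of the nth-ranked city is simply the largest city’s population divided by n. A minimal sketch of that formula, using the hypothetical one-million-person largest city from the example above (the function name is our own):

```python
# Rank-size rule sketch: the expected population of the nth-ranked city is
# P1 / n, where P1 is the largest city's population.

def rank_size_expected(p1: int, rank: int) -> int:
    return round(p1 / rank)

p1 = 1_000_000
for rank in range(1, 5):
    print(f"Rank {rank}: expected population {rank_size_expected(p1, rank):,}")
# Rank 1: 1,000,000   Rank 2: 500,000   Rank 3: 333,333   Rank 4: 250,000
```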

While some countries’ urban hierarchy somewhat fits into Zipf’s scheme, later geographers argued that his model should be seen as a probability model and that deviations are to be expected.

Understanding Internal City Structure and Urban Development

Most urban centers begin in the downtown region called the central business district (CBD). The CBD tends to be the node of transportation networks and home to commercial property, banking, journalism, and civic institutions like City Hall, courts, and libraries. Because of high competition and limited space, property values for commercial and private ownership tend to be at a premium. CBDs also tend to use land above and below ground in the form of subways, underground malls, and high-rises. Sports facilities and convention centers also tend to be dominant features of CBDs.

Urban planning is a sub-field of geography, and until recently it was part of geography departments in academia. An urban planner is someone trained in multiple theories of urban development as well as in ways to minimize traffic, decrease environmental pollution, and build sustainable cities. Urban planners, sociologists, and geographers have come up with three classic models to demonstrate and explain how cities grow.

The first model is called the concentric zone model, which states that cities can develop in five concentric rings. The inner zone of the city tends to be the CBD, followed by a second ring that tends to be a zone of transition between the first and third rings; in this transition zone, the land tends to be used by industry or low-quality housing. The third ring, called the zone of independent workers, tends to be occupied by working-class households. The fourth ring is called the zone of better residences and is dominated by middle-class families. Finally, ring five is called the commuter’s zone, where most residents commute into the city to work every day.

The second model for city development and growth is called the sector model. This model states that cities tend to grow in sectors rather than concentric rings. The idea behind this model is that “like groups” tend to grow in clusters and expand as a cluster. The center of this model is still the CBD. The next sector is called the transportation and industry sector. The third sector is called the low-class residential sector, where lower-income households tend to group. The fourth sector is called the middle-class sector, and the fifth is the high-class sector.

The third and final urban design is called the multiple nuclei model. In this model, the city is more complex and has more than one CBD. A node could exist for the downtown region, another where a university is situated, and perhaps another where an international airport is located. Some clustering does exist in this model because some sectors tend to stay away from others; industry, for example, does not tend to develop next to high-income housing.

The multiple nuclei model also features zones common to the other models. Industrial districts in these newer cities, unfettered by the need to access rail or water corridors, rely instead on truck freight to receive supplies and ship products, allowing them to occur anywhere zoning laws permit. In western cities, zoning laws are often far less rigid than in the East, so the pattern of industrialization in these cities is sometimes random. Residential neighborhoods of varying status also emerged in a nearly random fashion, creating “pockets” of housing for both the rich and poor alongside large zones of lower-middle-class housing. The reasons for neighborhoods to develop where they do are similar to those in the sector model: amenities may attract wealthier people, transport advantages attract industry and commerce, and disamenity zones are all that poor folks can afford. There is a sort of randomness to multiple nuclei cities, making the landscape less legible for those not familiar with the city, unlike concentric ring cities, which are easy to read by outsiders who have been to other similar cities.

Another model is referred to as “Keno Capitalism.” In this model, based in Los Angeles, different districts are laid out in a mostly random grid, similar to a board used in the gambling game keno. The premise of this model is that the internet and modern transportation systems have made location and distance mostly irrelevant to the location of different sorts of activities within a city.

Geographers Ernest Griffin and Larry Ford recognized that the popular urban models did not fit well in many cities in the developing world. In response, they created one of the more compelling descriptions of cities formerly colonized by Spain – the Latin American Model. The Spanish designed Latin American cities according to rules contained in the Spanish Empire’s Law of the Indies. According to these rules, each significant city was to have at its center a large plaza or town square for ceremonial purposes. A grand boulevard, along which housing for the city’s elite was built, stretched away from the central plaza and served as both a parade route and an opulent promenade. For several blocks outward from this elite spine was built the housing for the wealthy and powerful.

The rest of the city was initially left for the poor because there was almost no middle class. The poor built houses close to the central plaza, where jobs and conveniences existed. Over time, the houses built by the poor, perhaps little more than shacks, were improved and enlarged; Ford and Griffin called this process in situ accretion. As the city’s population grew, young families and in-migrants built still more shacks, adding rings of housing that are continually upgraded. At the edges of the city are always the newest residents, often squatting on land they do not own.

Sociologists, geographers, and urban planners know that no city exactly follows one of the urban models of growth. However, the models help us understand the broader reasons why people live where they do. Higher-income households tend to live away from lower-income households. Renters and homeowners also tend to segregate from each other: renters tend to live closer to the CBD, whereas homeowners tend to live in the outer regions of the city. It should be noted that the three models were developed around the time of World War II and were based on U.S. cities; many critics now state that they do not truly represent modern cities.

7.2 Megacities and Urban Sprawl

Megacities

A megacity is defined as any city with more than 10 million residents. Another term often used is conurbation, a somewhat more comprehensive label that incorporates agglomeration areas such as the Rhine-Ruhr region in Germany’s west, which has 11.9 million inhabitants.

Of the 30 biggest megacities worldwide, 20 are in Asia and South America alone, including Baghdad, Bangkok, Buenos Aires, Delhi, Dhaka, Istanbul, Jakarta, Karachi, Kolkata, Manila, Mexico City, Mumbai, Osaka-Kobe-Kyoto, Rio de Janeiro, Sao Paulo, Seoul, Shanghai, Teheran, and Tokyo-Yokohama. European megacities include London and Paris, and the UN estimates that the number of megacities worldwide will only increase.

The explosive growth of these and other cities is a rather new phenomenon, a result of industrialization. The megacities of the world differ not only according to whether they lie in the southern or northern hemisphere, but also by their climatic, political, economic, and social conditions. Megacities can be productive, poor, organized, or chaotic. Paris and London are megacities, but it is difficult to compare them demographically or economically with Jakarta or Lagos. Wealthy megacities tend to stretch out further than their poorer counterparts: Los Angeles’ settlement area is four times as big as Mumbai’s despite its smaller population. Wealthy city inhabitants consume much more land for apartments, transport, business, and industry, and the situation is similar for water and energy consumption, which is much higher in affluent cities. Cairo and Dhaka are without doubt “monster cities” in terms of their population size, spatial extent, and urban planning challenges. However, they are also “resourceful cities,” home to millions of people with few resources.

The high population levels in megacities and mega-urban spaces lead to a host of problems, such as guaranteeing all residents a supply of essential foods, drinking water, and electricity. Related to this are concerns about sanitation and the disposal of sewage and waste. There is not enough living space for incoming residents, leading to an increase in informal settlements and slums. Many urban residents get around via bus, truck, or motorized bicycle, leading to chaos on the streets and rising CO2 emissions.

The faster a city develops, the more critical these issues become. Due to their rapid growth, megacities in developing countries and the southern hemisphere have to battle to provide for their inhabitants. Between 1950 and 2000, cities in the north grew an average of 2.4 times; in the south, they grew more than 7-fold over the same period. Lack of financial resources and sparse coordination between stakeholders at different levels intensify the problems. Megacities usually do not constitute a single political-administrative unit; instead, the city is divided into parts, as with Mexico City, which is made up of one primary core district (Distrito Federal) and more than 20 outlying municipalities, where planning, construction, tax, and environmental laws differ from those in the core district.

Two critical causes behind city growth are high rates of immigration and growing birth numbers. People move to the city with the hope of a more prosperous life, leaving the countryside in search of brighter prospects. Without careful planning and infrastructure in place, this road can often lead to another poverty trap. As cities grow, so too do the unplanned and underserved areas, the so-called slums. In some regions of the world, more than 50 percent of urban populations live in slums; in parts of Africa south of the Sahara, that number jumps to around 70 percent. In 2007, a reported one billion people lived in slums, and by 2020, that figure could grow to 1.4 billion, according to the UN.

Gated communities are also on the rise. These are fenced and well-monitored communities in which affluent members live, further driving the trend towards separation among urban populations.

However, it is not just living spaces splitting cities globally; there is a significant push towards big new building projects like ultra-modern banks and business districts, which stand in stark contrast to the informal areas of the poor. These central business districts (CBDs) are often siloed off from the central part of the city and migrate, along with the gated communities, towards the outskirts of town, as is the case in Pudong (Shanghai) and Beijing.

For the most part, urban planning is based on the needs of the consumer- and culture-oriented upper classes and of economic growth sectors, with the result that the gap between rich and poor continues to grow. Such fragmented cities are fragile entities in which conflicts are inevitable.

Because most people on the planet are city-dwellers, questions are starting to be asked about how to develop and design urbanization and urban migration in a sustainable way. Urban residents, the world over, require good air to breathe, clean drinking water, access to proper healthcare, sanitary facilities, and reliable energy supply.

The current situation in cities in developing countries can be precarious: the air is thick enough to touch; sewage treatment plants, if any exist, are overloaded; and industrial factories discharge virtually unregulated, highly toxic waste and wastewater. Also, climate change will likely hit more impoverished cities harder. However, cities in developed countries also have to deal with environmental challenges in the areas of transport, energy, waste, and wastewater.

On an international level, countless efforts are currently being undertaken to support sustainable urban development. Several large UN projects, such as the UN-HABITAT Program and the Sustainable Urban Development Network, are endeavoring to improve and strengthen governmental and planning abilities. One of the goals of these programs is also to implement the Millennium Development Goals at the city level.

Many urban problems cannot be explained at the city level alone; they must be regarded as results of political disorder and economic instability at the global and national levels – and that is where the solutions lie.

Challenges to Urban Growth

Among the major problems cities face are deteriorating areas, high crime, homelessness, and poverty. As noted in the urban models, many lower-income people live near the city center but lack the job skills to compete for employment within the city, which often results in a variety of social and economic problems. Census data show that 80 percent of children living in inner cities have only one parent. Because childcare services are limited in the city, single parents struggle to meet the demands of childcare and employment. Problems associated with lower-income areas often include violent crime (assault, murder, rape), prostitution, drug distribution and abuse, homelessness, and food deserts.

Slums and Shanty Towns

The United Nations defines slums as overcrowded, inadequate, informal forms of housing that lack reasonable access to clean drinking water and sanitary facilities and that offer residents no secure claim to the land. Above all, slums are an architectural and spatial expression of the lack of housing and of growing urban poverty. The well-known symbols of this are makeshift huts, such as the favelas in Brazil, but also desolate and overcrowded apartment buildings in major Chinese cities, where the growing army of migrant workers finds makeshift accommodation.

Slums are densely populated urban informal settlements characterized by poor, inadequate living standards. Most slums lack proper sanitation services, access to clean drinking water, law enforcement, or other necessities of living in an urban area.

A shantytown, also known as a squatter settlement, is a slum settlement usually built from materials such as plywood, sheets of plastic, cardboard boxes, and other cheap materials. Shantytowns are typically found on the periphery of cities or near rivers, lagoons, or city trash dumps.

The reasons so many of these cities are poor include underemployment and insufficient pay, as well as low productivity within the informal sector. Around half the people in megacities in the southern hemisphere are employed in the informal sector, many of whom are coerced into accepting any employment. They sell various products – cigarettes, drinks, food, bits and pieces – or offer simple services like shoe cleaning and letter writing, while others end up smuggling goods or in prostitution. Exploitation is, at times, rife in slum settings due to insecure residences, lack of legal protection, poor sanitation, and unstable employment conditions.

When residents in a neighborhood lack the money, the political or organizational skills, or the motivation to protect themselves from disamenities – drawbacks or disadvantages, especially with regard to location – significant neighborhood degradation is possible. Poor people of all ethnicities can rarely afford to live in neighborhoods that have outstanding schools, parks, and air quality, so they are often able to afford to live only in the most dangerous, toxic, degraded neighborhoods. Racism is undoubtedly a common variable in the poverty equation, but it is rarely the only one.

Gentrification and Redlining

As a way for city officials to deal with inner-city problems, there has been a push recently to renovate cities, a process called gentrification. Middle-class families are drawn to city life because housing is cheaper, yet can be fixed up and improved, whereas suburban housing prices continue to rise. Some cities also offer tax breaks and affordable loans to families who move into the city to help pay for a renovation. Also, city houses tend to have more cultural style and design compared to quickly made suburban homes. Transportation tends to be cheaper and more convenient, so that commuters do not spend hours a day traveling to work. Couples without children are drawn to city living because of the social aspects of theaters, clubs, restaurants, bars, and recreational facilities.

The logic behind gentrification is that it not only reduces crime and homelessness but also brings in tax revenue that cities can use to improve their infrastructure. However, there has also been a backlash against gentrification because some view it as a tax break for the middle and upper classes rather than much-needed spending on social programs for low-income families. It could also be argued that investing in lower-income households would increase tax revenue as well, because funding could go toward job-skill training, childcare services, and reducing drug use and crime.

The Federal Housing Administration (FHA), created in 1934 as one of Franklin D. Roosevelt’s New Deal projects, was tasked with ensuring that housing was built safely while encouraging banks to make loans to people seeking to buy new homes or repair older homes so they were suitable to sell. The FHA was part of a grand scheme to stimulate the housing sector of the economy during the Great Depression, but also to provide government help and oversight to the home loan industry. Since many of those who qualified for loans were white and not in poverty, the government helped increase residential segregation by encouraging white flight from the cities. Meanwhile, minorities, still faced with racist deed restrictions in many new suburbs, found themselves stuck in the city, where the FHA’s mortgage assistance programs were far less helpful.

Some have argued that FHA policies encouraged a series of discriminatory mortgage and insurance practices, known as redlining. During the Depression, the federal government refinanced more than a million mortgages to stem the tide of foreclosures, but not everyone was eligible for this help. Neighborhoods with poor terrain, old buildings, or those threatened by “foreign-born, negro or lower grade population” were judged to be too risky for government help. They appeared on government maps of cities in red. After the war, banks, insurance companies, and other financial institutions also mapped out where not to do business.

Residents in neighborhoods with a “red line” drawn around them could not get loans to buy, repair, or improve housing. Some could not get insurance on what they owned. Those who could found that the terms of the loan or the insurance rates were higher than those outside the zone, a practice called reverse redlining. It appears that the main criterion for inclusion in a redlined neighborhood was the percentage of minority residents; therefore, most of the people who suffered the ill effects of redlining were minorities, with African Americans harmed most often. Individuals with good credit histories and a middle-class income could find it impossible to buy homes in specific neighborhoods. Redlining was a death sentence for neighborhoods.

In 1968, the Fair Housing Act tried to outlaw redlining (and other forms of housing discrimination), but new laws were needed to bolster the language in the 1970s. However, by that time, long-term damage was evident in inner cities across the United States. Although it is illegal to discriminate against minorities (or anyone really) for non-economic characteristics, there is ample evidence to suggest it still occurs.

Homelessness

Homelessness is another primary concern for citizens of large cities. More than one half-million people are believed to live on the streets or in shelters. In 2013, about one-third of the entire homeless population were living as members of a family unit, and one-fourth of homeless people were children. In Los Angeles County, at the same time, there were roughly 40,000 homeless people living either in shelters or on the street. Another 20,000 persons were counted as near-homeless or precariously housed, typically living with friends or acquaintances in short-term arrangements.

There are multiple reasons why people become homeless. The Los Angeles Homeless Services Authority estimates that about one-third of the homeless have substance abuse problems, and another third are mentally ill. Nearly a quarter have a physical disability. A disturbing number are veterans of the armed forces or victims of domestic abuse. Economic conditions, both local and national, also have a significant impact on the overall number of homeless people in a particular year, not only because people lose their jobs and homes during recessions, but because the stresses of poverty can worsen mental illness.

The government plays a significant role in the pattern and intensity of homelessness. Ronald Reagan is the politician most associated with the homeless crisis both nationally and in California. When Reagan became governor of California in the late 1960s, the deinstitutionalization of people with mental health conditions was already a state policy. Under his administration, state-run facilities for the care of mentally ill persons were closed and replaced by for-profit board-and-care homes. The idea was that people should not be locked up by the state solely for being mentally ill and that government-run facilities could not match the quality and cost-efficiency of privately run boarding homes. Many private facilities, though, were poorly run, profit-driven, located in poor neighborhoods, and had little professional staff. Patients could, and did, leave these facilities in large numbers, frequently becoming homeless or incarcerated. Other states followed California’s example. By the late 1970s, the federal government passed some legislation to address the growing crisis, but sweeping changes in federal policy during the Reagan presidency shelved efforts started by the Carter administration. Drastic cuts to social programs during the 1980s ensured an explosion of mental-illness-related homelessness. Most funding has never been restored, though the Obama administration aggressively pursued policies aimed at housing homeless veterans.

Though homeless people come from many types of neighborhoods, facilities for serving homeless populations are not well distributed throughout urban regions. Many cities have a region known as Skid Row, a neighborhood unofficially reserved for the destitute. The term originated as a reference to Seattle’s lumberyard district, where workers used skids (wooden planks) to help them move logs to mills. Today, many of the shelters and services for the homeless are found in and around Skid Row.

Food Deserts

A food desert is an area, especially one with low-income residents, that has limited access to affordable and nutritious food. In contrast, an area with supermarkets or vegetable shops is termed a food oasis. The term food desert considers the type and quality of food available to the population, in addition to the number, nature, and size of accessible food stores. Food deserts are characterized by a lack of supermarkets, which decreases residents’ access to fruits, vegetables, and other whole foods.

In 2010, the United States Department of Agriculture reported that 23.5 million Americans live in a food desert, meaning that they live more than one mile from a supermarket in urban or suburban areas, or more than 10 miles from a supermarket in rural areas. Food deserts lack whole-food providers who supply fresh protein sources and whole foods such as fresh fruits and vegetables; instead, convenience stores offer processed, sugar- and fat-laden foods.
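
Those distance thresholds amount to a simple classification rule. The sketch below expresses it in Python as a minimal illustration; the function, field names, and sample data are hypothetical, not the USDA’s actual methodology or schema.

```python
# Classify areas as food deserts using the distance thresholds quoted
# above: more than 1 mile to a supermarket in urban/suburban areas,
# more than 10 miles in rural areas. All data here are illustrative.
def is_food_desert(miles_to_supermarket: float, rural: bool) -> bool:
    threshold = 10.0 if rural else 1.0
    return miles_to_supermarket > threshold

areas = [
    {"name": "Tract A (urban)", "miles": 1.4, "rural": False},
    {"name": "Tract B (rural)", "miles": 8.0, "rural": True},
]
for area in areas:
    label = "food desert" if is_food_desert(area["miles"], area["rural"]) else "food oasis"
    print(f'{area["name"]}: {label}')
```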

As noted in the NPR video, food deserts most often occur in low-income communities, where people may not have access to a vehicle and must travel over half a mile to reach a grocery store. For those who live in food deserts, nearby food options may be limited to fast-food restaurants and small convenience stores. Those seeking healthy food may have to travel several miles, often by mass transit, to reach a grocery store.

The video by Penn State University, called The Geospatial Revolution, highlights how Philadelphia and Pennsylvania are using geospatial technologies such as geographic information systems (GIS) to help fund and locate supermarkets in underserved communities.

In her TED Talk, Mari Gallagher focuses on “food desert awareness and solutions,” provides a critical examination of the Supplemental Nutrition Assistance Program (SNAP), and emphasizes the need for improving public health through “Truth in Data for the Common Good.” Gallagher is the author of Examining the Impact of Food Deserts on Public Health in Chicago, a study that popularized the term “food desert” across the country.

Ron Finley returns to reinforce the importance of community gardens as a way to combat food deserts, or what he calls “food prisons.” He talks about how growing gardens empowers individual lives and local communities. The self-proclaimed “Gangsta Gardener” says, “All of life starts from a garden. Planting your food equates to growing your life,” and he encourages everyone to create their own opportunities.

Urban and Suburban Sprawl

Not all of a city’s residents live within the urban core. Over half of all Americans live in the suburbs rather than in the city or rural areas. The peripheral model was developed to explain this pattern of U.S. suburban sprawl. It states that urban areas consist of a CBD surrounded by sizeable suburban areas of business and residential development, with the outer regions of the suburbs serving as transition zones to rural areas.

The attractions of suburbs are low crime rates, fewer social and economic problems, detached single-family housing, access to parks, and usually better schools. These are broad generalizations and not necessarily true everywhere. Suburbs also tend to create economic and social segregation, because suburban tax revenues and social resources provide better funding opportunities than exist in inner cities.

Of course, there is also a cost to suburban sprawl. Developers are always looking for cheaper land to build on, which usually means developing rural areas and farmland rather than in-filling next to existing suburbs. Air pollution and traffic congestion also become problems as working households must travel farther to and from work. Suburbs tend to be less friendly to commuters who walk or bike because the development model is based around vehicle transportation.

Water is another challenge to urban growth. It is an elemental part of the fabric of urban lives, providing sustenance and sanitation, commerce, and connectivity. Our fundamental needs for water have always determined the location, size, and form of our cities, just as water shapes the character and outlook of their citizens. Urban health is inextricably linked with water. From the first cities, planners have appreciated the potential linkages of water with health and the need for consistent water supplies. Indeed, the modern field of public health owes a substantial debt to the sanitary engineers who strove to provide potable water and safe disposal of human wastes in burgeoning cities of the Industrial Revolution.

Scientists and decision-makers have recently begun to appreciate that, as with other urban systems, the linkages between water management, health, and sustainability are intertwined in ways that undermine the effectiveness of traditional approaches. Unprecedented urban populations and densities, urban inequities, and urban mobility pose new problems, and climate change adds a novel and uncharted dimension. This has, in some cases, led to worsening urban health or increased risks. For instance, some water-associated diseases like dengue are on the rise globally, while others, like cholera, continue to pose serious threats. Many regions face increased food and water scarcity, and many urban slums present conditions that challenge effective water management.

7.3 Cities as Cultural and Economic Centers

Cities as a Place

In one way, cities are vast, complex machines that produce goods and services, but that way of conceiving the city overlooks genuine emotional qualities that define almost any location. Most people would argue that cities have personalities, qualities that define them as a place. People who live in particular cities often develop a sort of tribal attitude toward their city. This attitude is reflected most visibly in the genuine, emotional attachment citizens have to their sports teams. It is not uncommon for citizens of a city to take great offense at derogatory remarks directed toward “their city,” especially if those remarks come from an outsider.

How we know what we know about cities is primarily bound up in the symbolism of cities provided to us through countless media. Often people have enormous storehouses of knowledge about specific places (New York, Paris, Hollywood) even though they have never visited them. We also have powerful ideas about generic places – “small towns,” “the suburbs,” “the ghetto” – even though we may not have visited these places either. This knowledge is imperfect and may well be dangerously inaccurate, both to us and to those who live in these places. We must recognize how our understanding of places has been constructed, and we must seek to understand what purposes these constructions serve.

Geographer Donald Meinig proposed that Americans have particularly strong ideas and emotions about three unique, but generic landscapes: The New England Village, Small Town America, and the California Suburb (Meinig’s Three Landscapes).

Meinig’s first symbolic landscape is the sleepy New England Village, whose steepled white church and cluster of tidy homes surrounded by hardwood forests are powerfully evocative of a lifestyle centered on family, hard work, prosperity, Christianity, and community. He called its rival from the American Midwest Main Street USA. This landscape is found in countless small towns and symbolizes order, thrift, industry, capitalism, and practicality. It is less cohesive and less religious than the New England Village, and more focused on business and government. Finally, Meinig points to the California Suburb as the last of the significant urban landscapes deeply embedded in the national consciousness. Suburban California symbolizes the good life: backyard cookouts with family and neighbors and a prosperous, healthy lifestyle centered on family leisure.

So powerful are these images that they often appear as settings for novels, movies, television shows as well as political or product advertising campaigns.

Cultural Reflections in Urban Landscapes

The built environment is a product of socio-economic, cultural, and political forces. Every urban system has its own ‘genetic code,’ expressed in architectural and spatial forms that reflect a community’s values and identity. Each community chooses specific physical characteristics, producing the unique character of its city. This ‘communal eye’ exemplifies the city’s architectural legacy and gives a sense of place.

In old Sana’a, the capital of Yemen, buildings decorated with geometric patterns create a distinctive visual character unique to the city. Another example is Egypt’s Nubian villages, where the building materials and colors are distinctive and reflect the region’s vernacular architecture.

However, current architectural practice, in almost every city in the world, does not respect the past identities and traditions of our cities. Most projects bear little or no relationship to the surrounding urban context, the city’s genetic code. Architects often follow only international architectural movements such as “Modern architecture,” “Postmodernism,” “High-Technology,” and “Deconstructionism.” The result is a fragmented and discontinuous dialogue among buildings, destroying a city’s communal memory.

Street art and graffiti have been filling this gap, expressing the conflict between traditional culture and the contemporary sociopolitical issues of cities. Street artists are repurposing city walls to highlight heritage, history, and identity and, in some cases, to humanize this struggle. Each city has its own wall art, which has become part of its overall genetic code. Some of the art in Santiago, for example, highlights Chilean identity. Another example is how wall art was used during the Egyptian revolution to memorialize events. In March 2012, young graffiti artists launched the “No Walls” movement after Egyptian authorities constructed several concrete walls to block important street junctions and control peaceful demonstrations.

Many scholars of urban morphology suggest that the street network of any city is made up of a dual network: the foreground network, consisting of the main streets in the urban system, and the background network, made up of alleyways and smaller streets. The foreground network usually has a universal form, a ‘deformed wheel’ structure composed of a small semi-grid street pattern in the center (hub) linked with at least one ring road (rim) through diagonal streets (spokes). The form of the background network, however, differs from one city to another; therefore, it is this network that gives a city its spatial identity.

Many cities, such as London, Tokyo, and Cairo, share the same universal ‘deformed wheel’ pattern in the foreground network despite having different background networks, possibly as a result of cultural differences or as a contributor to the creation of those differences. In short, the background network reflects the unique structure of each city and could be considered its genetic code.
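
To make the ‘deformed wheel’ structure concrete, the following toy sketch (not from the original text) assembles a semi-grid hub, a ring road, and diagonal spokes as a graph using Python’s networkx library; the sizes and labels are arbitrary choices for illustration.

```python
import networkx as nx

# Toy "deformed wheel" foreground network: a semi-grid hub,
# a ring road (rim), and spokes joining the two.
hub = nx.grid_2d_graph(3, 3)                       # 3x3 grid of central streets
hub = nx.relabel_nodes(hub, lambda n: ("hub", n))

rim = nx.cycle_graph(8)                            # ring road with 8 junctions
rim = nx.relabel_nodes(rim, lambda n: ("rim", n))

city = nx.compose(hub, rim)

# Spokes: connect the four corners of the hub to alternating rim junctions.
corners = [("hub", (0, 0)), ("hub", (0, 2)), ("hub", (2, 0)), ("hub", (2, 2))]
for corner, k in zip(corners, [0, 2, 4, 6]):
    city.add_edge(corner, ("rim", k), kind="spoke")

print(city.number_of_nodes(), "junctions;", city.number_of_edges(), "street segments")
```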

Economic Development and City Infrastructure

The evidence of a definite link between urban areas and economic development is overwhelming. With just 54 percent of the world’s population, cities account for more than 80 percent of global GDP. In virtually all cases, the contribution of urban areas to national income is greater than their share of the national population. For instance, Paris accounts for 16 percent of the population of France but generates 27 percent of its GDP.

Similarly, Kinshasa and metro Manila account for 13 percent and 12 percent of the population of their respective countries, but generate 85 percent and 47 percent of the income of the Democratic Republic of the Congo and the Philippines, respectively. The ratio of urban areas’ share of income to their share of population is larger for cities in developing countries than for those in developed countries. This is an indication that the transformative force of urbanization is likely to be higher in developing countries, with possible implications for harnessing the positive nature of urbanization.
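
A quick arithmetic check of those ratios, using only the figures quoted above, can be written in a few lines of Python:

```python
# Ratio of each city's share of national income to its share of
# national population, using the figures from the paragraph above.
cities = {
    "Paris":        {"pop_share": 0.16, "income_share": 0.27},
    "Kinshasa":     {"pop_share": 0.13, "income_share": 0.85},
    "Metro Manila": {"pop_share": 0.12, "income_share": 0.47},
}

for name, c in cities.items():
    ratio = c["income_share"] / c["pop_share"]
    print(f"{name}: {ratio:.1f}x its population share")
# Paris ~1.7x; Kinshasa ~6.5x; Metro Manila ~3.9x. The gap is widest
# for the cities in developing countries, as the text notes.
```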

The higher productivity of urban areas stems from agglomeration economies: the benefits firms and businesses derive from locating near their customers and suppliers to reduce transport and communication costs. These benefits also include proximity to a vast labor pool, to competitors within the same industry, and to firms in other industries.

These economic gains from agglomeration can be summarized as three essential functions: matching, sharing, and learning. First, cities enable businesses to match their distinctive requirements for labor, premises, and suppliers better than smaller towns because a more extensive choice is available. Better matching means greater flexibility, higher productivity, and more vigorous growth. Second, cities give firms access to a bigger and improved range of shared services, infrastructure, and external connectivity to national and global customers because of the scale economies for providers. Third, firms benefit from the superior flows of information and ideas in cities, promoting more learning and innovation. Proximity facilitates the communication of complex ideas between firms, research centers, and investors. Proximity also enables formal and informal networks of experts to emerge, which promotes comparison, competition, and collaboration. It is not surprising, therefore, that large cities are the most likely places to spur the creation of young high growth firms, sometimes described as “gazelles.” It is cheaper and easier to provide infrastructure and public services in cities. The cost of delivering services such as water, housing, and education is 30-50 percent less expensive in concentrated population centers than in sparsely populated areas.

The benefits of agglomeration can be offset by rising congestion, pollution, pressure on natural resources, and higher labor and property costs, as well as by greater policing costs occasioned by higher levels of crime and insecurity. These drawbacks often take the form of negative externalities, or agglomeration diseconomies. Such inefficiencies grow with city size, especially if urbanization is not adequately managed and cities are deprived of essential public infrastructure. The immediate effect of dysfunctional systems, gridlock, and physical deterioration may be to deter private investment, reduce urban productivity, and hold back growth. Cities can become victims of their own success, and the transformative force of urbanization can diminish.

The dramatic changes in the spatial form of cities brought about by rapid urbanization present significant challenges and opportunities. Whereas new spatial configurations play a crucial role in creating prosperity, there is an urgent demand for more integrated planning, robust financial planning, service delivery, and strategic policy decisions. These interventions are necessary if cities are to be sustainable and inclusive and to ensure a high quality of life for all. Urban areas worldwide continue to expand in both vertical and horizontal dimensions.

With cities growing beyond their administrative and physical boundaries, conventional governing structures and institutions become outdated. This trend has led to expansion not just in population settlement and spatial sprawl but has also altered the social and economic spheres of influence of urban residents. In other words, the functional areas of cities, and the people who live and work within them, are transcending physical boundaries.

Cities have extensive labor, real estate, industrial, agricultural, financial, and service markets that spread over the jurisdictional territories of several municipalities. In some cases, cities have spread across international boundaries and are plagued by fragmentation, congestion, degradation of environmental resources, and weak regulatory frameworks. City leaders struggle to address demands from citizens who live, work, and move across urban regions irrespective of municipal jurisdictional boundaries. The development of complex interconnected urban areas introduces the possibility of reinventing mechanisms of governance.

A city’s physical form (its built environment characteristics, the extent and pattern of its open spaces, and the relationship of its density to destinations and transportation corridors) interacts with natural and other urban features to constrain transport options, energy use, drainage, and future patterns of growth. It takes careful coordination, location, and design (including mixed uses) to reap the benefits that more compact urban patterns can bring to the environment (such as reduced harmful emissions) and to quality of life.

Urban space can be a strategic entry point for driving sustainable development. However, this requires innovative and responsive urban planning and design that utilize density, minimize transport needs and service delivery costs, optimize land use, enhance mobility and space for civic and economic activities, and provide areas for recreation and for cultural and social interaction to improve the quality of life. By adopting relevant laws and regulations, city planners are revisiting the compact, mixed land-use city, reasserting notions of urban planning that address the new challenges and realities of scale, with urban region-wide mobility and infrastructure demands.

The need to move from sectoral interventions to strategic urban planning and more comprehensive urban policy platforms is crucial in transforming city form. For example, transport planning was often isolated from land-use planning, and this sectoral divide has caused wasteful investment with long-term negative consequences for a range of issues, including residential development, commuting, and energy consumption. Transit and land-use integration is one of the most promising means of reversing the trend of automobile-dependent sprawl and placing cities on a sustainable pathway.

The more compact a city, the more productive and innovative it is, and the lower its per capita resource use and emissions. City planners have recognized the need to advance higher-density, mixed-use, inclusive, walkable, bikeable, and public-transport-oriented cities. Accordingly, sustainable, energy-efficient, low-carbon cities with renewable energy at scale are re-informing decision-making on the built environment.

Despite shifts in planning thought, whereby compact cities and densification strategies have entered mainstream urban planning practice, the market has resisted such approaches, and consumer tastes for low-density residential land have persisted. Developers of suburbia and exurbia continue to subdivide land and build housing, often creating single-purpose communities. The new urbanists have criticized the physical patterns of suburban development and car-dependent subdivisions that separate malls, workspaces, and residential uses by highways and arterial roads. City leaders and planning professionals have responded and greatly enhanced new community design standards. Smart growth is an approach to planning that focuses on rejuvenating inner-city areas and older suburbs and remediating brownfields. Where new suburbs are developed, the goal is to design them to be town-centered, transit- and pedestrian-oriented, and less automobile-dependent, with mixed housing, commercial, and retail uses that employ cleaner energy and other green technologies.

The tension in planning practice needs to be better acknowledged and further discussed if sustainable cities are to be realized. The forces that continue to drive the physical form of many cities, despite the best intentions of planning, present challenges that need to be at the forefront of any discussion of the sustainable development goals of cities. Some pertinent issues that suggest the need to rethink past patterns of urbanization include:

  • Competing jurisdictions between cities, towns, and surrounding peri-urban areas, whereby authorities compete with each other to attract suburban development;
  • The actual costs to the economy and society of fragmented land use and car-dependent spatial development; and
  • How to provide affordable alternatives to accommodate the additional 2.5 billion people who will reside in cities by 2050.

In reality, it is mainly the outer suburbs, edge cities, and outer-city nodes of larger city regions where new economic growth and jobs are being created, and where much of this new population will be accommodated. While densification strategies and more robust compact-city planning in existing city spaces will help absorb a portion of this growth, the critical challenge facing planners is how to accommodate new growth beyond the existing core and suburbs. This will largely depend on local governments’ ability to overcome fragmentation in local political institutions and to establish a more coherent legislative and governance framework that addresses urban complexities spread over different administrative boundaries.

7.4 Cities as Environmental and Sustainable Centers

Cities and Sustainability

While there are numerous definitions of sustainable development, many start with the definition provided in the 1987 Brundtland report: “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” The goals for sustainable cities are grounded in a similar understanding: urban development that strives to meet the essential needs of all without overstepping the limitations of the natural environment. A sustainable city has to achieve a dynamic balance among economic, environmental, and socio-cultural development goals, framed within a local governance system characterized by deep citizen involvement and inclusiveness.

A core component of sustainable cities is sustainable infrastructure: the interconnected physical and organizational structures, services, and systems that support the daily functioning of a society and its economy. Sustainable infrastructure is that which is designed, developed, maintained, reused, and operated in a way that ensures minimal strain on resources, the environment, and the economy. It contributes to enhanced public health and welfare, social equity, and diversity. Investment in sustainable infrastructure is pivotal in planning for the sustainable development of cities. Despite the importance of urban infrastructure, there is clear under-investment, as evidenced by the backlog of deficient infrastructure. Globally, $57 trillion in infrastructure investment is needed between 2013 and 2030 to support economic growth and urbanization. This is of particular concern in emerging countries, where many large cities experience severe congestion, and in developing countries, where improvements to basic socioeconomic conditions are long overdue.

As a factor of inclusion and integration, urban mobility has a specific transformative role. Urban mobility is a multidimensional concept encapsulating the multitude of physical components of urban transport (air, road, and rail systems, waterways, light and heavy rail, cable cars) as well as the economic, environmental, and social dimensions of mobility. Sustainable urban mobility provides efficient access to goods, services, job markets, social connections, and activities while limiting both short- and long-term adverse consequences for social, economic, and environmental services and systems. A sustainable mobility strategy serves to protect the health of users and the environment, while fostering and promoting the city’s economic prosperity.

City dwellers are negatively impacted by inadequate and inefficient public transit systems, low-density development, urban sprawl, and the growing distance between residents and their places of employment, markets, education, and health facilities. Although faced with enormous challenges requiring behavioral, technological, and political shifts, cities remain at the forefront of transformative changes to improve quality of life by investing in connected, sustainable urban mobility.

An evolving trend is a cultural shift away from auto-dependency. Singapore, Hong Kong, and Tokyo are examples of cities where the costs of car ownership and use have been set high and planning strategies have emphasized development patterns oriented to transit, walking, and cycling. In Europe and the U.S., the popularity of the sharing economy has allowed people to move to more walkable, livable urban communities. Consequently, urban space is being reimagined, leading to denser and greener cities, enhanced traffic flow, improved walkability, and increased use of public transit. This shift could catalyze reinvestment in public transport and a reduction in automobile subsidies, while also allowing for equitable access. New mobility services and products such as e-hailing, autonomous driving, in-vehicle connectivity, and car-sharing systems offer multimodal, on-demand transportation alternatives.

More compact, better-connected cities with low-carbon transport could save as much as $3 trillion in urban infrastructure spending over the next 15 years. This would simultaneously result in substantial annual returns due to energy savings, higher productivity, and reduced healthcare costs. The private sector and civil society can also help city leaders advance sustainable mobility, with improvements in telecommunications technology. For instance, the Paris-based company BlaBlaCar has developed an online platform that connects passengers with private drivers and allows them to book seats for long-distance journeys. Increased passenger numbers per car reduce carbon emissions and improve the quality of life.

If the world is to achieve its sustainable development goals, and reach targets that range from eradicating poverty and social inequity, to combating climate change and ensuring a healthy and livable environment, global efforts in the transition to sustainable energy are pivotal. As cities represent more than 70 percent of global energy demand, they have been playing a central role in moving the sustainable energy agenda forward. The current global share of renewable energy supply is 11 percent. The diversity of renewable energy resources is vast, and research indicates a potential contribution of renewable energy reaching 60 percent of total world energy supply.

While many renewable energy technologies remain more costly than conventional sources and are often site-specific, investment in renewable cleaner energy can reduce health impacts from environmental pollution and climate change. Increasing renewable energy sources, maximizing conservation, and lessening dependence on non-renewable sources of energy, mainly those most damaging and contributing to global warming, are critical steps to sustainable cities.

Cities are harnessing local capabilities to develop green technologies and renewable energy sources that enhance their ability to withstand climate-related shocks and boost local economies. Governments are investing in green technologies, presenting an excellent opportunity for cities to channel their innovation capabilities into a new sector of the economy. The economies of scale and concentration of enterprises and innovation in cities make it cheaper and easier to take actions to minimize both emissions and climate hazards.

The risks that cities now face as a result of climate change and natural disasters, the pressing shortfalls in urban water, sanitation, and waste management services, and the deteriorating quality of air and water are being experienced in the context of rapid urban growth. Resilience has consequently become a core agenda item for cities and the focus of growing international attention. The increase in severe weather and natural disasters has highlighted the need for cities to respond better, mitigate, and adapt to such events. This includes being able to respond to risks in ways that minimize the impact on the social, environmental, and economic infrastructure of cities. Consequently, city leaders have been making significant transformative changes and investments in the resilience of their cities.

Any city’s resilience to external shocks relies primarily on effective institutions, governance, urban planning, and infrastructure. In this respect, the U.N. Office for Disaster Risk Reduction (UNISDR) has set out several general practical recommendations for urban authorities.

A critical aspect of creating resilient cities is the construction of physical infrastructure that can absorb the shocks and stresses created by extreme weather events. Climate change is putting pressure on infrastructure that is already overtaxed by deferred maintenance, population growth, and development. As municipalities plan, design, and implement sustainable infrastructure projects, they need to consider the impact of extreme weather and natural disasters on the city’s physical infrastructure in order to build resilience.

There is a growing consensus that good governance is crucial to developing, maintaining, and restoring sustainable and resilient services and social, institutional, and economic activity in cities. Many city governments are weakened due to limited power and responsibility for essential public services, including planning, housing, roads and transit, water, land-use, drainage, waste management, and building standards. City governments also often lack the power to raise revenues to finance infrastructure and build more sustainable and resilient cities. When governance capacity is weak and constrained, cities are limited in their abilities to take programmatic action on climate change mitigation and adaptation. The multiple forms of risk and vulnerability in cities call for more integrated approaches, combining established policies (urban governance, planning, and management) with additional policy leverage, powers, and responsibilities for local government.

Sustainable, resilient, and inclusive cities are often the outcome of good governance that encompasses effective leadership, land-use planning, jurisdictional coordination, inclusive citizen participation, and efficient financing. Strong, effective leadership is critical for overcoming fragmentation across departments, multiple levels of government, and investment sectors when building consensus and eliciting action on specific agendas. Land-use planning across these broad urban regions is another key criterion for effective governance. Territorial and spatial strategies are central in addressing climate change risks and building effective mitigation and adaptation strategies. Coordination across the metropolitan area is fundamental not only in areas such as land, transport, energy, emergency preparedness, and related fiscal and funding solutions, but in addressing issues of poverty and social exclusion through innovative mechanisms of inter-territorial solidarity.

Including stakeholders in the urban planning process is critical to creating livable, sustainable cities where citizens are active players in determining their quality of life. Including stakeholders in the design of infrastructure, urban space, and services legitimizes the urban planning process and allows cities to leverage their stakeholders’ expertise. Finance, however, can be a significant impediment to effective governance. Municipal governments around the world are increasingly looking for new and innovative ways to finance sustainable projects. Consequently, partnerships with the private sector are increasing, since the private sector has capital not available to the public sector.

Sustainable Development Moving Forward

On May 28th, 2013, students in Istanbul, Turkey, staged a sit-in protest against an urban development project to build a mall in the city’s most significant green space, Gezi Park. One hundred activists were met with police opposition on May 30th, when water cannons and tear gas were used to disperse the crowds who had gathered in front of the green space. Finally, the tents and belongings of the protestors were burned, and the park was barricaded.

Using the Internet, the activists reached out for help and organized a massive effort to retake control of the park. The protests soon poured into the streets as others, emboldened by the police actions, joined the students in the park and Taksim Square. As the crowds grew, the protests began focusing on issues beyond development and became a protest against the government of Prime Minister Tayyip Erdogan, whom many feel stifles democracy and opposition in the country.

While the movement mostly became an anti-government protest calling for reforms and the resignation of the Prime Minister, its origin in a desire to preserve green space in the city, and its subsequent evolution, highlight how the environmental movement and democracy are entwined. Further, the protestors made use of social media and technology to organize a large group of people in a short amount of time. This use of technology has grown unprecedented in recent years, as smartphones and the Internet have helped protests grow almost instantly. As in other environmental demonstrations, such as the WTO “Battle in Seattle” in 1999, the main organizers were young college-aged students, showing that the environment remains a central concern of today’s youth.

Opponents of globalization fear that uncontrolled economic growth, fueled by free trade, harms the environment by causing more pollution and the exhaustion of natural resources. Many now see environmental problems as matters of international concern, not just national interest, such as protection of the oceans and the atmosphere from pollution. The environment is now considered the “common heritage of mankind,” and environmental problems are increasingly the subject of international efforts because of their cross-border effects and the impossibility that just one or a few nations can solve these problems on their own. Furthermore, opponents suspect that environmental protection laws are weakened under the guise of promoting free trade by corporations and governments unconcerned about the adverse environmental effects of commerce.

In contrast, many corporations, governments, and citizens in developing countries (and some in developed countries as well) are willing to accept a certain level of environmental damage in exchange for economic well-being. They fear that environmental protection laws are ways for developed countries to prevent their goods from competing fairly.

As a strategy, sustainable development recognizes that past policies sometimes achieved progress by means that could not be kept up over time. For example, in the 1990s, between 10,000 and 30,000 square kilometers a year of Brazilian rainforest were cleared, fueling rapid economic growth in farming and ranching operations. In the short term, the practice created jobs and increased food production, but environmental damage caused by the clearing made much of the newly cleared land unusable in the longer term; the net result in many cases was a negative economic outcome.

Environmental protection can entail a drag on economic growth in the short term. Industries that have to adjust to environmental regulations face disruption and higher costs, harming their competitive position. The question is what to make of this. Some argue that slower economic growth may be worth it to protect the environment. Others say that the free market and technological advances, rather than greater regulation, are the best tools to solve environmental problems and lift people out of poverty.

The link between the environment and economic development may be more complicated than that, however. In fact, in many ways, protecting the environment and promoting economic growth are complementary goals. Poverty in developing countries is a leading cause of environmental degradation. For instance, “slash-and-burn” land-clearing by subsistence farmers has been a significant cause of depletion of the Amazon rainforest. Boosting economic growth may then be a useful tool to promote the protection of the environment. This is the idea behind the sustainable development movement, which seeks to advance economic opportunities for poorer nations in environmentally friendly ways.

Sustainable consumption and production are about promoting resource and energy efficiency, sustainable infrastructure, and providing access to essential services, green and decent jobs and a better quality of life for all. Its implementation helps to achieve overall development plans, reduce future economic, environmental, and social costs, strengthen economic competitiveness, and reduce poverty.

Sustainable consumption and production aim at “doing more and better with less,” increasing net welfare gains from economic activities by reducing resource use, degradation and pollution along the whole lifecycle, while increasing quality of life. It involves different stakeholders, including businesses, consumers, policymakers, researchers, scientists, retailers, media, and development cooperation agencies, among others.

It also requires a systemic approach and cooperation among actors operating in the supply chain, from producer to final consumer. It involves engaging consumers through awareness-raising and education on sustainable consumption and lifestyles, providing consumers with adequate information through standards and labels, and engaging in sustainable public procurement, among others.

  • Each year, an estimated one-third of all food produced – equivalent to 1.3 billion tons worth around $1 trillion – ends up rotting in the bins of consumers and retailers, or spoiling due to poor transportation and harvesting practices
  • If people worldwide switched to energy-efficient lightbulbs, the world would save US$120 billion annually
  • Should the global population reach 9.6 billion by 2050, the equivalent of almost three planets could be required to provide the natural resources needed to sustain current lifestyles

Policymakers all over the world are facing similar challenges. While we certainly know that the climate will change, there is considerable uncertainty as to what the local or regional impacts will be and what will be the impacts on societies and economies. Coupled with this is often significant disagreement among policymakers about underlying assumptions and priorities for action.

Many decisions to be made today have long-term consequences and are sensitive to climate conditions in areas such as water, energy, agriculture, fisheries, forests, and disaster risk management. We cannot afford to get them wrong.

However, sound decision making is possible if we use a different approach. Rather than making decisions that are optimized to a prediction of the future, decision-makers should seek to identify decisions that are sound no matter what the future brings. Such decisions are called “robust.”

For example, Metropolitan Lima already has significant water challenges: shortages and a rapidly growing population with 2 million underserved urban poor. Climate models suggest that precipitation could decrease by as much as 15 percent, or increase by as much as 23 percent. The World Bank is partnering with Lima to apply tested, state-of-the-art methodologies like Robust Decision Making to help Lima identify no-regret, robust investments. These include, for example, multi-year water storage systems to manage droughts and better management of demand for water. This can help increase Lima’s long-term water security, despite an increasingly unpredictable future.
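
To illustrate what “robust” means in this context, the toy sketch below compares hypothetical water investments across the precipitation scenarios mentioned above and selects the option with the smallest worst-case regret. The payoff numbers are invented for illustration and are not drawn from the World Bank study.

```python
# Toy minimax-regret analysis over three precipitation scenarios
# (-15%, no change, +23%). Payoffs are hypothetical net benefits.
scenarios = ["dry (-15%)", "baseline", "wet (+23%)"]
payoffs = {
    "multi-year storage": [8, 6, 6],
    "demand management":  [6, 7, 6],
    "do nothing":         [1, 5, 7],
}

# Regret = gap between an option's payoff and the best achievable
# payoff in that same scenario.
best = [max(p[i] for p in payoffs.values()) for i in range(len(scenarios))]
worst_regret = {
    option: max(best[i] - p[i] for i in range(len(scenarios)))
    for option, p in payoffs.items()
}

robust = min(worst_regret, key=worst_regret.get)
print(worst_regret)  # {'multi-year storage': 1, 'demand management': 2, 'do nothing': 7}
print("robust choice:", robust)  # storage performs acceptably in every scenario
```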

With advances in transportation and information technology, even the most remote places on Earth are within reach of the traveler. Tourism is now the world’s largest industry, with nature tourism the fastest growing service sector. Tourism in a globalized world can also pose environmental challenges. Unsustainable tourism may cause overcrowding and pressure on local infrastructure and services, and fragile local ecosystems. Indiscriminate tourism development can encourage intensive or inappropriate land use and contribute to coastal zone degradation. Disposal of liquid and solid wastes generated by the tourism industry may also strain the capacity of local infrastructure to treat the additional wastes generated by tourism activities.

To mitigate these economic, social, cultural, and environmental impacts, the United Nations has recommended that governments rely on sustainable ecotourism, while taking into account local carrying capacity for tourism.

Ecotourism is environmentally responsible travel to natural areas, undertaken to enjoy and appreciate nature (and accompanying cultural features, both past and present), that promotes conservation, has a low visitor impact, and provides for beneficially active socioeconomic involvement of local peoples. Ecotourism is distinguished by its emphasis on conservation, education, traveler responsibility, and active community participation. Specifically, ecotourism possesses the following characteristics:

  • Conscientious, low-impact visitor behavior
  • Sensitivity towards, and appreciation of, local cultures and biodiversity
  • Support for local conservation efforts
  • Sustainable benefits to local communities
  • Local participation in decision-making
  • Educational components for both the traveler and local communities

Ecotourism that establishes a suitable balance between the environmental, economic, and socio-cultural aspects of tourism development also plays a vital role in conserving biodiversity. It attempts to minimize its impact on the environment and local culture so that these will be available for future generations, while helping to generate income and employment and to conserve local ecosystems.

By doing so, sustainable tourism maximizes the positive contribution of tourism to biodiversity conservation and, thus, to poverty reduction and the achievement of common goals towards sustainable development.

Sustainable tourism provides crucial economic incentives for habitat protection. Revenues from visitor spending are often channeled back into nature conservation or capacity building programs for local communities to manage protected areas.

Furthermore, ecotourism can be a crucial vehicle in raising awareness and fostering positive behavior change for biodiversity conservation among the millions of people traveling the globe every year.

Connected Cities

Over the last two decades, the transformative power of urbanization has, in part, been facilitated by the rapid deployment of Information and Communications Technology (ICT), and by a revolution in city data to inform decision-making and propel a global movement to smart cities. This has been accompanied by deeper connectivity of cities at local and global levels.

Cities have to contend with a wide range of challenges, from crime prevention to more efficient mobility, healthier environments, more energy-efficient city systems, and emergency preparedness, among others. To address these challenges, the Internet of Things (networked connections among city systems and their data) is being deployed to improve service delivery and quality of life. The use of data allows cities to measure their performance and to re-inform investments in city infrastructure. Cities are increasingly relying on metrics and globally comparable city data to guide more effective and smarter decision-making that builds efficiencies into city budgets.

Central to the communications revolution is the deployment of ICT in cities. High-quality infrastructure, innovation, investment, well-connected firms, and efficiencies in energy and budgets are often cited as ICT-driven benefits to cities. However, the potential consequences of this deployment are not yet well understood. When ICT is deployed unevenly in cities, it can create a digital divide, which can exacerbate inequality: well-connected affluent neighborhoods and business districts coexist with under-serviced and under-connected low-income neighborhoods. The wealthy tend to have greater access to these technologies, and ICT can often serve to extend their reach and control while curbing that of the more socioeconomically marginalized residents.

Over the past two decades, the growth and expansion of mobile networks have been extensive, overtaking most predictions and changing the course of development for the post-2015 era. According to the Ericsson Mobility Report, the total number of mobile subscriptions in the third quarter of 2015 was 7.3 billion, with 87 million new subscriptions added that quarter. For the vast majority of the low-income population in emerging countries, mobile telephony is likely to be the sole connectivity channel. Although affordable and reliable internet is not yet a reality for the majority of people in the world, the technology has proliferated since its inception, spurring enormous innovation, network expansion, and increased user engagement in a virtuous circle of growth. The number of internet users stood at one billion in 2005 and two billion in 2010, reaching over three billion by 2015.

As a transformative force, the deployment of ICT in cities supports innovation and poverty eradication by promoting efficiencies in urban infrastructure, leading to lower-cost city services. In some cases, urban economies can leapfrog stages of development by deploying new technologies in the initial construction of infrastructure. Cities like Hong Kong and Singapore are notable examples of economies that were able to make this leap by digitizing their infrastructure. The city of Kigali in Rwanda, for example, provides internet connectivity to its residents via the public bus system. In 2010, Curitiba, Brazil, became the first city in the world to connect public buses to a 3G mobile-broadband network. Such innovation opened up new possibilities for traveler services that helped commuters plan their routes and enabled them to purchase tickets wherever and whenever most convenient. Cities worldwide, such as Chicago, London, and Vancouver, are implementing digital inclusion programs to ensure that all citizens have the tools to thrive in an increasingly digitalized world. As cities depend increasingly on electronic information and technology for their functioning and service delivery, city leaders are proceeding with caution to avoid an unequal distribution of ICT and to examine ways to bridge the digital divide.

The ever-increasing application of data and the Internet of Things is supporting a much more collaborative relationship between city governments, citizens, and businesses. This trend is driving the smart cities phenomenon worldwide. The definition of a smart city continues to evolve, but a consistent component is the application of ICT and the Internet of Things to address urban challenges. Many conceptual frameworks of smart cities also consider sustainability, innovation, and governance as essential components in addition to the application of ICT. The International Telecommunication Union defines a smart, sustainable city as “an innovative city that uses information and communication technologies and other means to improve quality of life, efficiency of urban operation and services, and competitiveness, while ensuring that it meets the needs of present and future generations concerning economic, social, environmental as well as cultural aspects.”

A smart city can guide better decision-making concerning prosperity, sustainability, resilience, emergency management, and effective and equitable service delivery. The city of Rio de Janeiro collaborated with IBM to create a municipal operations center that combines data and information from city and state agencies and from private utility and transportation companies to collaborate on logistics and management challenges. The city, faced with growing concerns over flooding and traffic gridlock, can now monitor data and provide citizens with relevant information via mobile phones and other warning systems. Barcelona is a leading smart city for its application of innovative solutions aimed at improving city services and the quality of life of its citizens. Barcelona’s smart city model aims “to use ICT to transform the business processes of public administration…to be more accessible, efficient, effective, and transparent.” Singapore has also been at the forefront of the smart city movement; its smart nation program seeks to harness ICT, networks, and data to support better living, create more opportunities, and support stronger communities. Singapore was the first city in the world to introduce congestion pricing and, now, by using more advanced systems, can analyze traffic data in real-time to adjust prices. Technology solutions and the effective use of data are providing city leadership with new tools and opportunities for effective change.

Estimates show that the global smart city market will grow by 14 percent annually, from US$506.8 billion in 2012 to US$1.3 trillion in 2019. Over the next two decades, city governments in the U.S. will invest approximately US$41 trillion to upgrade their infrastructure and take advantage of the Internet of things. With China’s cities projected to grow by 350 million people over the next 20 years, investment in smart cities there was expected to exceed US$159 billion in 2015 and to reach US$320 billion within the following decade. India announced plans to build 100 smart cities in response to the country’s growing population and the pressure on urban infrastructure. To realize the potential of ICT for sustainable development, an enabling environment has to be created, with participatory governance models, modernized infrastructure and technology, capacity building, inclusion, and efforts to bridge the digital divide.

Chapter 6: Food, Water, and Agriculture

In this chapter, we will examine geographic hearths where domestication of plants and animals first occurred and study the processes by which domesticated crops and animals spread. This diffusion process helps explain why distinct regional patterns emerge concerning diet, energy use, and the adoption of biotechnology.

This module also examines the major agricultural production regions of the world, which are categorized as commercial or subsistence operations and characterized as extensive (e.g., shifting cultivation) or intensive (e.g., mixed crop/livestock). These production regions are examined alongside the settlement patterns and landscapes typical of each significant agriculture type. We will learn about land survey systems, environmental conditions, sustainability, global food supply issues, and the cultural values that shape agricultural patterns. In addition, this module will address the roles of women in agricultural production, particularly in subsistence farming and market economies in the developing world.

We will analyze theories and models about patterns of rural land use and associated settlements (e.g., von Thünen’s land use model). We also will study the impacts of large-scale agribusiness on food production and consumption. The effects of economic and cultural globalization on agriculture and the need to increase food supplies and production capacity are also addressed.

6.1 The Roots of Agriculture

The traditional story about agriculture goes something like this: initially, people were hunter-gatherers who lived short lives because they had to scrounge for food from what nature provided. At some point, someone in the tribe made the discovery that people could plant crops. This led to better food supplies, less work, and more leisure time to develop higher civilization. Geographers now know that this traditional story gets it backward in many ways. Hunting and gathering is a comfortable way of life, while agriculture is often an adaptation of necessity with significant negative ramifications.

To start, we need to define “agriculture.” The traditional story proposes that there is a significant leap forward – sometimes called the “agricultural revolution” or “Neolithic revolution” – when societies invent agriculture. However, it is more accurate to see agriculture as one stage on a continuum of intensification. Intensification refers to the amount of production per unit of land that is extracted for human use. Raising the level of intensification practiced by society requires increased manipulation of natural processes by humans. We can imagine a scale of intensification running from a wilderness where the only human activity is hikers picking a few berries to eat on their way, to a modern industrial farm that mass-produces corn.

Hunter-gatherers do not merely wander the landscape, picking up whatever food and other resources they happen across. Hunter-gatherer societies have sophisticated knowledge of the plants and animals found in their territory, and of when and how they can be harvested. While they are somewhat at the mercy of the earth’s cycles and the bioclimatic zone in which they live, hunter-gatherers do not just wait for nature to provide them with resources. Instead, they are astute observers of weather, of the seasonal migration patterns of animals, and of the growth patterns of plants, and they may deliberately manipulate the environment to encourage the production of the plants and animals they want.

Australian Aborigines practiced such a high level of intensification that, using fire, they were able to drive animals to gather in one place for easier killing or capture. Other hunter-gatherer societies had quite high levels of intensification as well. For example, the Native Americans of the Northwest Coast, such as the Tlingit and Haida, sustained high population levels usually characteristic of agricultural societies because they had found ways to extract vast amounts of resources from their environment.

Agriculture is defined as the cultivation of crops and efforts to breed better strains. Cultivate means “to care for,” and a crop is any plant cultivated by people. If a society continues to increase its level of intensification, eventually it will find itself practicing types of production that we would recognize as agriculture. This is what occurred in different regions dating from 10,000 to 8,000 BC in the Fertile Crescent and perhaps 8000 BC in the Kuk Early Agricultural Site of Melanesia. There are various debates among human geographers, sociologists, and anthropologists as to why agriculture arose in these various locations, called hearths, around the world. Despite the debate, in each hearth area the transition from a largely nomadic hunter-gatherer way of life to a more settled, agrarian-based one included not just the cultivation and domestication of plants, but also the domestication of animals.

We can make the general assumption that the cultivation of plants and the domestication of animals arose because of environmental or cultural push factors. It was likely a combination of both, since agricultural hearths emerged around the world under different circumstances. From a climate science perspective, the likely catalyst of agriculture was that around 10,000 years ago, the earth was shifting away from the Pleistocene Ice Age and into a warming period called an interglacial period.

Still, the question lingers: why would a society intensify to the point of developing agriculture? Agriculture increases the output of food per unit of land. Farmers can get more food and other resources, and hence support more people, out of a given chunk of land than hunter-gatherers can. Agriculture is thus associated with a boom in population. However, if a population declines, a society may de-intensify to hunting and gathering.

Take, for example, the native people of the Amazon basin. When they were first encountered by European explorers, the explorers assumed that, since agriculture was a better system of production, any society without agriculture must never have learned of it. However, the pre-Columbian Amazon was home to massive agricultural civilizations. Vast numbers of these people, perhaps 90 percent, were killed by European diseases, which spread faster than the explorers. With so many people gone, the surviving Amazonians decided they might as well return to hunting and gathering since they no longer needed the high intensification of agriculture to support their population.

6.2 Types of Agriculture

Today, there are two divisions of agriculture, subsistence and commercial, which roughly correspond to the less developed and more developed regions. One of the most significant divisions between more and less developed regions is the way people obtain the food they need to survive. Most people in less developed countries are farmers, producing the food they and their families need to survive. In contrast, fewer than 5 percent of the people in North America are farmers. These farmers can produce enough to feed the remaining inhabitants of North America and to produce a substantial surplus.

Subsistence agriculture, found mostly in less developed countries, is the small-scale production of food primarily for consumption by the farmer and their family. Sometimes, if there is a surplus of food, it might be sold, but that is not common. In commercial agriculture, by contrast, the primary objective is to make a profit.

The most widespread type of agriculture practiced around the world is intensive subsistence agriculture, which is highly dependent on animal power and is commonly practiced in the humid, tropical regions of the world. This type of farming is evidenced by significant efforts to adapt the landscape to increase food production. As the name implies, this form of subsistence agriculture is highly labor-intensive: the farmer works limited space intensively and wastes little. It is a widespread practice in East Asia, South Asia, and Southeast Asia, where population densities are high and land available for farming is limited. The most common form is the wet rice field, but the category also includes dry-field crops such as wheat and barley. In sunny locations with long growing seasons, farmers may be able to get two harvests per year from a single field, a method called double cropping.

Another form of subsistence agriculture is called shifting cultivation because the farmers shift to new locations every few years to farm new land. Farming a patch of land tends to deplete its fertility: land that is highly productive when first cleared loses its productivity over several harvests. In the first agricultural revolution, shifting cultivation was a common method of farming.

There are two processes in shifting cultivation: 1) farmers clear and burn the vegetation in a practice called slash-and-burn agriculture, where slashing clears space and burning the natural vegetation fertilizes the soil; and 2) farmers grow their crops on the cleared land for only two to three years, until the soil is depleted of its nutrients, and then they must move on and clear a new area; they may return to the previous location after 5-20 years, once the natural vegetation has regrown. The most common crops grown in shifting cultivation are corn, millet, and sugarcane. Another cultural trait of LDCs is that subsistence farmers often do not own the land; instead, the village chief or council controls it. Slash-and-burn agriculture has been a significant contributor to deforestation around the world. To address deforestation and the protection of species, humans need to address root issues such as poverty and hunger.

Pastoral nomadism is similar to subsistence agriculture except that the focus is on domesticated animals rather than crops. Most pastoral nomads live in arid regions such as the Middle East and Northern Africa because the climate is too dry for crop-based subsistence agriculture. The primary purpose of raising animals is to provide milk, clothing, and tents. Interestingly, most pastoral nomads do not slaughter their herds for meat; instead, they obtain grain by trading milk and animal products with local farmers.

The type of animals chosen by nomads is highly dependent on the culture of the region, the prestige of animals, and the climate. Camels can carry heavy cargo and travel great distances with very little water, a significant benefit in arid regions. Goats require more water but can eat a wider variety of food than camels.

Most people probably believe that nomads wander randomly throughout an area in search of water, but this is far from the truth. Instead, pastoral nomads are very aware of their territory. Each group controls a specific area and will rarely intrude on another's. Each area tends to be large enough to contain the water and foliage needed for survival. Some nomad groups migrate seasonally between mountainous and low-lying regions, a process called transhumance.

The second agricultural revolution coincided with the Industrial Revolution; it was a revolution that would move agriculture beyond subsistence to generate the kinds of surpluses needed to feed thousands of people working in factories instead of in agricultural fields. Innovations in farming techniques and machinery that occurred in the late 1800s and early 1900s led to better diets and longer life expectancy, and helped sustain the second agricultural revolution. The railroad helped move agriculture into new regions, such as the United States Great Plains. Geographer John Hudson traced the major role that railroads and agriculture played in changing the landscape of that region from open prairie to individual farmsteads. Later, the internal combustion engine made possible the mechanization of farm work and the invention of tractors, combines, and a multitude of large farm equipment. New banking and lending practices helped farmers afford new equipment. In the 1800s, Johann Heinrich von Thünen (1783-1850) experienced the second agricultural revolution firsthand; partly because of this, he developed his model (the von Thünen model), which is often described as the first effort to analyze the spatial character of economic activity. This was the birth of commercial agriculture.

More developed nations tend to have commercial agriculture, in which the goal is to produce food for sale in the global marketplace; this system of large-scale, profit-driven farming is often called agribusiness. The food in commercial agriculture is also rarely sold directly to the consumer; rather, it is sold to a food-processing company where it is processed into a product. This includes both fresh produce and processed food products.

An interesting difference between emerging countries and more developed countries (MDCs) regarding agriculture is the percentage of the workforce that farms. In emerging countries, it is not uncommon for over half of the workforce to be subsistence farmers. In MDCs like the United States, farmers make up a far smaller share of the workforce. In the United States, less than 2 percent of the workforce are farmers, yet they have the knowledge, skills, and technology to feed the entire nation.

One of the reasons why only 2 percent of the United States workforce can feed the entire nation has to do with machinery, which can harvest crops at large scale and very quickly. MDCs also have access to transportation networks that can move perishable foods like dairy products long distances in a short amount of time. Commercial farmers rely on the latest scientific improvements to generate higher yields, including crop rotation, herbicides and fertilizers, and hybrid plants and animal breeds.

Another form of commercial agriculture, found in warm, tropical climates, is the plantation. A plantation is a large-scale farm that usually focuses on the production of a single crop such as tobacco, coffee, tea, sugar cane, rubber, or cotton, to name a few. Plantations are commonly found in LDCs but are often owned by corporations based in MDCs. Plantations also tend to import workers and provide the food, water, and shelter workers need to live there year-round.

Making Sense of Land Use

Geographers are concerned with understanding why things happen in geographical spaces. Johann Heinrich von Thünen (1783-1850) was a farmer on the north German plain, and he developed the foundation of rural land use theory. Because he was a keen observer of the landscape around him, he noticed that similar plots of land in different locations were often used for very different purposes. He concluded that these differences in land use between plots with similar physical characteristics might be the result of differences in location relative to the market. Thus, he went about trying to determine the role that distance from markets plays in creating rural land-use patterns. He was interested in finding laws that govern the interactions between agricultural prices, distance, and land use as farmers sought to make the greatest profit possible.

The von Thünen model focuses on how agricultural land use is distributed around a city in concentric circles. A central dot represents the city, and the first ring (white) is dedicated to market gardening and fresh milk production. That is because milk products and garden crops, such as lettuce, spoil quickly. Remember that at the time von Thünen developed this model, there was no refrigeration, so it was necessary to get perishable produce to the market immediately. Because of this, producers of perishable crops were willing to outbid producers of less perishable crops to gain access to the land closest to the market. This means that land close to the market commanded a higher economic rent.

The second ring, von Thünen believed, would be dedicated to the production and harvest of forest products. This was because, in the early 19th century, people used wood for building, cooking, and heating. Wood is bulky and heavy and therefore difficult to transport. Still, it is not nearly as perishable as milk or fresh vegetables. For those reasons, von Thünen reasoned that wood producers would bid more for the second ring of land around the market center than all other producers of food and fiber, except for those engaged in the production of milk and fresh vegetables.

The third ring, von Thünen believed, would be dedicated to crop rotation systems. In his time, rye was the most important cash grain crop. Inside the third ring, however, von Thünen believed there would be differences in the intensity of cultivation. Because the cost of gaining access to the land (rent) drops with distance from the city, those farming at the outer edges of the ring would find that lower rents would offset increased transportation costs. Moreover, because those farming the outer edges would pay less rent, the level of input they could invest before reaching the point of decreasing marginal returns (the term “marginal returns” refers to changes in production relative to changes in input) would be lower than for those paying higher rent to be closer to the market. Therefore, they would not farm as intensively as those working land closer to the urban center.

The fourth ring would be dedicated to livestock ranching. Von Thünen reasoned that unlike perishable or bulky items, animals could be walked to the market. Additionally, products such as wool, hide, horn, and so on could be transported easily without concern about spoilage.

In von Thünen’s model, wilderness bounded the outer margins of the Isolated State. These lands, he argued, would eventually develop rent value as the population of the state increased. Thus, in this fundamental theory, the only variable was the distance from the market.
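
The model's logic can be captured in a simple bid-rent calculation. The sketch below is illustrative only: the yields, prices, costs, and freight rates are invented for the example (they are not von Thünen's figures), and the rent formula R(d) = yield × (price − cost) − yield × freight × distance is the standard textbook simplification of his reasoning.

```python
# Von Thunen bid-rent sketch. All yields, prices, costs, and freight rates
# below are invented for illustration; they are not von Thunen's figures.
# Locational rent per acre: R(d) = yield * (price - cost) - yield * freight * d

land_uses = {
    # name: (yield/acre, price/unit, cost/unit, freight per unit per mile)
    "market gardening": (100, 8.00, 3.00, 0.25),  # perishable: steep rent decay
    "forestry":         (40, 10.00, 4.00, 0.15),  # bulky: fairly steep decay
    "grain":            (30, 6.00, 2.00, 0.05),   # cheap to ship: gentle decay
    "ranching":         (10, 5.00, 1.00, 0.01),   # animals walk to market
}

def rent(use, distance):
    """Rent a given land use could bid for land `distance` miles from market."""
    y, p, c, f = land_uses[use]
    return y * (p - c) - y * f * distance

# The land at each distance goes to the highest bidder; where no use bids a
# positive rent, the land stays wilderness (von Thunen's outer margin).
for d in range(0, 101, 10):
    bids = {use: rent(use, d) for use in land_uses}
    best = max(bids, key=bids.get)
    zone = best if bids[best] > 0 else "wilderness"
    print(f"{d:3d} miles from market: {zone}")
```

Because perishable, hard-to-ship products have the steepest rent curves, they win the bidding near the market, and each successive use takes over where its flatter curve crosses the previous one, producing the concentric rings.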

Von Thünen was a farmer, and as such, he understood that his model did not describe the real world in every respect. He developed it as an analytical tool that could be manipulated to explain rural land-use patterns in a world of multiple variables. To do this, von Thünen relaxed his original assumptions, one at a time, to understand the role of each variable.

One of the more stringent assumptions in the von Thünen model was the assumption that all parts of the state would have equal access to all other parts (with distance being the only variable allowed). He knew that this did not represent reality because, already in his time, some roads were better than others, railways existed, and navigable water routes significantly reduced the friction of distance between the places they served. Therefore, he introduced a navigable waterway into his model and found that, because produce would be hauled to docks on the stream for transport, each zone of production would elongate along the stream.

Von Thünen also considered what would happen if he relaxed his assumption that production costs were equal in all ways except for the costs associated with distance from the market. Eventually, as he worked with his model, he began to consider the effects of differences in climates, topography, soils, and labor. Each of these could serve to benefit or restrict production in a given place. For example, lower wages might offset the advantages realized by being near a market. Differences in soil might also offset the advantages of being close to the market. Thus, a farmer located some distance from the market, with access to well-drained, well-watered land with excellent soil and low-cost labor nearby, might be willing to pay higher rent for that property even if it were a bit farther from the market than another piece of land without such amenities.

Von Thünen’s concentric circles were the result of the limits he imposed on his model in order to remove all influences except for distance. Once real-world influences are allowed to invade the model, the concentric land-use pattern does not remain in place. Modern technology, such as advances in transportation systems, increasingly complicates the basic concentric circle model. Changes in the demand for agricultural products also influence land-use patterns.

Changes in demand for farm products often have dramatic impacts on land uses. For example, when fuel production companies demanded dramatically increased quantities of corn to produce ethanol, and the price of corn rose accordingly, farmers responded by shifting from other food crops to ethanol-producing corn. As a result, land well suited for corn production now sells at premium prices (in Iowa and other corn-producing states, an acre of farmland may bring $12,000 or more). Currently, there is little extra farmland available upon which an expansion might take place. Therefore, changes in demand typically result in farmers shifting to crops that will bring the highest return.

The mid-Willamette Valley of Oregon provides another example of how changes in demand affect agricultural land uses. For years, the mid-Willamette Valley was the site of many medium-sized grain farms. The primary grain crops included wheat, barley, oats, Austrian peas, and clover. Also, farmers in the region produced row crops, orchard crops, hay, and grass seed. During the 1970s, in response to increasing demand, the price of grass seed increased dramatically. As a result, Willamette Valley farmers quickly changed their focus from the production of grain to grass seed. Soon after, several grain processing facilities closed, and grass seed cleaning, storage, and market facilities opened. There were other unexpected impacts, as well. For example, Willamette Valley grain farms once provided excellent habitat for Chinese pheasants. Pheasants eat grain, but they do not eat grass seed. When the grain fields disappeared, so, too, did the pheasants.

Like pheasants, people do not eat grass seed. On the other hand, oats, wheat, and barley are all food crops. Once a nation can meet its basic food needs, agriculture can meet other demands, such as the demand for Kentucky bluegrass for use on golf courses, lawns, and other landscaping. As incomes go up, the demand for food crops grows proportionately. Eventually, however, when the demand for food is satiated, subsequent increases in income no longer bring corresponding increases in the demand for food. This is the result of the elasticity of demand relative to changes in income. The elasticity of demand is calculated by noting the amount of increase in demand for an item that a unit of increase in income generates. For example, luxury products such as expensive wines have a high elasticity of demand, whereas more common items such as rice have a low elasticity of demand. Once a family has all the rice they can typically eat, they will not purchase more as a result of more income. More income, however, would likely bring an increase in the consumption of prime cuts of beef or other such luxury foods.
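
To make the idea concrete, income elasticity of demand can be written as the percentage change in quantity demanded divided by the percentage change in income. A minimal sketch, using invented figures rather than measured data:

```python
# Income elasticity of demand: percentage change in quantity demanded
# divided by percentage change in income. All figures are invented.

def income_elasticity(q_old, q_new, income_old, income_new):
    pct_quantity = (q_new - q_old) / q_old
    pct_income = (income_new - income_old) / income_old
    return pct_quantity / pct_income

# Income rises 10 percent (50,000 -> 55,000).
# Rice, a staple: purchases barely move once the family has what it needs.
print(income_elasticity(100, 101, 50_000, 55_000))  # 0.1 (inelastic)
# Expensive wine, a luxury: purchases rise much faster than income.
print(income_elasticity(10, 13, 50_000, 55_000))    # 3.0 (elastic)
```

A value near zero matches the rice example above, while a value well above one marks a luxury good.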

New technologies in transportation, agricultural production, and the processing of food and fiber often have substantial impacts on the use of rural land, and transportation is where such changes have been most influential. For example, the construction of the rail lines that connected the Midwestern United States with the market centers of the East made it possible for farmers in Iowa, Illinois, and other prairie states to improve their profits by feeding the corn they grew to hogs, which they then shipped to the markets in the east. This is because the value of a pound of pork has always been far greater than the value of a pound of corn. Thus, by feeding the corn to the hogs and then shipping the hogs, the farmers could earn greater profits because the shipping costs of their product were lower. In a sense, the farmers were selling corn on the hoof. Without easy access to railheads, this profitable agricultural scheme would not have been possible.

Of course, some folks have specialized in selling corn after it has been distilled into a liquid form. When the sale of alcohol was illegal in the USA, the transport of “liquid corn” was made easier when, in 1932, Henry Ford introduced the Ford V8, thereby enabling “Moonshiners” to move their product from hidden distilleries to waiting markets without being caught by the police. Additionally, “moonshiners” became expert mechanics who could turn a standard 60 horsepower V8 into a powerful, fast, agile machine. People who specialized in modifying these stock cars became pioneers in NASCAR racing.

Over the years, improvements in technologies have tended to drive down the relative costs associated with shipping farm produce. Furthermore, inventions such as refrigerated rail cars and trucks have eliminated some of the land-use constraints that once limited the locational choices of farmers who produced perishable goods. Less expensive haulage costs, decreased transit times, and better handling and processing methods have all served to make transportation systems more efficient and, hence, less expensive.

In theory, this should serve to reduce the importance of distance relative to other non-distance factors. Consider how far from the market a producer of fresh vegetables could locate in the early 19th century. The lack of all-weather roads and reliance on the transportation conveyances of the time (human and animal power) dictated a production location within a few miles of the market. The creation of all-weather roads that could be traversed by a horse and wagon, however, changed the situation. Without the roads, fresh vegetable growers would have been forced to pay high prices for land very near the market. With the roads, they were able to use less expensive land and still get their crops to market before spoilage made it impossible to sell them.

If the creation of an all-weather road made such a difference in land uses, imagine the impacts of the refrigerated aircraft now used to deliver loads of fresh flowers. Currently, many of the fresh flowers sold in US supermarkets come to the United States from the Netherlands via giant jet transport aircraft. This technology has significantly altered the importance of distance relative to the production of fresh flowers.

6.3 Agricultural Regions

There has always been a delicate balance between how much of the Earth’s surface can be used for agriculture and the ability to produce enough food to sustain a growing population. Climate, terrain, groundwater, and soil composition create limits on what and where crops can be produced without major human adaptations to the landscape. New technologies and scientific knowledge have helped to increase the world’s cultivated land significantly. However, spatial variations in land resources like rainfall and temperature zones are still the most significant factors in determining what land is suitable for specific crops and types of agriculture.

The world’s cultivated land has grown by 12 percent over the last 50 years, mostly at the expense of forest, wetland, and grassland habitats. At the same time, the global irrigated area has doubled. The distribution of these land and water assets is unequal among countries. Although only a small part of the world’s land and water is used for crop production, most of the easily accessible (and thus economical) resources are under cultivation or have other ecologically and economically valuable uses. Therefore, the ability to expand cultivated land further is limited. Only parts of South America and sub-Saharan Africa still offer scope for some expansion. At the same time, competition for water resources has been growing to the extent that today more than 40 percent of the world’s rural population lives in water-scarce regions.

The total global land area is 13.2 billion hectares (ha). A hectare is a metric area unit widely used for land measurement in agriculture and forestry; it equals 10,000 square meters. Of this area, 12 percent (1.6 billion ha) is currently in use for the cultivation of crops, 28 percent (3.7 billion ha) is under forest, and 35 percent (4.6 billion ha) comprises grasslands and woodland ecosystems. Low-income countries cover about 22 percent of the land area, but they account for 38 percent of the global population.

Land use varies with climatic and soil conditions and human influences (Figure 5.12). Figure 5.13 further shows the dominant land use by region. Deserts prevail across much of the lower northern latitudes of Africa and Asia. Dense forests predominate in the heartlands of South America, along the seaboards of North America, across Canada, Northern Europe, and much of Russia, and in the tropical belts of Central Africa and Southeast Asia. Cultivated land makes up 12 to 15 percent of the total land in each of these regions.

Cultivated land is a leading land use (a fifth or more of the land area) in South and Southeast Asia, Western and Central Europe, and Central America and the Caribbean, but it is less prominent in sub-Saharan and Northern Africa, where cultivation covers less than a tenth of the area. In low-income countries, soils are often more deficient, and only 28 percent of the total cultivated land is suitable for high-yield crops.

It is also important to note that, alongside the overall growth in cultivated land, rain-fed cropland declined slightly while irrigated cropland more than doubled between 1961 and 2008. This helps us understand how humans have adapted the landscape for agricultural purposes.

Water resources available for irrigation are very unevenly distributed, with some countries having an abundance of water while others live in conditions of extreme scarcity or shortage of water. Also, even where water may appear abundant, much of it is not accessible or is very expensive to develop, or is not close to lands that can be developed for agriculture. Water scarcity has three dimensions: physical (when the available supply does not satisfy the demand), infrastructural (when the infrastructure in place does not allow for satisfaction of water demand by all users) and institutional (when institutions and legislation fail to ensure reliable, secure and equitable supply of water to users).

In some regions, particularly the Middle East, Northern Africa, and Central Asia, countries are already using more water than is available. The resulting stresses on ecosystems are increasingly apparent. It is now estimated that more than 40 percent of the world’s rural population lives in river basins that are physically water-scarce.

Table 5.1: Types of Rain-fed Production Systems and Regions

System: Characteristics and Examples

Rain-fed agriculture: highlands

  • Low-productivity, small-scale subsistence (low-input) agriculture; a variety of crops on small plots plus a few animals.

Rain-fed agriculture: dry tropics

  • Drought-resistant cereals such as maize, sorghum, and millet. Livestock often consists of goats and sheep, especially in the Sudano-Sahelian zone of Africa and in India. Cattle are more widespread in southern Africa and Latin America.

Rain-fed agriculture: humid tropics

  • Mainly root crops, bananas, sugar cane, and notably soybean in Latin America and Asia. Maize is the most important cereal. Sheep and goats are often raised by more impoverished farmers, while cattle are held by wealthier ones.

Rain-fed agriculture: subtropics

  • Wheat (the most important cereal), fruits (e.g., grapes and citrus), and oil crops (e.g., olives). Cattle are the dominant livestock. Goats are also important in the southern Mediterranean, while pigs are dominant in China and sheep in Australia.

Rain-fed agriculture: temperate

  • Principal crops include wheat, maize, barley, rapeseed, sugar beet, and potatoes. In the industrialized countries of Western Europe, the United States, and Canada, this agricultural system is highly productive and often combined with intensive, penned livestock (mainly pigs, chickens, and cattle).

At the same time, in more developed countries, urban and industrial demand has been growing faster than agricultural demand. Whereas in less-developed countries agricultural use remains dominant, in Europe 55 percent of water is used by industry. Water stresses occur locally across the globe, but some entire regions are highly stressed, particularly the Middle East, the Indian subcontinent, and northeastern China. Sub-Saharan Africa and the Americas experience lower levels of water stress. The quality of water is also impacted when run-off returns to the environment. In general, increasing population and economic growth, combined with little or no water treatment, have led to more negative impacts on water quality. Agriculture, as the largest water user, is a significant contributor. Key pollutants include nutrients and pesticides derived from crop and livestock management.

Rain-fed agriculture depends on rainfall for crop production, with no permanent source of irrigation. Rain-fed agriculture produces about 60 percent of global crop output in a wide variety of production systems (Table 5.1). The most productive systems are concentrated in the temperate zones of Europe, followed by Northern America, and rain-fed systems in the subtropics and humid tropics. Rain-fed cropping in highland areas and the dry tropics tends to be relatively low-yielding and is often associated with subsistence farming systems. Evidence from farms worldwide shows that less than 30 percent of rainfall is used by plants in the process of cultivation. The rest evaporates into the atmosphere, percolates to groundwater, or contributes to river runoff.

Agricultural Economics

We know that climate and terrain place physical limits on what can be grown in specific locations on Earth. However, we must also take into account the geographic nature of the choices farmers make when deciding what to plant. Once subsistence farming intensifies to the point of producing more food than is required to feed a family or local community, it makes financial sense for farmers to sell their excess products. In this shift from subsistence to commercial agriculture, farms need to be profitable, and the more profitable, the better, so farmers carefully choose the crops and animals they raise. These decisions, in turn, affect what we eat.

You might be thinking, “Farmers do not control what I eat. I eat what tastes good,” but opinions on taste vary widely from country to country, and even within countries. Taste preferences for food vary within and across ethnicities, and even house to house among people who would seem alike in almost every way. Still, some trends characterize regions in the US and around the world, and many of these foodways have roots in the local geography of a place. It is often said, “you are what you eat,” but geographers might add the rejoinder, “what you eat depends on where you eat.” Family traditions determine what people eat, but understanding the evolution of those traditions requires an analysis of the spatial contexts in which they evolved.

Our ethnic heritage explains much of our taste preferences. European immigrants to the US established most American foodways. Europeans living 300 years ago would have readily recognized many American dietary staples, such as beef, pork, chicken, bread, pasta, cheese, and milk, as well as a number of the fruits and vegetables we commonly eat. Modern Americans have also adopted foodways borrowed from the indigenous people of the Americas. Less prominent elements of the American diet are traceable to Asia and Africa.

Eating is a daily ritual, and as such, it is a deeply ingrained cultural routine. What you like to eat is probably not that different from what your parents and grandparents like to eat. The same was true for your grandparents, giving dietary habits exceptional staying power. This fact is part of the reason behind our obesity crisis. Our lifestyle has changed as rapidly as technology and the economy have evolved, but many of our foodways are stubbornly resistant to change. The diets that served our ancestors, farmers and laborers engaged in strenuous daily activities, provide too many calories and too much fat for a generation working and living in the information age. Cultural lag is the term that describes the inability of cultural practices to keep pace with technological change. Numerous behaviors exhibit cultural lag, and culturally conservative regions exhibit a higher degree of cultural lag than places with more progressive tendencies.

A sizeable portion of the American diet is purely American. We have adopted several foodstuffs favored by Native Americans. Maize, better known in America as “corn,” is perhaps the most American part of our diet. Domesticated by the indigenous people of Mexico thousands of years ago, it has proven a versatile and hardy plant. It is so versatile that today much of the world eats maize in some fashion. Most Americans know maize mainly as sweet corn, eaten on the cob but also canned, frozen, fresh “off the cob,” and in a variety of dishes.

Less well known are the maize varieties known as field corn, although field corn is far more common because of its great versatility. Field corn is too hard to eat raw, so we modify it. Some of it is processed into cornmeal or cornstarch, which we in turn use to make things like corn chips, tortillas, and sauces. We also consume a lot of corn syrup and high fructose corn syrup (HFCS) made from field corn. Corn syrups are used as sweeteners and thickeners, and to keep foods moist or fresh. HFCS is an inexpensive replacement for cane and beet sugars and is therefore the most common sweetener used in processed foods and soft drinks.

Malnutrition and Obesity

Several scientists suspect corn sweeteners play a significant role in the obesity crisis in the United States, and elsewhere. Some critics argue that although it tastes nearly the same, the human body responds differently to HFCS than traditional sugars. They argue that since HFCS replaced cane sugar as the most common sweetener, a variety of health issues have appeared in the US and elsewhere. Of course, the corn industry disputes such charges. Since this is not a biology course, there is no reason to wade into a discussion of human metabolism, but it is appropriate to illustrate how geography partly explains why we use HFCS in such vast quantities.

Several reasons explain the use of HFCS, rather than granulated sugars, including cane sugar and beet sugar. Cost is the apparent reason, but why HFCS is cheaper has a lot to do with geography. First, corn grows well in much of the US, so farmers can flood the market and drive down prices. Sugar cane and sugar beets, on the other hand, are less well adapted to American climates. Sugar cane grows best in a rainy climate, and to be profitable requires a very long, warm growing season. Only Hawaii, parts of Texas, Louisiana and Florida can profitably produce sugar cane. Cane yield is highly dependent on climate, and only Hawaii’s climate is ideal in the US. Cane yields in Hawaii are triple those in Louisiana. Sugar beets are more widely grown in the US because they grow well in multiple climates. California and Minnesota both produce sugar beets. Half of the US granulated sugar production is made from beets. Climate and labor conditions outside the US make foreign sugar much cheaper than domestic sources.

The other main reason HFCS is far less expensive than granulated sugar is US government policies. First, the government provides massive subsidies to the corn industry, helping drive down the price of HFCS. At the same time, the US government provides special subsidies to cane sugar producers through tax breaks and incentives. The US government even buys sugar that farmers cannot sell at an above world market price. More importantly, the US government restricts sugar imports, mainly from Cuba, an otherwise cheap source of sugar for Americans. These trade protection policies help sugar farmers, but food processors and consumers wind up paying higher prices for cane sugar and sugar-sweetened foods than they would under free market conditions. As a result, food processors use HFCS.

The nearly $8 billion in subsidies paid to corn farmers is four times greater than the amount paid to the sugar beet and cane industries. This has consequences. One is that there is a considerable surplus of corn. In 2014, there were about 1.63 billion bushels of corn left unsold. Some years it is higher. One side effect is that people directly eat only a tiny fraction of the field corn grown in the US. About half of the yearly field corn crop is used to make biofuels, particularly ethanol, which many petroleum companies blend with gasoline. If you own a car, corn is probably in your gas tank, and in your lungs if you live in a smoggy location. The other half of the corn crop becomes animal feed. Farmers use both the grain and the silage to feed cattle, and they feed corn to chickens and hogs as well. Even cat and dog foods often have corn in them.

Exceptionally cheap corn helps make meat less expensive than many other types of food. College students on a budget already know that it is a lot cheaper to buy lunch at a local fast-food burger joint than a healthy green salad. Government policies also shape school lunch programs. Kids get cheap, often unhealthy, food, and agribusiness benefits in return. In 2011, the US Congress even declared pizza sauce and ketchup “vegetables” for the sake of school lunches to help specific agribusiness interests. The inexpensiveness of unhealthy meats and grains increases the incentives for their consumption, often in the form of fast food. In impoverished regions of the US, fast food is more widely available than elsewhere. Spatially, we can track the impact of these agricultural policies on the geography of the United States.

The economics of agriculture does not just impact our waistlines; it also impacts who farms the land. Small-scale farms are far more affected by fluctuations in the price of their goods because they often depend on one specific product. Conversely, large-scale commercial farms can spread their economic risk among several products, larger stock, or even multiple locations in case of a devastating weather event, crop catastrophe, or price fluctuation. An example of this can be seen in dairy farming in the United States.

A significant transformation of dairy farming has reduced the number of farms by nearly 60 percent over the past 20 years, even as total milk production increased by one-third. Recent results from the Census of Agriculture and the Agricultural Resource Management Survey (ARMS) detail how and why the structure of dairy production has changed.

The mean herd size of dairy farms rose from 61 cows in 1992 to 144 in 2012, but most cows are now on farms that are much larger than average. The midpoint herd size is used to track where the cows are; the midpoint shows the herd size at which half of all cows are in larger herds and half are in smaller herds. In 1992, the midpoint of 101 cows was not much larger than the mean, reflecting the fact that most cows were on small and mid-size dairy farms. However, the midpoint rose sharply over the next two decades, to 900 cows by 2012, over six times larger than the mean herd size.
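
The midpoint is a cow-weighted median rather than a simple average of farms, which is why it can sit so far above the mean. A minimal sketch with invented herd sizes (not Census of Agriculture data) shows the difference:

```python
# Midpoint herd size: the herd size at which half of all COWS are in larger
# herds and half in smaller ones (a cow-weighted median). Herd sizes below
# are invented for illustration; they are not Census of Agriculture data.

def midpoint_herd_size(herd_sizes):
    herds = sorted(herd_sizes)
    half_the_cows = sum(herds) / 2
    counted = 0
    for size in herds:
        counted += size
        if counted >= half_the_cows:
            return size

herds = [20, 30, 40, 50, 60, 1000, 2000]  # two big farms hold most of the cows
print(sum(herds) / len(herds))      # mean herd size: ~457
print(midpoint_herd_size(herds))    # midpoint: 2000, far above the mean
```

When a few large operations hold most of the cows, the midpoint races ahead of the mean, exactly the pattern the USDA figures above describe.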

In the simplest terms, your milk is most likely coming from a large-scale commercial farm rather than your local family-owned dairy. Check out www.whereismymilkfrom.com.

The economics of dairy farming primarily drives the shift to larger dairy farms. Average costs of production, per gallon of milk, are lower in larger herds because production and distribution are more efficient. These costs include the estimated costs of the farm family’s labor as well as resource costs.

The cost differences reflect differences in input use; on average, larger farms use less labor, capital, and feed per gallon of milk produced. This is known as economy of scale, and it is the reason starting and maintaining small and mid-sized farming operations can be so difficult. A large dairy owner can make a deal with other farmers to purchase enormous amounts of corn, soybeans, and hay at a discount to feed their milk cows, while a small-scale farmer is more likely to pay a higher retail price. In addition to the costs associated with running a commercial agricultural operation, small-scale dairy farmers are profoundly impacted by the price of milk.

Many factors influence milk prices in the United States, including state and federal programs designed to ensure that milk prices do not fall so low that dairy producers cannot cover the cost of production. Non-governmental organizations, such as dairy cooperatives, also play a role in determining minimum pricing. Based on August 2016 price estimates from the USDA, U.S. farmers and ranchers received about 17.4 cents for every $1 spent by consumers on food at the retail level. More than 80 cents of each $1 went to marketing, processing, wholesaling, distribution, and retailing. A producer’s share of a gallon of fat-free milk selling for $3.99 at retail was $1.47, or about 37 percent. Figure 5.22 illustrates efforts aimed at persuading policymakers to change how milk prices are set so that small-scale farmers can stay competitive with large-scale operations.
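
The 37 percent figure follows directly from the numbers quoted above; a quick arithmetic check:

```python
# Producer share of the retail milk dollar, using the USDA figures above.
retail_price = 3.99        # gallon of fat-free milk at retail
producer_receipts = 1.47   # what the producer received
print(f"{producer_receipts / retail_price:.0%}")  # 37%; the remaining ~63%
                                                  # covers marketing, processing,
                                                  # wholesaling, and retailing
```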

Spatial Geography of Food

The transformation of agriculture into large-scale agribusiness has created a complex system linking food production with consumers. Here is how we think of our modern food system:

Farmers/Growers → A Miracle Occurs → Consumers

This miraculous system, which causes food to appear in grocery stores, is an illusion. Somehow, we imagine the farmer pulling his or her truck up behind the supermarket and unloading baskets of fresh fruits and vegetables or sides of beef and pork into the open arms of the retailer and his staff. Moreover, frankly, the bigger the supermarket, the more likely there will be signs, photos, and even wall-sized murals showing farmers and ranchers smiling as they offer their vegetables and fruit or stand with an arm around the neck of a sleek beef cow. In reality, the path our food takes to get to our plates is more like a messy game of hopscotch.

A geographer thinks of these complex supply chains in a global spatial context. Each stop along the way from food producers to consumers represents part of the agricultural landscape. So, how far does our food travel before it gets to our plates?

Consider the journey of a Washington apple. Washington State is one of the largest producers of apples in the United States (Figure 5.21); however, the processing of apples for juice and apple sauce occurs all across the country, with one of the largest operations being Knouse Foods in Pennsylvania. That means if you live next to an orchard in Wenatchee, Washington, and you go to the local grocery store for applesauce, it is likely to have traveled about 5,300 miles from Washington to Pennsylvania and back again.

This is not the exception, but rather the rule, in our current food system. Shipping food long distances for processing and packaging, importing, and exporting foods that do not need to be imported or exported – these are standard practices in the food industry. According to one report, in 1996, Britain imported more than 114,000 metric tons of milk. Was this because British dairy farmers did not produce enough milk for the nation’s consumers? No, since the UK exported almost the same amount of milk that year, 119,000 tons.

Food has moved around the world ever since Europeans brought tea from China, but efficient modern transportation and bioengineering have made it more practical to bring food from distant places where labor costs and farm expenses may be cheaper. Nowadays, it is not only tropical foodstuffs such as sugar, coffee, chocolate, tea, and bananas that are shipped long distances to reach our tables, but also fruits and vegetables that once grew locally, in household gardens and on small farms. An apple imported to Washington from New Zealand is often less expensive than an apple from the historic apple-growing county of Okanogan, just a few hours away from Seattle. Moreover, the global diffusion of mega-marts like Costco and Walmart has only accelerated this trend.

It is estimated that the average American meal travels about 1,500 miles to get from farm to plate. Why is this cause for concern? There are many reasons:

This long-distance, large-scale transportation of food consumes large quantities of fossil fuels. It is estimated that we currently put almost 10 kcal of fossil fuel energy into our food system for every 1 kcal of energy we get as food.

Transporting food over long distances also generates great quantities of carbon dioxide emissions. Some forms of transport are more polluting than others; airfreight generates 50 times more CO2 than sea shipping. However, sea shipping is slow, and as demand for fresh food grows, food is increasingly shipped by faster, more polluting means.

To transport food long distances, much of it is picked while still unripe and then gassed to “ripen” it after transport, or it is highly processed in factories using preservatives, irradiation, and other means to keep it stable for transport and sale. Scientists are experimenting with genetic modification to produce longer-lasting, less perishable produce.

Food Security

With all of this food being shipped around the world, the question must be asked, “Why are there still hungry people in the world?” That is a complex and highly debated question right now. First, we need to look at the production of food by global region, because it shows some notable patterns.

First, it is essential to understand the graph. The index of 100 refers to the base level of production in 1961, so any movement away from the base level can be read as a percentage change. For example, world output increased by 140 percent from 1961 to 1999. The vast majority of this increase was a result of increases in Asia, where we can see almost a 75 percent increase in food production. In contrast, Africa shows a general decline of 10 percent by 1999. The graph also shows that food production is quite variable over time. Most regions, except for Asia, have experienced periods of increased output and periods of decline.
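
Reading an index like this is simple arithmetic: each year's output is divided by the 1961 output and multiplied by 100. A small sketch with invented production figures, chosen only so the result mirrors the 140 percent world increase mentioned above:

```python
# Reading a production index with base year 1961 = 100. The output figures
# are invented; only the resulting index values mirror the text above.

production = {1961: 500, 1980: 800, 1999: 1200}  # e.g., million tonnes
base = production[1961]

for year, output in sorted(production.items()):
    index = output / base * 100
    print(f"{year}: index {index:.0f} ({index - 100:+.0f}% vs. 1961)")
# 1961: index 100 (+0% vs. 1961)
# 1980: index 160 (+60% vs. 1961)
# 1999: index 240 (+140% vs. 1961)  <- a 140 percent increase since 1961
```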

There are several essential points of reference. First, there are only a few net exporters of food, the main countries being the USA, Canada, France, Germany, Poland, Brazil, China, and Australia, with some other South American and Southeast Asian economies also net exporters of food. The most striking pattern in the map is the reliance of almost the entire African continent on food imports.

Traditionally, developing countries as a whole have had a net surplus in agricultural trade. However, that agricultural trade balance gradually dwindled until, by the mid-1990s, it was more often negative than positive. Unfortunately, this overall trend masks a complex picture that varies from one commodity to another and from one country to another. The drastic decline in developing countries’ net surplus in sugar, oilseeds, and vegetable oils, for example, reflects growing consumption and imports in several developing countries and the effects of protectionist policies in the major industrial countries. For commodities produced almost entirely in developing countries and consumed predominantly in the industrial countries, such as coffee and cocoa, slow growth in demand prevented the trade balance of the developing countries from improving. Fluctuating prices further contributed to the problem.

Globally, there is enough land, soil, and water, and enough potential for further growth in crops, to make the necessary production possible. Harvest growth will be slower than in the past, but at the global level this is not necessarily a problem: producers have satisfied market demand in the past. However, supply and demand do not represent the total need for food and other agricultural products worldwide, because hundreds of millions of people lack the money to buy what they need or the resources to produce it themselves.

We can produce enough food in the world as a whole, but there will still be problems of food security at the household or national level. In urban areas, food insecurity usually reflects low incomes, but in poor rural areas, it is often inseparable from problems affecting food production. In many areas of the developing world, the majority of people still depend on local agriculture for food and livelihoods, but the potential of local resources to support further increases in production is minimal, as technology to produce more abundant crops is limited. Examples are semi-arid areas and areas with problem soils. In such areas, agriculture often depends on global policies and on economic and technological aid.

Food Price Index

Food concerns can be monitored and addressed by analyzing the food price index, which the Food and Agriculture Organization of the United Nations (FAO) describes as “a measure of the monthly change in international prices of a basket of food commodities. It consists of the average of five commodity group price indices, weighted with the average export shares of each of the groups.” There is great concern that, globally, food prices are rising, making it harder for families to purchase quality food and raising concerns about global food insecurity.
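
The definition amounts to a weighted average. A minimal sketch of the calculation, using placeholder group indices and export-share weights rather than actual FAO data:

```python
# FAO-style food price index: the average of five commodity-group price
# indices weighted by average export shares. All numbers are placeholders,
# not actual FAO data.

group_indices = {"cereals": 110.0, "vegetable oils": 130.0, "dairy": 105.0,
                 "meat": 115.0, "sugar": 140.0}
export_shares = {"cereals": 0.30, "vegetable oils": 0.20, "dairy": 0.15,
                 "meat": 0.25, "sugar": 0.10}  # shares sum to 1.0

food_price_index = sum(group_indices[g] * export_shares[g] for g in group_indices)
print(f"food price index: {food_price_index:.1f}")  # 117.5
```

Weighting by export shares means a price jump in a heavily traded group, such as cereals, moves the overall index more than the same jump in a lightly traded one.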

At the World Food Summit in 1996, the World Health Organization (WHO) defined food security as existing “when all people at all times have access to sufficient, safe, nutritious food to maintain a healthy and active life.” The WHO goes on to say that “food security is a complex sustainable development issue, linked to health through malnutrition, but also to sustainable economic development, environment, and trade.”

FAO estimates that around one billion people are undernourished and that each year, more than three million children die from undernutrition before their fifth birthday. Also, the physiological needs of pregnant and lactating women make them more susceptible to malnutrition and micronutrient deficiencies. Twice as many women suffer from malnutrition as men, and girls are twice as likely to die from malnutrition as boys. Maternal health is crucial for child survival: an undernourished mother is more likely to deliver an infant with low birth weight, significantly increasing its risk of dying.

Role of Women in Agriculture

In emerging countries, rural women and men play different roles in guaranteeing food security for their households and communities. While men grow mainly field crops, women are usually responsible for growing and preparing most of the food consumed in the home and raising small livestock, which provides protein.

Rural women also carry out most home food processing, which ensures a diverse diet, minimizes losses, and provides marketable products. Women are more likely to spend their incomes on food and children’s needs – research has shown that a child’s chances of survival increase by 20% when the mother controls the household budget. Women, therefore, play a decisive role in food security, dietary diversity, and children’s health.

However, gender inequalities in control of livelihood assets limit women’s food production. In Ghana, studies found that insecure access to land led women farmers to practice shorter fallow periods than men, which reduced their yields, income, and the availability of food for the household. In sub-Saharan Africa, diseases such as HIV/AIDS force women to assume more significant caretaking roles, leaving them less time to grow and prepare food. Women’s access to education is also a determining factor in levels of nutrition and child health. Studies from Africa show that children of mothers who have spent five years in primary education are 40 percent more likely to live beyond the age of five.

Having an adequate supply of food does not automatically translate into adequate levels of nutrition. In many societies, women and girls eat the food remaining after the male family members have eaten. Women, girls, the sick, and the disabled are the primary victims of this “food discrimination,” which results in chronic undernutrition and ill-health.

A phenomenon found in many regions and countries today is the trend towards the so-called “feminization of agriculture”: the growing dominance of women in agricultural production and the concurrent decrease of men in the sector. This trend makes it more imperative than ever to take action to enhance women’s ability to carry out their tasks in agricultural production and their other contributions to food security. This development goes hand in hand with the increasing number of female-headed households around the world. A significant cause of both these developments is male out-migration from rural areas to towns and cities, in their own countries or abroad, and the abandonment of farming by men for more lucrative occupations.

In Africa, where women have traditionally performed the majority of work in food production, agriculture is becoming an increasingly female sector. Economic policies favoring the development of industry, and the neglect of the agricultural sector, particularly domestic food production, have led to an exodus of rural people to urban and mining areas to seek income-earning opportunities in mines, large export-oriented commercial farms, fishing enterprises, and other businesses.

While there is still insufficient data to give exact figures on women’s contributions to agricultural production everywhere in the world, the collection of data is increasing. This data, together with field studies and gender analyses, makes it possible to draw several conclusions about the extent and nature of women’s multiple roles in agricultural production and food security. If anything, women’s contributions to farming, forestry, and fishing may be underestimated, as many surveys and censuses count only paid labor. Women are increasingly active in both the cash and subsistence agricultural sectors, and much of their work in producing food for household and community consumption, as important as it is for food security, is not counted in statistics.

6.4 Population and Food Production

Recall that English economist Thomas Malthus (1766-1834) proposed that the world rate of population growth was far outrunning the development of food supplies. Malthus proposed that the human population was growing exponentially, while food production was growing linearly. Below is an example:

  • Today – 1 person, 1 unit of food
  • 25 years from now – 2 persons, two units of food
  • 50 years from now – 4 persons, three units of food
  • 75 years from now – 8 persons, four units of food
  • 100 years from now – 16 persons, five units of food
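
The pattern in this list can be stated compactly: population grows geometrically, doubling every 25 years, while the food supply grows arithmetically, adding one unit per 25 years. The minimal sketch below simply reproduces the table above:

    # Malthus's contrast: geometric population growth versus arithmetic
    # growth in the food supply, in 25-year steps.
    for step in range(5):
        years = 25 * step            # 0, 25, 50, 75, 100 years from now
        population = 2 ** step       # doubles each step: 1, 2, 4, 8, 16
        food = 1 + step              # adds one unit each step: 1 ... 5
        print(f"{years:3d} years: {population:2d} persons, {food} units of food")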

During Malthus’s time, only a few relatively wealthy countries had entered Stage 2 of the demographic transition model, with its high population growth. He failed to anticipate that it would be the emerging countries, thanks to a medical revolution, that would experience the most rapid population growth. Many social scientists and even environmentalists are strong supporters of Malthus’s hypothesis of a coming global food shortage and are taking it several steps further: human population growth and consumption may be outstripping a wide variety of the earth’s natural resources, not just food production. Billions of people may soon be engaged in a search for food, water, energy, and resources. These days, technology allows us to convert food into a fuel called ethanol. In the United States, large amounts of corn are used to create biofuel as a way to reduce our dependence on oil, which has caused global corn prices to rise dramatically. Neo-Malthusians warn that wars and civil violence will increase in the coming years because of these scarcities.

Others discredit Malthus because his hypothesis is based on the world supply of resources being fixed rather than flexible and expanding. Technology may enable societies to be more efficient with scarce resources or allow for the use of new resources that were once not feasible. Some believe population growth is not a bad thing either. A large population could stimulate economic growth and, therefore, the production of food.

Marxists believe that there is no direct connection between human population growth and economic development within an area. In this view, hunger and poverty result from unjust social and economic power structures, reinforced through globalization, rather than from human population growth.

So even with a global community of 7 billion, food production has grown faster than the global rate of natural increase. Better growing techniques, higher-yielding and genetically modified seeds, and the cultivation of more land have helped expand food supplies globally. However, many have noted that food production has started to slow and level off. Without new technological breakthroughs in food production, the food supply will not keep up with population growth.

The third agricultural revolution, also known as the Green Revolution, arose in response to these fears of a Malthusian food crisis. The Green Revolution consists of improvements to agriculture brought about by the application of modern scientific methods to the development of new crop varieties and agricultural inputs. The technologies of the Green Revolution first made their mark in the United States, but the term is most commonly used in reference to their extension to farmers in developing countries.

Taking up Green Revolution technology involves adopting a whole package of inputs — improved seeds, new fertilizers, and new pesticides and herbicides, all of which have been designed to work together. The improved seeds were created through selective breeding and hybridization. The fertilizers and pesticides are composed of artificial chemicals designed to provide just the nutrients that crops need and to target their main pests and weeds. The Green Revolution produced dramatic gains in crop productivity where it was implemented, in some cases doubling or even tripling yields. Norman Borlaug, the agronomist who was the guiding force behind the Green Revolution and one of its most prominent spokespeople, was widely hailed as a hero who saved millions from starvation and won the Nobel Peace Prize.

There are many critics of the Green Revolution. While acknowledging gains in the total food supply, these critics argue that the Green Revolution has several critical shortcomings. The health critiques raise concerns about whether Green Revolution crops are safe to eat. This concern is particularly salient with respect to genetically modified organisms (GMOs). While tests have generally shown GMOs to be safe to eat, critics worry that modified organisms could trigger adverse reactions in people, for example, if a person with a peanut allergy ate corn that had a peanut gene spliced into it. There is also concern that work on improving crops has focused on boosting the size and appearance of fruits, kernels, and the like at the expense of their nutritional value. Finally, health may be affected by the growing style of Green Revolution crops, which aggressively suppresses any organism in the field that could compete with the main crop. For many poor farmers, however, “weeds” are an important supplementary source of food. Ironically, adding vitamin A to rice through genetic modification is proposed as a solution to vitamin A deficiencies that were caused in part by the loss of leafy green “weeds” to Green Revolution herbicides.

Environmental critiques raise questions about whether Green Revolution agriculture is good for the wider environment, which could be affected in several ways. First, the successful use of Green Revolution technology often requires increased use of water. This can deplete water supplies in dry areas (and lead to demands for environmentally disruptive dams to increase the water supply). Pesticides, herbicides, and fertilizers frequently run off the farm into streams, with adverse effects on downstream ecosystems. Green Revolution farming can also, in some cases, pollute and deplete the soil, meaning that the gains in productivity will not be sustainable. There are also concerns about the heavy use of pesticides and herbicides leading to the evolution of chemical-resistant super-bugs and super-weeds. Green Revolution farms can further exacerbate the problems of mono-cropping, converting large areas to farms with very low biodiversity and thus increasing susceptibility to disasters (weather-related, pest infestations, etc.). In the case of GMOs, a major worry is that modified genes will spread beyond the field. Wind and insects can carry plant pollen into neighboring non-GMO fields and non-farm areas. If the plants that receive the pollen cross-breed with the GMOs, the modified gene may become established off-farm, with potentially severe ecological consequences depending on the nature of the gene.

Social critiques center on the economic system that farmers become a part of when they adopt Green Revolution technology. Traditional agriculture was largely self-contained: farmers produced their own inputs by saving seeds from previous harvests to plant the next year, by collecting their own natural fertilizers, and by using household labor to till the fields. However, the improved seeds and the package of chemical inputs that make up the Green Revolution cannot be produced on the local farm. They have to be mass-produced by large agribusiness companies and then sold to farmers. Farmers then become dependent on companies like Monsanto for their inputs and for markets for their products. The contracts that farmers sign with these companies often put small farmers at a disadvantage. Depending on the arrangements made, farmers may then become highly dependent on the international agricultural market, meaning that global shifts in prices for both inputs and farm products can determine their ability to make ends meet.

An emerging trend in agriculture, which is in some ways opposed to but in other ways parallel to the Green Revolution, is the rise of organic agriculture. Organic agriculture is agriculture that avoids the use of “artificial” chemical inputs and genetically modified crops. The organics movement originated as an attempt to avoid the problems arising from the Green Revolution by creating a farming system that works in harmony with the land. This original vision of organic agriculture is reflected, for example, in community supported agriculture programs, which usually practice organic farming. In community supported agriculture, customers buy a “share” or subscription at the beginning of the growing season, then receive a portion of whatever produce the farm manages to grow. This system is meant to spread the risks of farming between farmers and consumers, create a closer bond between the farmer and consumer, and make organic agriculture more profitable. As the popularity of organic food has grown, organics have become big business. Major corporations now coordinate the production of organic ingredients all over the world. Due to the diversity of techniques and differing demands of different crops, there remains much controversy over how well organic farming achieves its goals of reducing its ecological footprint and improving consumer nutrition.

6.5 Environmental Impact of Agriculture

No one disputes that agriculture, and increasingly aquaculture, is essential to supplying the food that sustains the world’s population. Farming is also the world’s largest industry, employing over one billion people and generating over one trillion dollars’ worth of food annually. At the same time, it is the most significant driver of habitat and biodiversity loss around the world.

Agricultural ecosystems provide essential habitats for many wild plant and animal species. This is especially the case for traditional farming areas that cultivate diverse species. However, rising demand for food and other agricultural products has seen the large-scale clearing of natural habitats to make room for intensive monocultures. Recent examples include the conversion of lowland rainforests in Indonesia to oil palm plantations, and of large areas of the Amazon rainforest and Brazilian savanna to soybean and cattle farms. This ongoing habitat loss threatens entire ecosystems as well as many species. Expanding palm oil plantations in Indonesia and Malaysia, for example, pose the most significant threats to endangered megafauna, including the Asian elephant, Sumatran rhinoceros, and tigers.

Aquaculture is also in direct competition with natural marine and freshwater habitats for space. For example, marine fish farms often need the shelter of bays and estuaries to avoid damage from storms and currents. Also, farmed fish need good water quality, frequent water exchange, and other optimal environmental conditions. However, these locations are also very often ideal for wild fish and other marine life. Some European fish farms have been placed in the migratory routes of wild salmon, while in Asia and Latin America, mangrove forests have been cleared to make space for shrimp farms.

On top of habitat loss due to clearing, unsustainable agricultural practices are seeing 12 million hectares of land lost each year to desertification. Desertification is land degradation in arid, semi-arid, and dry sub-humid areas resulting from climatic variations and human activities. Desertification is potentially the most threatening ecosystem change impacting livelihoods of the poor. Persistent reduction of ecosystem services as a result of desertification links land degradation in drylands to loss of human well-being.

When natural vegetation is cleared, and when farmland is plowed, the exposed topsoil is often blown away by the wind or washed away by rain. Erosion due to soy production, for example, results in Brazil losing 55 million tons of topsoil every year. This leads to reduced soil fertility and degraded land. Other significant crops that cause soil erosion include coffee, cassava, cotton, corn, palm oil, rice, sorghum, tea, tobacco, and wheat.

Water resources are also impacted by modern agriculture. Globally, the agricultural sector consumes about 70 percent of the planet’s accessible freshwater and many big food producing countries like the US, China, India, Pakistan, Australia, and Spain have reached, or are close to reaching, their renewable water resource limits.

The leading causes of wasteful and unsustainable water use are:

  • leaky irrigation systems
  • wasteful field application methods
  • cultivation of thirsty crops not suited to the environment.

Unsustainable water use can harm the environment by changing the water table and depleting groundwater supplies. Studies have also found that excessive irrigation can increase soil salinity and wash pollutants and sediment into rivers – causing damage to freshwater ecosystems and species as well as those further downstream, including coral reefs and coastal fish breeding grounds.

Soil carried off in rain or irrigation water can lead to sedimentation of rivers, lakes and coastal areas. The problem is exacerbated if there is no vegetation left along the banks of rivers and other watercourses to hold the soil. Sedimentation causes severe damage to freshwater and marine habitats, as well as the local communities that depend on these habitats. For example, people living in Xingu Indigenous Park in Brazil report declines in fish numbers. This trend is attributed to changes in the courses of waterways resulting from farming-related erosion and the silt deposition this causes. In Central America, plantation soil run-off ends up in the sea, where it affects the Meso-American Reef.

It is not just the eroded soil that is damaging: pesticides and fertilizers carried in rainwater, and irrigation runoff can pollute waterways and harm wildlife. The use of pesticides, fertilizers, and other agrochemicals has increased enormously since the 1950s. For example, the amount of pesticide sprayed on fields has increased 26-fold over the past 50 years.

These chemicals do not just stay in the fields they are applied to. Some application methods, such as pesticide spraying by airplane, lead to pollution of adjacent land, rivers or wetlands.  Pesticides often do not just kill the target pest. Beneficial insects in and around the fields can be poisoned or killed, as can other animals eating poisoned insects. Pesticides can also kill soil microorganisms. Also, some pesticides are suspected of disrupting the hormone messaging systems of wildlife and people, and many can remain in the environment for generations.

Unlike pesticides, fertilizers are not directly toxic. However, their presence in freshwater and marine areas alters the nutrient system, and in consequence the species composition of specific ecosystems. Their most dramatic effect is eutrophication, resulting in an explosive growth of algae due to excess nutrients. This depletes the water of dissolved oxygen, which in turn can kill fish and other aquatic life.

Food production is one of the primary causes of biodiversity loss through habitat degradation, overexploitation of species such as overfishing, pollution, and soil loss. Even though its environmental impacts are immense, the current food system is expected to expand rapidly to keep up with projected increases in population, wealth, and animal-protein consumption.

Sustainable Agriculture Movement

A growing movement has emerged during the past two decades to question the role of the agricultural establishment in promoting practices that contribute to these problems. Advocates argue that not only does sustainable agriculture address many environmental and social concerns, but it offers innovative and economically viable opportunities for growers, laborers, consumers, policymakers and many others in the entire food system.

The “food system” extends far beyond the farm and involves the interaction of individuals and institutions with contrasting and often competing goals including farmers, researchers, input suppliers, farmworkers, unions, farm advisors, processors, retailers, consumers, and policymakers. Relationships among these actors shift over time as new technologies spawn economic, social, and political changes.

Regarding food and agricultural policies, new federal, state, and local government policies are needed to simultaneously promote environmental health, economic profitability, and social and economic equity. For example, commodity and price support programs could be restructured to allow farmers to realize the full benefits of the productivity gains made possible through alternative practices. Tax and credit policies could be modified to encourage a diverse and decentralized system of family farms rather than corporate concentration and absentee ownership. Government and land-grant university research policies could be modified to emphasize the development of sustainable alternatives. Marketing orders and cosmetic standards could be amended to encourage reduced pesticide use.

Conversion of agricultural land to urban uses is a particular concern, as rapid growth and escalating land values threaten farming on prime soils. At the same time, the proximity of newly developed residential areas to farms is increasing the public demand for environmentally safe farming practices. Comprehensive new policies to protect prime soils and regulate development are needed, particularly in California’s Central Valley. By helping farmers to adopt practices that reduce chemical use and conserve scarce resources, sustainable agriculture research and education can play a crucial role in building public support for agricultural land preservation. Educating land use planners and decision-makers about sustainable agriculture is an urgent priority.

Rural communities are often among the poorest locations in the nation. The reasons for the decline are complex, but changes in farm structure have played a significant role. Sustainable agriculture presents an opportunity to rethink the importance of family farms and rural communities. Economic development policies are needed that encourage more diversified agricultural production on family farms as a foundation for healthy economies in rural communities. In combination with other strategies, sustainable agriculture practices and policies can help foster community institutions that meet employment, educational, health, cultural and spiritual needs.

Consumers can play a critical role in creating a sustainable food system. Through their purchases, they send strong messages to producers, retailers, and others in the system about what they think is essential. Food cost and nutritional quality have always influenced consumer choices. The challenge now is to find strategies that broaden consumer perspectives, so that environmental quality, resource use, and social equity issues are also considered in shopping decisions.

Source: UC Sustainable Agriculture Research and Education Program, University of California, Davis, CA

Chapter 5: Sustainable Development

This chapter will look at the geographic elements of industrialization and economic development, including the past and present patterns of industrialization, types of economic sectors, and the acquisition of comparative advantage and complementarity. We will analyze how models of economic development (e.g., Rostow’s stages of economic growth and Wallerstein’s world-systems theory) help to explain why the world is divided into a more developed economic core and a less-developed periphery with, in some cases, a semiperiphery between them.

The analysis of contemporary patterns of industrialization and their impact on development is another important focus. We will use measurements of development (e.g., gross domestic product per capita and the Human Development Index) as tools to understand patterns of economic differences. Additional topics include Weber’s industrial location theory and accounts of economic globalization, which accent time-space compression and the new international division of labor.

Finally, we will examine the ways in which countries, regions, and communities must confront new patterns of economic inequality that are linked to the geographies of interdependence in the world economy. Relevant topics include the global financial crisis, the shift in manufacturing to newly industrialized countries, imbalances in consumption patterns, the role of women in the labor force, energy use, the conservation of resources, and the impact of pollution on the environment and quality of life.

5.1 The Industrial Revolution

The Industrial Revolution began in England, which by 1750 was one of the wealthiest nations in the world and controlled an empire covering one-quarter of the world’s landmass. It started with England’s textile industry, which was struggling to produce goods cheaper and faster for growing consumer markets. Making cloth by hand for pants, shirts, socks, bedspreads, and other domestic items had always required a great deal of skill and time.

As the population grew in England, more people needed textile goods. In the late 18th century, a series of innovations created by savvy businessmen and factory workers solved many of the difficulties in textile production. As the scale of production grew, the factory emerged as a centralized location where wage laborers could work on machines and raw material provided by capitalist entrepreneurs. Cotton led the way: in the 1700s, cotton textiles had many production advantages over other types of cloth. The first textile factory in Great Britain was actually for making silk, but since only wealthy people could afford the product, production remained very low. Cotton, on the other hand, was far less expensive, and it was also stronger and more easily colored and washed than wool or linen.

By the late 18th century, steam power was adapted to power factory machinery, sparking an even more significant surge in the size, speed, and productivity of industrial machines. Heavy industries like ironworking were also revolutionized by new ideas, and new transportation technologies were developed to move products further and faster. Growing businesses soon outstripped the financial abilities of individuals and their families, leading to legal reforms that allowed corporations to own and operate businesses.

Several factors allowed England to lead the Industrial Revolution, and scholars may disagree about which was the most important. However, they agree that the confluence – a coming together – of many factors gave England an enormous commercial and technological head start over the rest of the world.

Nineteenth-century industrialization was closely associated with the rapid growth of European cities during the same period. Cities grew because of the influx of people desiring to take advantage of the factory jobs available in urban areas. Urbanization extended industrialization as factories were built to take advantage of urban workforces and markets.

Industrialization changed the relationship that existed between cities and their surrounding rural areas. In preindustrial times, cities consumed foodstuffs produced in rural areas but produced little that rural areas needed in return. As a result, some historians describe preindustrial cities as “economically parasitic.” Following the Industrial Revolution, cities were able to offer a wide variety of manufactured goods to rural areas, becoming vital centers of production as well as consumption. Europe’s major cities developed during this period. In England, for example, only 9 percent of the population lived in urban areas in 1800; by 1900, some 62 percent were urban dwellers.

Factors Leading to the Industrial Revolution in England

Agricultural Revolution

  • Increased food production to support an increasing population.

Population Growth

  • Freed more people from the countryside to work for wages in the new cities.
  • Increased demand for textile products.

Financial Innovations

  • Such as central banks, stock markets, and joint-stock companies – encouraged people, especially in Northern Europe, to take risks with investments, trade, and new technologies.

Enlightenment and the Scientific Revolution

  • Encouraged scholars and craftspeople to apply new scientific thinking to mechanical and technological challenges.

Navigable Rivers and Canals

  • Quickened the pace and cheapened the cost of transportation of raw materials and finished products.

Coal

  • Plentiful in England and Western Europe.
  • Used in enormous quantities as a source of power – particularly for the steam-powered machinery in textile factories and locomotives.

Iron Ore

  • When Englishman Henry Cort created a way to make iron cheaper and stronger, England no longer needed to import iron ore from other countries.
  • Essential to the development of new machines in factories and transportation.

Government Policies

  • Legal reforms that allowed corporations to own and operate businesses.
  • Patent laws allowed inventors to benefit financially from the “intellectual property” of their inventions.
  • Expanded the Navy to protect global trade.
  • Granted monopolies – exclusive rights – to companies that agreed to explore the world and find resources.

While industrialization alone cannot account for the rapid growth of the European population during the nineteenth century (this growth was underway before industrialization), it is believed to have been responsible for changing patterns of population density on the continent. Between 1750 and 1914, most industrialized nations (England, Belgium, France, Germany) also acquired the highest population densities. This correlation reflects not only the rapid urbanization of these countries but also the high population densities of their urban areas and the improved standards of living associated with industrializing economies.

Working in new industrial cities influenced people’s lives outside of the factories as well. As workers migrated from the country to the city, their lives and the lives of their families were utterly and permanently transformed. For many skilled workers, the quality of life decreased a great deal in the first 60 years of the Industrial Revolution. Skilled weavers, for example, lived well in pre-industrial society as a kind of middle class. They tended their gardens, worked on textiles in their homes or small shops, raised farm animals, and were their own bosses. However, after the Industrial Revolution, the living conditions for skilled weavers significantly deteriorated. They could no longer live at their own pace or supplement their income with gardening, spinning, or communal harvesting.

In the first sixty years or so of the Industrial Revolution, working-class people had little time or opportunity for recreation. Workers spent all the light of day at work and came home with little energy, space, or light to play sports or games. The new industrial pace and factory system were at odds with the old traditional festivals which dotted the village holiday calendar. Plus, local governments actively sought to ban traditional festivals in the cities. In the new working-class neighborhoods, people did not share the same traditional sense of a village community. Owners fined workers who left their jobs to return to their villages for festivals because they interrupted the efficient flow of work at the factories. After the 1850s, however, recreation improved along with the rise of an emerging middle class. Music-halls sprouted up in big cities. Sports such as rugby, cricket, and football became popular. Cities had become the places with opportunities for sport and entertainment that they are today.

There was a necessary trade-off in the Industrial Revolution for the working class. Material standards of living were in some ways improving: more goods were produced, so they were available at lower costs, and factories provided a variety of employment opportunities not previously available. At the same time, working conditions were often horrible, the pay was terrible, and it was often difficult for unskilled workers to move to higher skill levels and escape the working class. The traditional protections of the medieval and early modern eras, such as guilds and mandated wage-and-price standards, were disappearing.

Gradually, very gradually, a middle class, or “middling sort,” did emerge in industrial cities, mostly toward the end of the 19th century. Until then, there had been only two major classes in society: aristocrats born into lives of wealth and privilege, and low-income commoners born into the working classes. However, new urban industries gradually required more of what we today call “white collar” jobs: business people, shopkeepers, bank clerks, insurance agents, merchants, accountants, managers, doctors, lawyers, and teachers. One piece of evidence of this emerging middle class was the rise of retail shops in England, which increased from 300 in 1875 to 2,600 by 1890. Another mark of distinction of the middle class was its ability to hire servants to cook and clean the house from time to time. Not surprisingly, from 1851 to 1871, the number of domestic servants increased from 900,000 to 1.4 million. This small but rising middle class prided itself on taking responsibility for itself and its families, and viewed professional success as the result of a person’s energy, perseverance, and hard work.

In this new middle class, families became a sanctuary from stressful industrial life. The home remained separate from work and took on the role of emotional support, where women of the house created a moral and spiritual safe harbor away from the rough-and-tumble industrial world outside. Most middle-class adult women were discouraged from working outside the home. They could afford to send their children to school. As children became more of an economic burden, and better health care decreased infant mortality, middle-class women gave birth to fewer children.

Ironically, life in the middle class still had its downside. Stuck in a new position in the middle of society, the new middle class was hostile both to the aristocracy and to the lower classes. Its members were angered by their exclusion from political power in a system that still favored aristocrats, for they felt they had the wealth and education to deserve a political voice. They also had contempt for the lower classes, particularly the growing mass of urban poor. In their lifestyles and political positions, they tried to separate themselves from this uneducated and politically powerless herd, with whom they had less and less in common culturally (and who often worked for them in their factories).

By the early twentieth century additional countries, usually culturally associated with Europe, began to industrialize, including Russia, Japan, other nations in Eastern and Southern Europe, Australia, and New Zealand. Britain and the other previously industrialized countries became highly urbanized. The last craft industries, such as shoemaking and glassmaking, became industrialized. The most developed countries, such as the United States, mass-produced consumer goods – such as dishwashers, furniture, and even houses – for the growing middle classes. The service sector grew and matured with jobs for teachers, waiters, accountants, lawyers, police, and clerks. Essential inventions included the assembly line, the automobile, and the airplane. Western countries and businesses typically controlled world trade and took direct or indirect control of critical industries in less developed countries, enriching themselves in the process.

The Industrial Revolution, an era that began in England at the end of the 18th century, has yet to end. Since the 1950s the so-called “Asian Tigers” (Hong Kong, Singapore, Taiwan, South Korea) rapidly industrialized by taking advantage of their educated and cheap labor to export inexpensive manufactured goods to the West. Other countries in Asia and the Americas, such as China, India, Brazil, Chile, and Argentina, began to develop key economic sectors for export in the global economy.

The world moved gradually toward global free trade. Western countries in Europe and North America turned increasingly to service and high-technology economies as manufacturing moved to the cheap labor markets of developing countries. The important new inventions of this phase were the computer and the Internet. This era is now referred to as the “Post-Industrial Age,” since the most developed countries focus on service jobs rather than manufacturing; it is also called the “Information Age.” With only a few exceptions, most impoverished nations have not become wealthy in the fiercely competitive global market, and there is an increasing wealth gap between more developed and less developed countries in the world.

Explaining the Industrial Landscape

Have you ever wondered why Detroit became the “Motor City,” known for automobile manufacturing in the United States, why Pittsburgh is known for steel production, or why Hollywood became the entertainment capital of the world? In the early years of the twentieth century, when cars were assembled by hand and many of their components were made of wood, automobile manufacturers were located in many different places: one brand of car was made in San Francisco, another in Massachusetts, and yet another in Indiana. By the end of World War I, Detroit was becoming the center of automobile production in America. In the early days of silent films, Flagstaff, Arizona, was the site for the production of several movies, because many of the early films were about life in the West. Within a few years, however, the film industry had abandoned Flagstaff in favor of the Los Angeles Basin of California. In colonial times, the steel production center of North America was in Massachusetts; during the last half of the 19th century, Pittsburgh, Pennsylvania replaced Massachusetts as the steel center of America. Why did these changes take place? Of course, many variables determine whether an industry will prosper, but location is one of the most important. Over the years, geographers have focused on several fundamental industrial location theories to explain why businesses and industries are located where they are and to predict which locations help a business succeed. Von Thünen made the first efforts to identify the factors that account for the locations of industries. His ideas gave rise to the subsequent work of German scholars such as Wilhelm Launhardt and Alfred Weber, who were instrumental in the development of Least Cost Theory.

Alfred Weber’s first significant work on industrial location theory was published in 1909, in which he predicted that industries would locate in the places that would cost them the least. He took for granted that industries are naturally competitive and aim to minimize their costs and maximize their profits. Much like Von Thünen, Weber did not try to explain actual real-world locations but instead concentrated on identifying the factors that influence all industrial-location patterns. According to Weber, three main factors influence industrial location: transport costs, labor costs, and agglomeration economies.

Transportation

Weber felt that transportation was the most substantial factor in determining location and that industries wanting to minimize transportation costs must consider two issues: the distance goods must be transported to the market and the weight of the goods. Regardless of the mode (ship, rail, truck, or air), transportation cost is determined by the weight of the goods being shipped and the distance they are shipped: the heavier the goods and the farther the distance, the higher the cost.

In one scenario, the weight of the final product is less than the weight of the raw materials going into making the product; this is the weight-losing industry. For example, in the copper industry, it would be costly to haul raw materials to the market for processing, so manufacturing occurs near the raw materials. Besides mining, other primary activities (or extractive industries) are considered material-oriented: timber mills, furniture manufacture, most agricultural activities, and so on. Often located in rural areas, these businesses may employ most of the local population, and when they leave, the local area loses its economic base.

In the other scenario, the final product is as heavy as or heavier than the raw materials that require transport. Usually, this is a case of some ubiquitous raw material, such as water, being incorporated into the product. This is called the weight-gaining industry, and it tends to build up near its market. (An industry that is tied to neither a market nor a raw material source, because transport costs matter little to it, is sometimes called a foot-loose industry.) In some industries, like the heavy chemical industry, the weight of the raw materials is less than the weight of the finished product; these industries always grow up near the market.
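
To make Weber’s weight logic concrete, the minimal sketch below compares total shipping costs for a hypothetical weight-losing industry; the freight rate, weights, and distance are invented for illustration only.

    # Minimal sketch of Weber's weight-based location logic with invented
    # numbers: 10 tons of ore yield 1 ton of refined metal, and the mine
    # is 400 km from the market.

    def transport_cost(weight_tons, distance_km, rate_per_ton_km=0.05):
        # Weber: cost grows with both the weight shipped and the distance.
        return weight_tons * distance_km * rate_per_ton_km

    ore_weight, metal_weight, distance = 10, 1, 400

    at_mine = transport_cost(metal_weight, distance)   # ship only the light product
    at_market = transport_cost(ore_weight, distance)   # haul all the heavy ore

    print(f"Process at the mine:   ${at_mine:.2f} per ton of metal")
    print(f"Process at the market: ${at_market:.2f} per ton of metal")
    # The weight-losing industry minimizes cost by locating near the raw
    # material; a weight-gaining industry shows the reverse pattern.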

Labor

Because labor costs vary from place to place, and because these differing labor costs are the product of variances in wage rates and worker efficiencies, Weber thought of labor as a distortion of the original pattern that was driven by transportation costs. Accordingly, after finding the best location relative to transportation costs, he considered how labor costs influenced the location of factories and plants. To do this, Weber plotted the spatial variances of transportation costs to create a transport cost surface. He then contrasted regional labor costs with the regional pattern of transportation costs.

Weber noted that as transportation systems became more efficient, and hence less expensive to use, labor costs came to more heavily influence industrial locations. He also found that industries dominantly affected by labor costs tend to concentrate in a few places. Therefore, lower transportation costs tend to intensify the natural tendency of like industries to agglomerate in one location.

Agglomeration

Weber also employed a classification system based on local and regional factors. Local factors included the influences of agglomeration and deglomeration. Similar businesses typically gain an advantage when they cluster, or agglomerate (centralize), in a specific location. Deglomeration is the tendency of industries to decentralize or disperse from a given location when rent becomes too expensive and cuts into profits.

Weber argued that there are two significant ways in which firms benefit from agglomeration. In the first place, it could bring about the enlargement of a factory, thereby leading to more significant economies of scale. Additionally, agglomeration allows similar industries to benefit from being near one another. This is because they can share specialized facilities, services, and equipment. In his analyses, Weber considered only “pure” or “technical” agglomeration. He did not examine the impacts of “accidental” agglomeration (concentrations that occur for other than reasons associated with spatial economics).

In Weber’s basic industrial location model, there is only one specific market location, and one of the assumptions of this model is that all transactions take place on this site. Moreover, Weber assumed that there would be no limit to the quantity of the product that would be purchased at the specified price (in other words, in Weber’s model, the price of a good did not affect demand). Of course, Weber knew this did not reflect real-world conditions, but he made these assumptions in order to simplify the model. Other scholars, however, were convinced that, in making this assumption, Weber greatly limited the accuracy of his model. After all, demand is not confined to one single site but is instead scattered unevenly throughout a region.

Moreover, it is seldom true that buyers are confined to only one retail merchant. Instead, they usually have several choices and, if all else is equal, they will choose the closest establishment from which to make a purchase. Even so, better prices and services may offset the costs of distance. This is in keeping with the common advertisement slogan of automobile dealers located outside the boundaries of a city, “drive a little, save a lot!”

During the last years of the 20th century, developments in transportation diminished the relevance of Weber’s theory. In the first place, freight rates have risen more slowly than the costs of raw materials, so relative transportation costs are declining. This means that the impacts of transportation costs on industrial location and market analysis are less significant than they were at the beginning of the 20th century, when Weber first articulated his theory. In addition, natural resources are now less critical because smaller, lighter, and smarter products have replaced the bulkier products of the past. In particular, plastics and lighter materials made from soybeans, petroleum, and other fibers have replaced the use of steel and wood. As a result, furniture and appliances are lighter (and sometimes stronger), and even automobiles now use a great deal of plastic and other fibrous materials as a substitute for steel. It is far less costly to move petroleum through pipelines, or to ship plastics, than it is to ship wood, iron ore, and steel.

Currently, labor tends to be the most important determinant of industrial location. This is particularly true for firms that produce expensive, high-tech goods. For most of these firms, transportation costs are of minor importance. In part, this is because high-tech goods are usually relatively light and small. This is nothing new, however. Long ago, the Swiss figured out that as a land-locked mountainous nation, they could not competitively ship their dairy products to foreign markets. Therefore, they processed liquid milk into far less bulky cheese and chocolate. They also realized that anything they manufactured should have a high value relative to its bulk and weight. Thus, instead of making automobiles or steam trains, they made timepieces. Even the Dutch, with access to excellent ports and water transportation, realized the benefits of shipping high-value, low-bulk products. Thus, they processed diamonds and focused on flower bulbs, cheese, and chocolate. In recent years, firms have developed many new and innovative ways in which to avoid transportation costs. For example, soft drink manufacturers do not ship full bottles of their products all over the world. Instead, they ship containers of syrup to local bottling plants where water is added to the syrup.

Core-Periphery Spatial Relationships

One key to understanding industrialization from a geographer’s perspective is thinking about core-periphery spatial relationships at both a local and global scale. On a local scale, there is generally a core area, sometimes known as the central business district, and a hinterland, a German term meaning “the land behind.”

The hinterland is more sparsely populated than the core and is often where goods sold in the core are manufactured. It might include rural farmland, for example. The core, on the other hand, is the commercial focus for the area where most goods and services are exchanged. The hinterland relies on the central city to sell its goods, but similarly, the city relies on the hinterland to produce raw materials. Consider where the hinterland is located around your closest city; the hinterland is characteristically rural, while the core is urban.

The city of Walla Walla in southeastern Washington is an excellent example of this. Walla Walla has a population of about thirty thousand people and is the only significant town in its county. With a prestigious college, a community college, the state prison, a regional hospital, and retail services, Walla Walla serves as a core hub for the surrounding periphery. The hinterland of Walla Walla, with an agricultural economy based on the production of onions, wine grapes, and asparagus, along with ranching, is typical of a peripheral region. The city of Walla Walla has the political, economic, and educational power that serves the people of its local area.

Globally, we can apply the hinterland-city model to an understanding of a global core and a global periphery. The core areas are places of dominance, and these areas exert control over the surrounding periphery. Core areas are typically more developed and industrialized, whereas the periphery is more rural and generally less developed. Unlike the interactions between the city and the hinterland, the economic exchange between the core and periphery is characteristically one-sided, creating wealth for the core and patterns of uneven development.

The periphery countries and the core countries each have unique characteristics. Peripheral locations are providers of raw materials and agricultural products. In the periphery, more people earn their living in occupations related to securing resources: farming, mining, or harvesting forest products. For the workers in these occupations, the profits tend to be marginal, with fewer opportunities to advance. In the periphery, there is a condition known as brain drain, which describes a loss of educated or professional individuals. Young people leave the peripheral areas for the cities to earn an education or to find more advantageous employment. Few of these individuals share their knowledge or success with their former community.

Brain drain also happens on an international level – that is, students from periphery countries might go to college in core countries, such as the United States or countries in Europe. Many international college graduates do not return to their poorer countries of origin but instead choose to stay in the core country because of the employment opportunities. This is especially true in the medical field. There is little political power in the periphery; centers of political power are almost always located in the core areas or at least dominated by the core cities. The core areas pull in people, skills, and wealth from the periphery. Lack of opportunities in the periphery pushes people to relocate to the core.

However, these interactions do sometimes contribute to economic stability in the periphery. Some argue that it benefits the core countries to keep the periphery peripheral; in other words, if the periphery can remain underdeveloped, they are more likely to sell cheap goods to the core. This generates more wealth for core areas and contributes to their continued influence and economic strength.

5.2 Economic Geography

The Economics of Geography

It is easier to understand why people move from rural to urban areas, from the periphery to the core, and from Mexico to the United States when one begins to understand the global economy. Economic conditions are connected to how countries gain national income, opportunities, and advantages. One way of acquiring wealth is simply by taking someone else’s: a group of armed individuals attacks another group and takes its possessions or resources, most commonly through warfare. Unfortunately, this pillage-and-plunder type of activity has been a standard way of gaining wealth throughout human history.

The taking of resources by force or by war is frowned upon today by the global economic community, though it still occurs. Piracy, for example, is still practiced on the high seas in various places around the globe, particularly off the coast of Somalia.

The main methods countries use to gain national income are based on sustainable domestic income models and value-added principles. The traditional three areas of agriculture, extraction/mining, and manufacturing are a result of primary and secondary economic activities. Natural resources, agriculture, and manufacturing have been traditionally targeted as the means to gain national income. Postindustrial activities in the service sector, including tertiary, quaternary, and quinary economic activities, have exploded in the past seventy-five or so years.

Services constitute over 50 percent of income to citizens in low-income nations. The service economy is also crucial to growth; for instance, it accounted for 47 percent of economic growth in sub-Saharan Africa over the period 2000–2005, while industry contributed 37 percent and agriculture 16 percent in the same period. This means that recent economic growth in Africa relies as much on services as on natural resources or textiles, despite many of those countries benefiting from trade preferences in primary and secondary goods. As a result, employment is also adjusting to the changes, and people are leaving the agricultural sector to find work in the service economy. This job creation is particularly useful because it often employs low-skilled labor in the tourism and retail sectors, thus benefiting the poor and representing an overall net increase in employment.

Places around the world have sometimes been named after the methods used to gain wealth. For example, the Gold Coast of western Africa received its label because of the abundance of gold in the region. The term breadbasket often refers to a region with abundant agricultural surpluses. Another example is the Champagne region of France, which has become synonymous with the beverage made from the grapes grown there. The so-called banana republics earned their name because large fruit plantations were the primary income source for the corporations that operated them. Places such as Copper Canyon and Silver City are examples of towns, cities, or regions named after the natural resources found there.

The United States had its Manufacturing Belt, referring to the region from Boston to St. Louis, which was the core industrial region that generated wealth through heavy manufacturing for the greater part of the nineteenth and twentieth centuries.

Countries with few opportunities to gain wealth to support their governments often borrow money to provide services for their people. The national debt, defined as the total amount of money a government owes, is a growing concern across the globe. National income can be consolidated into the hands of a minority of the population at the top of the socioeconomic strata, and these social elites can dominate the politics of their countries or regions. The elites may hold most of a country’s wealth while their government does not collect enough revenue to pay for public services; to pay for those services, the government might need to borrow money, which increases the national debt. A government can carry a high national debt even when the country is home to many wealthy citizens or a growing economy. Taxes are a standard method for governments to collect revenue, but if economic conditions decline, the amount of taxes collected can also decline, leaving the government with a shortfall. Again, the government might borrow money to continue operating and to provide the same level of services. Political corruption and the mismanagement of funds can also leave a country’s government without the revenue it needs to provide services to its citizens.

Many governments have problems paying their national debt or even the interest on it. Governments whose debt has surpassed their ability to pay have often inflated their currency to increase the amount of money in circulation, a practice that can lead to hyperinflation and, eventually, the collapse of the currency, with serious adverse effects on the country’s economy. In contrast to the national debt, the term budget deficit refers to a single year’s accounting: the amount by which a government’s spending exceeds the revenue it takes in during a given fiscal year.
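
The distinction can be shown with a small worked example using entirely hypothetical figures: each year’s deficit is spending in excess of revenue, and the national debt is the running total of those shortfalls.

    # Hypothetical illustration of annual budget deficits accumulating
    # into a national debt. All figures are invented (billions).
    spending = [100, 105, 112, 120]   # annual government spending
    revenue = [95, 101, 104, 109]     # annual tax revenue

    debt = 0
    for year, (s, r) in enumerate(zip(spending, revenue), start=1):
        deficit = s - r               # one fiscal year's shortfall
        debt += deficit               # the debt is the running total
        print(f"Year {year}: deficit = {deficit}B, national debt = {debt}B")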

The Geography of Economics

The Industrial Revolution, which prompted the shift in population from rural to urban, also encouraged market economies, which have evolved into modern consumer societies. Various theories and models have been developed over the years to help explain these changes. For example, in 1929, the American demographer Warren Thompson developed the demographic transition model (DTM) to explain population growth based on an interpretation of demographic history.

In the 1960s, economist Walt Rostow adapted Warren Thompson’s demographic transition model to outline a pattern of economic development that has become one model for growth in a global economy. Rostow’s model described five stages of growth in the economic modernization of a country:

  • Traditional society
  • Preconditions for take-off
  • Take-off
  • Drive to maturity
  • Age of high mass consumption

The human development index (HDI) was developed in 1990 and is used by the United Nations Development Program to measure a standard of human development, which refers to the widening opportunities available to individuals for education, health care, income, and employment. The HDI incorporates variables such as standards of living, literacy rate, and life expectancy to indicate a measure of well-being or the quality of life for a specific country.

The human development approach, developed by the economist Mahbub Ul Haq, is anchored in the Nobel laureate Amartya Sen’s work on human capabilities, often framed in terms of whether people can “be” and “do” desirable things in life.

Examples include:

  • Beings: well-fed, sheltered, healthy
  • Doings: work, education, voting, participating in community life.

Freedom of choice is central to the approach: someone choosing to be hungry (during a religious fast, say) is quite different from someone who is hungry because they cannot afford to buy food.

Ideas on the links between economic growth and development during the second half of the 20th century also had a formative influence. Gross Domestic Product (GDP) and economic growth emerged as leading indicators of national progress in many countries, yet GDP was never intended to be used as a measure of wellbeing. In the 1970s and 80s, the development debate considered alternatives that went beyond GDP, first putting greater emphasis on employment, then on redistribution with growth, and then on whether people had their basic needs met. These ideas helped pave the way for human development (both the approach and its measurement).

One of the more notable achievements of the human development approach has been to ensure a growing acceptance of the fact that monetary measures, such as GDP per capita, are inadequate representations of development. This measure of human development remains a simple unweighted average of a nation’s longevity, education, and income and is widely accepted in development discourse. Over the years, however, some modifications and refinements have been made to the index. Indeed, the critics of the HDI and their concerns have stimulated, and continue to stimulate, adjustments to the index and the development of companion indices, which help paint a broader picture of global human development.
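
As a rough illustration of how such an index can be built, the sketch below computes a toy HDI as an unweighted average of three normalized indices, as the text describes. The indicator values and the minimum/maximum goalposts are made up for illustration, and the UN's current index includes refinements (such as a geometric rather than arithmetic mean) that this sketch omits:

```python
# A toy version of the HDI described above: an unweighted average of
# normalized longevity, education, and income indices. Values and
# goalposts are illustrative, not official UN figures.

def normalize(value, minimum, maximum):
    """Scale a raw indicator onto a 0-1 index between its goalposts."""
    return (value - minimum) / (maximum - minimum)

life_index = normalize(72.0, 20.0, 85.0)       # life expectancy, years
education_index = normalize(12.0, 0.0, 18.0)   # mean years of schooling
income_index = normalize(15000, 100, 75000)    # income per capita, dollars

toy_hdi = (life_index + education_index + income_index) / 3
print(f"Toy HDI: {toy_hdi:.3f}")
```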

The HDI emphasizes that people and their capabilities should be the ultimate criteria for assessing the development of a country, not economic growth alone. The HDI can also be used to question national policy choices, asking how two countries with the same level of GNI per capita can end up with different human development outcomes. These contrasts can stimulate debate about government policy priorities.

Jobs can be classified into three major sectors, which greatly influence the economics, standards of living, trade, and even social classes within a society. The first is the primary sector, which comprises jobs directly related to the extraction of the Earth’s natural resources (e.g., forestry, raw materials, or agriculture). In the secondary sector, jobs focus on manufacturing raw materials from the primary sector into usable products. The tertiary sector provides goods and services to people in exchange for payment; these jobs include lawyers, doctors, educators, bankers, retailers, athletes, and others.

It is probably apparent that the majority of the jobs in more developed countries (MDCs) are tertiary. There are primary and secondary sector jobs in countries like the United States, but the driving economic force is the tertiary sector. MDCs are also more productive than LDCs, not because their people work harder, but because of their access to and use of technology. In economics, productivity is the value of a product compared to the amount of labor needed to make it.

MDCs can also invest more money and resources because of their economies; thus their people tend to be more educated and healthier, children are more likely to survive, and adults tend to live longer than those in LDCs. Probably the two most essential components for a nation’s developmental status to begin to rise are education and health care. There is a direct correlation between development and education: the more developed a nation, the more educated its population. One of the best indicators of a nation’s level of development is its literacy rate, the percentage of people who can read and write.

In most developed countries, the literacy rate is usually around 98 percent, whereas in emerging countries it is roughly 60 percent. One impact of this is that books are written for people in MDCs, and scientific advances tend to occur in those countries. As a percentage of GDP, least developed countries spend more on education than most developed countries need to. In LDCs, children going to school often have outdated books that are not written in their primary language. Often in LDCs, more schools are private than public because the government cannot fund them; outside religious groups and nonprofit organizations fund many of these schools.

Access to health care mirrors literacy statistics globally. However, geographers always want to look at these issues from different scales to understand if the patterns at a global scale hold at a regional or national level.

Other measures of development can be utilized to help geographers understand patterns of social and economic differences at a variety of scales. For example, looking at Gross Domestic Income (GDI) per capita gives a global view of the economic status of nations. North America, Northern Europe, Australia, and Japan have relatively stable economies and tend to be political world leaders. Interestingly, Saudi Arabia has a high GDI but is surrounded by countries with weaker economies. What confluence of factors might account for this phenomenon?

At this scale, a geographer might think that a country like Spain, with a strong GDI, also has a healthy economy. However, Spain has struggled to recover from the worldwide recession of 2008 and continues to have large pockets of the population who are chronically unemployed.

At this regional level, we can draw some conclusions about the location of the unemployed, which raise further questions that can be answered from a geographical, spatial perspective, such as:

  • Are unemployed people who are living in cities also in poverty?
  • What kinds of education levels exist among people in those areas?
  • What kinds of social services, if any, are needed in those areas?

As you can see, this type of questioning can help us understand different patterns of social and economic development, as well as influence public policy.

5.3 Human Development Index

Defining Human Development

As a geographically literate scholar and citizen, you should be following current events around the world. If you do, you will undoubtedly hear many discussions about development. You might hear of some countries that are “developing” and other countries that are “developed.” You might also encounter terms like “First World” and “Third World.” You will also hear about how well development in the United States or other countries is going at any given time. Finally, you may hear discussions about specific types of development, such as sustainable development. However, what does all this mean?

It turns out that “development” does not have one single, simple definition. There are multiple definitions and multiple facets to any one definition, and there are numerous, competing opinions on the various understandings of what “development” is. Often, “development” is viewed as a good thing, and it is easy to see why: people in “developed” countries tend to have longer lives, more comfortable housing, more options for careers and entertainment, and much more. However, whether or not “development” is good is ultimately a question of ethics, and just as there are multiple views on ethics, there are various views on whether “development” is good. Later in this module, we will see some cases in which “development” might not be considered good.

The simplest and most common measures for development are those based on monetary statistics like income or gross domestic product (GDP, which measures in monetary terms how much an economy is producing). These monetary statistics are readily available for countries and other types of places across the world and are very convenient to work with. Likewise, it is easy to find a good map of these statistics, such as this one of GDP.

However, statistics like income and GDP are controversial. One can have a high income or GDP and a low quality of life. Put simply, there is more to life than money. Furthermore, monetary statistics often overlook essential activities that do not involve money, such as cooking, cleaning, raising children, and even subsistence farming. These activities are usually performed by women, so a focus on monetary statistics often brings significant underestimates of the contributions of women to society. Finally, high incomes and GDPs are often associated with significant environmental degradation. From an ecocentric ethical view, that is a problem.

Another way of looking at development is one based on health statistics such as life expectancy or child mortality. These statistics show another facet of development. In many cases, those with much money also have better health. However, this trend does not always hold. Take a look at this life expectancy map:

A third way of looking at development is one based on end uses. End uses are the ultimate purposes of whatever our economies are producing. For example, the end uses of agriculture are proper nutrition, tasty eating experiences, and maybe a few other things like the socializing that occurs during meals. The end uses of constructing buildings involve having places for us to be in that are comfortable, productive, and beautiful. For transportation, the end use is being in the places we want to be.

Take a look at the following undernourishment map: How does this map compare to the GDP and Life Expectancy map? What patterns are similar? Is there anything different? While most of the world’s undernourished live in low-income countries, is there an exception?

A focus on end uses gives us a different perspective on development than a focus on money. One can have much money but few end uses. For example, a poorly designed city can force us to spend a lot of money on transportation, and we will still be stuck in traffic a lot. Alternatively, major environmental catastrophes often lead us to conduct much economic activity to clean things up, which can increase GDP. Meanwhile, one can have many end uses without much money. For example, people can grow their own food and have a delicious, nutritious diet without being affluent at all.

At the core of this discussion of development is one very fundamental question: What is it that we ultimately care about as a society? If we ultimately care about money, then the monetary statistics are good representations of development, and we should be willing to sacrifice other things to get more money. Alternatively, if end uses are what we ultimately care about, then it is essential to look beyond monetary statistics and consider the systems of development that bring us the end uses that we want.

What is Development Today?

Hans Rosling is a Swedish demographer and teacher who has gained global fame through lively videos about global demographics, in particular at the TED conferences. If you are not already familiar, TED is an excellent resource of entertaining and informative talks from a great variety of people. Here is a TED talk from Rosling (20:35):

Rosling makes several essential points in this video:

  • Many of us have misperceptions about global demographic data, such as child mortality.
  • The variation within regions (such as sub-Saharan Africa) and within countries can be greater than the variation between different regions or countries.
  • The divide between the more-developed and less-developed countries no longer exists. Instead, there is a continuum of development around the world with no gap in the middle.
  • Quality visualization is essential for understanding and communicating demographic data.

Now, let us take a look at the map of GDP per capita, of course, bearing in mind the limitations of the GDP statistic.

A few points are worth making about this map. First, the map shows GDP per capita, i.e., per person. Per capita statistics are usually more helpful for showing what is going on in a place. Recall the map of world GDP from the previous page. That map would show, for example, that China has a much larger GDP than, say, Switzerland. However, that is because China has a much larger population than Switzerland, not because China has reached a more advanced level of development. Most people would consider Switzerland to be more developed than China.
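
The arithmetic behind this point is simple division, as the sketch below shows. The totals and populations here are illustrative round numbers, not actual IMF figures:

```python
# Why per capita matters: total GDP divided by population.
# Round, illustrative numbers; not actual IMF data.

gdp_billions = {"China": 14_000, "Switzerland": 700}
population = {"China": 1_400_000_000, "Switzerland": 8_500_000}

for country in gdp_billions:
    per_capita = gdp_billions[country] * 1e9 / population[country]
    print(f"{country}: total GDP ${gdp_billions[country]}B, "
          f"per capita ${per_capita:,.0f}")

# China's total GDP is far larger, yet Switzerland's GDP per person
# is several times higher, which matches the point made in the text.
```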

Second, the wealthier areas are North America, Western Europe, Australia, New Zealand, Japan, South Korea, and a few countries in the Middle East. These are the countries that are commonly considered to be “developed.” The rest of the countries are frequently considered to be “developing.” However, there is no clear divide between “developed” and “developing” visible on this map. Instead, there are countries at all points along the continuum from “developed” to “developing.”

Third, there are a few places on the map that are colored gray. These are places where no data is available. Usually, there is a compelling reason for data as essential as GDP to be unavailable. The map here uses data from the International Monetary Fund (IMF), so the gray represents places that the IMF has no data for. Here are probable reasons why some data and information is unavailable for this map: Greenland is not an independent country but is a territory of Denmark. French Guiana (in northern South America) is also not an independent country but is a territory of France. Western Sahara is a disputed territory fighting for independence from Morocco. Somalia and Zimbabwe have dysfunctional governments and probably did not report data to the IMF. Finally, Cuba and North Korea are not part of the IMF. GDP statistics are available for most of these regions from sources other than the IMF.

World Development Throughout History

There is one more point to consider about the GDP map shown earlier: It only shows one point in time. The map tells us something about development around the world today, but it does not explain how we got here. Even the Rosling video, which shows an animation over time, does not offer much in the way of explanation. This leaves out the critical question: Why is it that some countries are more developed, or at least have more money, than others?

Understanding the patterns of development we see today requires understanding the history of development around the world. Historical geography is the study of the historical dimensions of our world and is very important here. It turns out that certain aspects of the environment have played essential roles in the history of development on Earth. This is an ancient story, and it is worth starting at the beginning: at the origin of agriculture. Agriculture is an important starting point for development because the increased food supplies enable larger populations and enable some people to devote their time to tasks other than producing food. This labor specialization is necessary for the diverse other human activities required for development.

Agriculture originated independently in several regions around the world. In the map below, the green areas are regions where agriculture originated, and the arrows show directions that agriculture spread from its areas of origin.

However, all agriculture is not equal. Some agriculture is more productive than others. Likewise, some of these regions where agriculture originated are likely to develop more successfully than others. Key factors include the region’s growing conditions (including temperature, precipitation, latitude, and soils) and the types of plants and animals available for planting and domestication. Many regions had good growing conditions, but of all the regions in the world, one had abundant plants and animals to use. That region is the Fertile Crescent, which is located in the Middle East, as seen on the map above.

Environmental Determinism

The idea that the outcomes of civilization were determined entirely by environmental factors is known as environmental determinism. This hypothesis states that the physical environment for a particular region of the world predetermines the economic and social development trajectory of societies and nation-states.

This idea has been heavily critiqued. Even though ecological factors like the plants and animals available for agriculture can help explain some significant patterns in development, such as why advanced civilization developed in Eurasia but not in Papua New Guinea, it cannot explain everything. For example, it cannot explain the significant differences in development found today between adjacent countries such as the Dominican Republic (richer) and Haiti (poorer) or South Korea (richer) and North Korea (poorer). The distinction between the Dominican Republic and Haiti is even visible from space. Environmental determinism assumes that the environment determines all development and difference, yet some patterns, like what we observe between the Dominican Republic and Haiti, are not explainable by environmental factors alone.

In this image, Haiti is on the left, and the Dominican Republic is on the right. This part of Haiti is almost completely deforested, as is much of the rest of the country, but the deforestation ends abruptly at the political border. From our systems perspective, this is humanity impacting the environment, not the environment affecting humankind. What is essential to understand is that the patterns of development that we see have both environmental and social causes. The environment can explain some of why advanced civilization emerged in Eurasia instead of elsewhere. Still, only social factors can explain why, for example, the Dominican Republic is richer than Haiti or South Korea is richer than North Korea. In other words, environmental resources can contribute to development trajectories, just like many other geographic factors such as culture, climate, topography, and proximity to major waterways. However, no single one of those components is ever the determining factor.

Environmental determinism came to prominence in the early twentieth century, but its popularity declined over time, partly because of its shortcomings and partly because of a recognition that it was often used as a justification for colonial conquest and slavery. In contrast to this unidirectional conceptualization of human-environment relationships, environmental possibilism arose as a milder notion in which environmental constraints are still recognized but the freedom and capability of humans to change and structure the environment are highlighted. Environmental determinism and possibilism represented geographers’ first attempts at generalizing what accounts for the pattern of human occupation of the Earth’s surface in modern times.

Development’s Downsides

Thus far, in the module, we have seen several examples in which development has increased health and quality of life. However, development can also reduce the health and quality of life. Often, when development has these downsides, it is for reasons related to the environment. When development impacts the environment in ways that harm certain groups of people, it raises issues of environmental justice.

First, let us consider some connections between economic development, human health, and justice by completing the following reading assignment:

The fact that poor, and often minority, populations are more likely to live within proximity to facilities that have adverse health effects has helped establish the environmental justice movement. Research on environmental justice has shown that political and economic systems structure the conditions that contribute to poor health and help explain variations within societies in the rates of non-communicable chronic diseases such as diabetes or cancer.

Within the United States, the environmental justice movement has worked to show how the byproducts of development, such as chemical factories, waste facilities, and toxic chemicals, create hazardous conditions for people living near them. Here is one example of environmental justice in the United States; watch this video about Camden in New Jersey (4.5 minutes):

However, environmental justice is not just a domestic American issue. It is also a global issue. The globalized nature of our economy and our environment causes pollution and other environmental indignities to become concentrated in particular world regions. Quite often, those regions are home to the poorest and least powerful of the world’s people. This can be seen in the following video on e-waste (or electronic waste) in Accra, Ghana’s capital city (6 minutes):

When you no longer want an electronic device that you own, what do you do with it? Where does it end up? Does it end up causing harm to other people? Who are these people? Do they deserve to be harmed by your e-waste?

Moreover, what can you do about it? These are all difficult questions raised by our ownership of electronic devices. Furthermore, similar questions are raised by other items that we own and the activities that we pursue.

Finally, it is noteworthy that environmental justice is not only about which populations suffer from the burdens of economic development (also known as environmental bads), but also about who has access to environmental goods that contribute to human health. For example, poor communities and populations of color are often denied access to parks, open space, full-service grocery stores, and hospitals. The environmental justice movement, therefore, has expanded to ask critical questions about which human populations suffer the burdens of economic development and which benefit the most from it.

Human Development Index

To analyze the world based on various levels of development, the term development must be defined. Development is the process of improving the material conditions of people through the diffusion of knowledge and technology. All nations lie somewhere in the range of more developed countries (MDC) to less developed countries (LDC).

A nation’s development level is based on the United Nation’s Human Development Index (HDI), which focuses on economic, social, and demographic development. More specifically, the HDI focuses on a nation’s gross domestic product (GDP) for economics, literacy rates and education for social factors, and life expectancy for demographics. A country’s gross domestic product is the total market value of all officially recognized final goods and services produced within a country in a year. GDP per capita is often considered an indicator of a country’s standard of living.

Standard of Living

Standard of living refers to the level of wealth, happiness, comfort, and material necessities available to a specific socioeconomic class in a particular geographic area. The standard of living includes factors such as income, quality and availability of employment, class disparity, poverty rate, quality and affordability of housing, hours of work required to purchase necessities, GDP, inflation rates, affordability of quality health care, quality and availability of education, life expectancy, incidence of disease, cost of goods and services, infrastructure, national economic growth, economic and political stability, political and religious freedom, and environmental quality, to name a few. It is interesting to note that the United States is not the highest on the HDI. The U.S. ranks 3rd because it is lower in education standards and life expectancy than the most developed nations.

It is probably apparent that the majority of the jobs in developed countries (MDCs) are tertiary. There are primary and secondary sector jobs in countries like the United States, but the driving economic force is the tertiary sector. MDCs are also more productive than LDCs, not because their people work harder, but because of their access to and use of technology. In economics, productivity is the value of a particular product compared to the amount of labor needed to make it. Value added is the gross value of the product minus the cost of raw materials and energy.
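
A quick sketch with made-up numbers shows how the two definitions differ: productivity relates output value to labor time, while value added subtracts input costs from the gross value of the product:

```python
# A minimal sketch of the productivity and value-added definitions above,
# using made-up numbers.

hours_of_labor = 4.0              # labor needed to make one product
product_value = 200.0             # gross value of the finished product
raw_materials_and_energy = 80.0   # cost of inputs

productivity = product_value / hours_of_labor
value_added = product_value - raw_materials_and_energy

print(f"Productivity: ${productivity:.2f} of value per hour of labor")
print(f"Value added: ${value_added:.2f}")
```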

Access to Quality Education

MDCs can invest more money and resources because of their economies; thus their people tend to be more educated and healthier, children are more likely to survive, and adults tend to live longer than those in LDCs. Probably the two most essential components for a nation’s developmental status to begin to rise are education and health care. There is a direct correlation between development and education: the more developed a nation, the more educated its population. One of the best indicators of a nation’s level of development is its literacy rate, the percentage of people who can read and write. In MDCs, the literacy rate is usually around 98 percent, whereas in LDCs it is about 60 percent. One impact of this is that books are written for people in MDCs, and scientific advances tend to occur in these countries. Compared to LDCs, MDCs spend less of their GDP on education because their GDPs are so high; a small share of a developed nation’s GDP can have a higher monetary value than a large share of the GDP of a less developed nation. In terms of percentage, LDCs spend more of their GDP on education than MDCs need to. In LDCs, children going to school often have outdated books that are not written in their primary language. Often in LDCs, more schools are private than public because the government cannot fund them; outside religious groups and nonprofit organizations support many of these schools.

Access to Health Care

People are often healthier in MDCs than in LDCs because of diet and healthcare. Regarding food, people in MDCs tend to have more access to calories, nutrients, and protein. However, there is a dark side to this as well: many developed countries are now experiencing major obesity issues, and more people in the world are now obese than are hungry. It is not just about consuming too much food; many nutrition experts also question the quality of the protein and nutrients in this food, which increasingly comes in the form of trans fats.

Regarding access to healthcare, MDCs tend to invest more in public health than is possible in LDCs. This is done at the governmental, private, business, and individual levels. In MDCs, there is a much lower ratio of patients to nurses and doctors than in LDCs. Because of this investment in health, life expectancy in MDCs is much higher: men in MDCs tend to live ten years longer than men in LDCs, and women can expect to live 13 years longer in MDCs than in LDCs. There is a gender issue related to this too: men in MDCs tend to live ten years longer than women in LDCs. However, higher life expectancy comes with a price. People tend to work longer into their lives, slowing the advancement of younger generations, and longer life expectancy through retirement means that social programs must support an aging population.

Children also have higher survival rates in MDCs than in LDCs; the inverse measure is the infant mortality rate. In MDCs, the survival rate of children is near 99 percent, whereas in LDCs the rate is around 94 percent. Children tend to have higher mortality rates in LDCs because of malnutrition, starvation, dehydration, disease, and lack of access to health services and professionals.
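
The survival percentages above translate directly into infant mortality rates, conventionally expressed as deaths per 1,000 live births. The short sketch below performs that conversion using the figures from the text:

```python
# Converting the survival rates above into infant mortality rates
# (deaths per 1,000 live births). The 99 and 94 percent survival
# figures come from the text; the conversion is simple arithmetic.

for region, survival_pct in [("MDCs", 99.0), ("LDCs", 94.0)]:
    deaths_per_1000 = (100.0 - survival_pct) * 10  # 1 percent = 10 per 1,000
    print(f"{region}: roughly {deaths_per_1000:.0f} deaths per 1,000 live births")
```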

5.4 Social and Economic Inequality

Gender Inequality

The population pyramid provided shows that there are slightly more women than men on the planet. In the world of work, there continue to be pronounced imbalances across genders, reflecting local values, social traditions, and historical gender roles. Unpaid care work includes housework, such as preparing meals for the family, cleaning the house and gathering water and fuel, as well as work caring for children, older people and family members who are sick—over both the short and long term. Across most countries in all regions, women work more than men. Women are estimated to contribute 52 percent of global work, men 48 percent.

Of the 59 percent of work that is paid, mostly outside the home, men’s share is nearly twice that of women – 38 percent versus 21 percent. The picture is reversed for unpaid work, mainly within the home and encompassing a range of care responsibilities: of the 41 percent of work that is unpaid, women perform three times more than men – 31 percent versus 10 percent. Hence the imbalance—men dominate the world of paid work, women that of unpaid work. Unpaid work in the home is indispensable to the functioning of society and human well-being: yet when it falls primarily to women, it limits their choices and opportunities for other activities that could be more fulfilling to them.
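
These shares fit together as a simple two-by-two breakdown of all work, paid versus unpaid and men versus women, using the figures just cited:

  • Paid work (59 percent of all work): men 38 percent, women 21 percent
  • Unpaid work (41 percent of all work): women 31 percent, men 10 percent
  • Totals by gender: women 21 + 31 = 52 percent, men 38 + 10 = 48 percent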

Occupational segregation has been pervasive over time and across levels of economic prosperity. In advanced and developing countries, men are over-represented in crafts, trades, plant and machine operations, and managerial and legislative occupations. Women tend to be over-represented in mid-skill occupations such as clerks, service workers, and shop and sales workers.

Even when doing similar work, women can earn less, with the wage gaps generally largest for the highest-paid professionals. Globally, women earn 24 percent less than men. In Latin America, top female managers earn, on average, only 53 percent of top male managers’ salaries. Across most regions, women are also more likely to be in “vulnerable employment,” working for themselves or others in informal contexts where earnings are fragile and protections and social security are minimal or absent.

As a method to measure development progress around the world, the United Nations created the Millennium Development Goals (MDGs), the world’s time-bound and quantified targets for addressing extreme poverty in its many dimensions (income poverty, hunger, disease, lack of adequate shelter, and exclusion) while promoting gender equality, education, and environmental sustainability. They are also fundamental human rights regarding health, education, shelter, and security.

To measure Goal 3, promoting gender equality and empowering women, the United Nations uses the Gender Inequality Index (GII). The index uses a variety of measures to determine the inequality of females compared to males, including labor, reproductive health, and empowerment. The higher the number a region receives, the greater the inequality in that region. Some nations have severe gender inequalities, meaning that women have nearly no legal, social, or economic rights, even when they are heads of their households. Many argue that if the world focused on gender equality, most of our social, economic, and environmental problems would be significantly minimized.

Many societies are experiencing a generational shift, particularly in educated middle-class households, towards greater sharing of care work between men and women. Legislation and targeted policies can increase women’s access to paid employment. Access to quality higher education in all fields and proactive recruitment efforts can reduce barriers, particularly in fields where women are underrepresented or where wage gaps persist.

Policies can also remove barriers to women’s advancement in the workplace. Measures such as those related to workplace harassment and equal pay, mandatory parental leave, equitable opportunities to expand knowledge and expertise, and measures to eliminate the attrition of human capital and expertise can help improve women’s outcomes at work.

Paid parental leave is crucial. More equal and encouraged parental leave can help ensure high rates of female labor force participation, wage gap reductions, and better work-life balance for women and men. Many countries now offer parental leave to be split between mothers and fathers.

Gender Inequality Index

Currently, no social scientist or governmental agency, including the United Nations, has found a country where women are treated equally to men. To determine the equality, or inequality, of women in nations, the Gender Inequality Index (GII) is used. The index uses a variety of measures to determine the inequality of females compared to males, including labor, reproductive health, and empowerment. The higher the number a region receives, the greater the inequality in that region.

Some nations have severe gender inequalities, meaning that women have nearly no legal, social, or economic rights, even when they are heads of their households. Many argue that if the world focused on gender equality, most of our social, economic, and environmental problems would be significantly minimized.

In the 21st century, gender inequity is, as Sheryl WuDunn states in her TED Talk “Our Century’s Greatest Injustice?”, the moral and ethical issue of our time. Why have so many females disappeared from the human population? Watch her talk to find out.

In terms of reproductive health, the maternal mortality ratio and the adolescent fertility rate are measured. The maternal mortality ratio is the number of women who die giving birth per 100,000 births. The adolescent fertility rate is the number of births per 1,000 women between the ages of 15 and 19. In LDCs, women are more likely to die during labor and to have children during their adolescent years. Reproductive health is an essential indicator of gender inequality because women tend to have fewer rights, including access to health care, where gender inequality is high.
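
Both indicators are simple rates scaled to a reference population. The sketch below computes each from made-up counts, purely to illustrate the definitions:

```python
# A minimal sketch of the two reproductive-health indicators defined
# above, using made-up counts rather than real data.

maternal_deaths = 120
live_births = 800_000
maternal_mortality_ratio = maternal_deaths / live_births * 100_000
print(f"Maternal mortality ratio: {maternal_mortality_ratio:.0f} per 100,000 births")

births_to_women_15_19 = 4_500
women_aged_15_19 = 90_000
adolescent_fertility_rate = births_to_women_15_19 / women_aged_15_19 * 1_000
print(f"Adolescent fertility rate: {adolescent_fertility_rate:.0f} "
      f"per 1,000 women aged 15-19")
```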

Several organizations around the world are working on empowering females, from young to old, through a range of social and economic policies. The ultimate goal of organizations such as Half the Sky, CARE, and The Girl Effect, to name a few, is to empower women so that they might have legal, economic, social, and health rights. Empowerment can be tracked and monitored through critical indicators of gender inequality. One is the percentage of seats held by women in a nation’s federal government, a measurement of the political and economic power women have, or do not have, in a country.

Take, for example, the United States Congress in 2019. Currently, women make up 25 percent of the Senate and 23 percent of the House of Representatives. Contrast that with the fact that women make up 51 percent of the United States population. This directly influences policies toward women, about women, and for women in the United States. Also, as of 2019, no woman has ever served as President or Vice President, and only one woman has served as Speaker of the House, the third most powerful position in our federal government.

Economic Slavery

There is also a darker side to international trade, which exists in the black market. On the poaching side, animals such as mountain gorillas, rhinoceroses, and African elephants are being slaughtered so that their parts can be sold.

There are also natural resources and goods such as diamonds, gold, stone, and textiles that are either mined or assembled using mass slavery. The products are sold to MDCs to fund local and regional civil wars, as in Uganda, Angola, Sierra Leone, and the Democratic Republic of Congo. People of all ages who try to escape slavery are often raped, have body parts amputated alive and without medicine, or are killed, all to maintain fear, power, and control. The TED Talk “Photos that Bear Witness to Modern Slavery” and the two videos on mining cobalt are powerful but disturbing witnesses to how slavery is used around the world to fund political conflicts and wars. To learn how many slaves work for you, check out your slavery footprint.

Ending Global Poverty

It can be said that issues such as literacy rates, life expectancy, natural increase, and infant mortality rates have improved in LDCs. However, the gap in development and income is only getting wider. Only one-fifth of the world’s population lives in MDCs, but those same nations consume five-sixths of the world’s resources. If all 7 billion people on the planet lived the lifestyle of the average American, it would require three planets! Currently, there are over 1 billion people on the earth living in what is called extreme poverty.

In the book by James Rubenstein, Cultural Landscape: An Introduction to Human Geography (2010), “The United Nations recently placed the contrast in spending between MDCs and LDCs in picturesque terms: Americans spend more per year on cosmetics ($8 billion) than the cost of providing schools for the 2 billion in the world in need of them ($6 billion), and Europeans spend more on ice cream ($11 billion) than the cost of providing a working toilet to the 2 billion people currently without one at home ($9 billion).”  To put U.S. consumer spending in context with the cost of providing basic needs for those in the developing world, Americans spent $20 billion in 2007 on Black Friday–the day after Thanksgiving and the biggest shopping day of the year in the United States. Consider how redirecting the funds used for one day of shopping in the U.S. could do much to eradicate extreme poverty.

Currently, over 1 billion people live in extreme poverty, a term used to describe people who live on less than $1.25 a day for food, water, and shelter. For LDCs to develop, they first need to improve their gross national product (GNP) dramatically. Once income starts flowing, that money needs to be invested in other HDI factors, both social and economic. The goal of the self-sufficiency model is to focus on reducing poverty rather than on increasing wealth and creating wealth classes. Investment in a state’s infrastructure and economic structure is spread equally so that all benefit; the idea of “local first” outweighs globalization. The problem with this model is that it can quickly become inefficient because the government protects local industries from outside forces. It also requires a sizeable governmental footprint to administer the controls and conditions needed to distribute wealth equally.

5.5 Globalization and International Trade

Before we begin a discussion about why nations trade, it would be helpful to take a moment to consider the character and evolution of trade. It is important to keep in mind, first, that although we frequently talk about trade “between nations,” the vast majority of international transactions today take place between private individuals and private enterprises based in different countries. Governments sometimes sell things to each other, or individuals or corporations in other countries, but these comprise only a small percentage of world trade.

Trade is not a modern invention. International trade today is not qualitatively different from the exchange of goods and services that people have been conducting for thousands of years. Before the widespread adoption of currency, people exchanged goods and some services through bartering—trading a certain quantity of one good or service for another good or service with the same estimated value. With the emergence of money, the exchange of goods and services became more efficient.

Developments in transportation and communication revolutionized economic exchange, not only increasing its volume but also widening its geographical range. As trade expanded in geographic scope, diversity, and quantity, the channels of trade also became more complex. Individuals conducted the earliest transactions in face-to-face encounters. Many domestic transactions, and some international ones, still follow that pattern. However, over time, the producers and the buyers of goods and services became more remote from each other.

A wide variety of market actors, individuals and firms, emerged to play supportive roles in commercial transactions. These “middlemen,” wholesalers, providers of transportation services, providers of market information, and others, facilitate transactions that would be too complex, distant, time-consuming, or broad for individuals to conduct face-to-face efficiently.

International trade today differs from economic exchange conducted centuries ago in its speed, volume, geographic reach, complexity, and diversity. However, it has been going on for centuries, and its fundamental character, the exchange of goods and services for other goods and services or money, remains unchanged.

That brings us to the question of why nations trade. Nations trade a lot, but it is not quite as obvious why they do so. Put differently, why do private individuals and firms take the trouble of conducting business with people who live far away, speak different languages, and operate under different legal and economic systems, when they can trade with fellow citizens without having to overcome any of those obstacles?

It seems evident that if one country is better at producing one good and another country is better at producing a different good (assuming both countries demand both goods) that they should trade. What happens if one country is better at producing both goods? Should the two countries still trade? This question brings into play the theory of comparative advantage and opportunity costs.

The everyday choices that we make are, without exception, made at the expense of pursuing one or several other choices. When you decide what to wear, what to eat for dinner, or what to do on Saturday night, you are making a choice that denies you the opportunity to explore other options.

The same holds for individuals or companies producing goods and services. In economic terms, the amount of the good or service that is sacrificed to produce another good or service is known as opportunity cost. For example, suppose Switzerland can produce either one pound of cheese or two pounds of chocolate in an hour. If it chooses to produce a pound of cheese in a given hour, it forgoes the opportunity to produce two pounds of chocolate. The two pounds of chocolate, therefore, is the opportunity cost of producing the pound of cheese. They sacrificed two pounds of chocolate to make one pound of cheese.

A country is said to have a comparative advantage in whichever good has the lowest opportunity cost. That is, it has a comparative advantage in whichever good it sacrifices the least to produce. In the example above, Switzerland has a comparative advantage in the production of chocolate. By spending one hour producing two pounds of chocolate, it gives up producing one pound of cheese, whereas, if it spends that hour producing cheese, it gives up two pounds of chocolate.

Thus, the good in which comparative advantage is held is the good that the country produces most efficiently (for Switzerland, it is chocolate). Therefore, if given a choice between producing two goods (or services), a country will make the most efficient use of its resources by producing the good with the lowest opportunity cost, the good for which it holds the comparative advantage. The country can trade with other countries to get the goods it did not produce (Switzerland can buy cheese from someone else).

The concepts of opportunity cost and comparative advantage are tricky and best studied by example: consider a world in which only two countries exist (Italy and China) and only two goods exist (shirts and bicycles). The Chinese are very efficient in producing both goods. They can produce a shirt in one hour and a bicycle in two hours. The Italians, on the other hand, are not very productive at manufacturing either good. It takes three hours to produce one shirt and five hours to produce one bicycle.

The Chinese have a comparative advantage in shirt manufacturing, as they have the lowest opportunity cost (1/2 bicycle) in that good. Likewise, the Italians have a comparative advantage in bicycle manufacturing as they have the lowest opportunity cost (5/3 shirts) in that good. It follows, then, that the Chinese should specialize in the production of shirts and the Italians should specialize in the production of bicycles, as these are the goods that both are most efficient at producing. The two countries should then trade their surplus products for goods that they cannot produce as efficiently.

A comparative advantage not only affects the production decisions of trading nations, but it also affects the prices of the goods involved. After trade, the world market price (the price an international consumer must pay to purchase a good) of both goods will fall between the opportunity costs of both countries. For example, the world price of a bicycle will settle between 5/3 shirts and two shirts, decreasing the price the Chinese pay for a bicycle while still allowing the Italians to profit. The Chinese will pay less for a bicycle, and the Italians less for a shirt, than they would pay if the two countries were manufacturing both goods for themselves.
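
The whole Italy/China example can be worked through in a few lines. The hours per unit below come straight from the text; the opportunity costs, specialization choices, and price band are derived from them:

```python
# Working through the Italy/China comparative-advantage example above.
# Hours per unit come from the text; the rest is derived arithmetic.

hours = {
    "China": {"shirt": 1.0, "bicycle": 2.0},
    "Italy": {"shirt": 3.0, "bicycle": 5.0},
}

for country, h in hours.items():
    bicycle_cost_in_shirts = h["bicycle"] / h["shirt"]   # shirts forgone per bicycle
    shirt_cost_in_bicycles = h["shirt"] / h["bicycle"]   # bicycles forgone per shirt
    print(f"{country}: 1 bicycle costs {bicycle_cost_in_shirts:.2f} shirts; "
          f"1 shirt costs {shirt_cost_in_bicycles:.2f} bicycles")

# China's opportunity cost of a shirt (0.50 bicycles) is below Italy's (0.60),
# so China specializes in shirts; Italy's cost of a bicycle (1.67 shirts) is
# below China's (2.00), so Italy specializes in bicycles. After trade, the
# world price of a bicycle settles between 5/3 and 2 shirts, as stated above.
```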

In reality, of course, trade specialization does not work precisely the way the theory of comparative advantage might suggest, for several reasons:

  • No country specializes exclusively in the production and export of a single product or service.
  • All countries produce at least some goods and services that other countries can produce more efficiently.
  • A lower income country might, in theory, be able to produce a particular product more efficiently than the United States can but still not be able to identify American buyers or transport the item cheaply to the United States. As a result, U.S. firms continue to manufacture the product.

Generally, countries with a relative abundance of low-skilled labor will tend to specialize in the production and export of items for which low-skilled labor is the predominant cost component. Countries with a relative abundance of capital will tend to specialize in the production and export of items for which capital is the predominant component of cost.

Many American citizens do not fully support specialization and trade. They contend that imports inevitably replace domestically produced goods and services, thereby threatening the jobs of those involved in their production.

Imports can indeed undermine the employment of domestic workers. We will return to this subject a little later. From what you have just read, you can see that imports supply products that are either 1) unavailable in the domestic economy or 2) that domestic enterprises and workers would be better off not making so that they can focus on the specialization of another good or service.

Finally, international trade brings several other benefits to the average consumer. Competition from imports can enhance the efficiency and quality of domestically produced goods and services. Also, competition from imports has historically tended to restrain increases in domestic prices.

  • Name a product/business where labor would be the comparative advantage for a developing country.
  • Name a product/business where capital would be the comparative advantage for a rich country.
  • Name a product/business where natural resources would be a comparative advantage.

Global Interdependence

The tremendous growth of international trade over the past several decades has been both a primary cause and effect of globalization. The volume of world trade increased twenty-seven-fold from $296 billion in 1950 to $8 trillion in 2005. Although international trade experienced a contraction of 12.2 percent in 2009, the steepest decline since World War II, trade is again on the upswing.

As a result of international trade, consumers around the world enjoy a broader selection of products than they would if they only had access to domestically made products. Also, in response to the ever-growing flow of goods, services, and capital, a whole host of U.S. government agencies and international institutions have been established to help manage these rapidly developing trends.

Although increased international trade has spurred tremendous economic growth across the globe, raising incomes, creating jobs, reducing prices, and increasing workers’ earning power, trade can also bring about economic, political, and social disruption.

Since the global economy is so interconnected, when large economies suffer recessions, the effects are felt around the world. One of the hallmark characteristics of the global economy is the concept of interdependence. When trade decreases, jobs and businesses are lost. In the same way that globalization can be a boon for international trade, it can also have devastating effects. Activities such as the choice of clothes you buy have a direct impact on the lives of people working in the nations that produce them.

There are several elements that are responsible for the expansion of the global economy during the past several decades: new information technologies, reduction of transportation costs, the formation of economic blocs such as the North American Free Trade Agreement (NAFTA), and the reforms implemented by states and financial organizations in the 1980s aimed at liberalizing the world economy.

Trade liberalization, or deregulation, has become a “hot button” issue in world affairs. Many countries have seen great prosperity thanks to the dismantling of trade regulations that had been considered barriers to free trade in the recent past. The controversy surrounding the issue, however, stems from the enormous inequality and social injustices that sometimes come with reducing trade regulations in the name of a bustling global economy.

Given the dislocations and controversies, some people question the importance of efforts to liberalize trade and wonder whether the economic benefits are outweighed by other unquantifiable negative factors such as labor exploitation.

With globalization, competition occurs between nations having different standards for worker pay, health insurance, and labor regulations. Corporations benefit from lower labor costs found in developing regions, thanks to free-trade agreements and a new international division of labor. A worker in a high-wage country is thus increasingly struggling in the face of competition from workers in low-wage countries. Entire sectors of employment in developed countries are now subject to this growing international competition, and unemployment has crippled many localities.

The outcome has been an international division of labor in all sectors of the economy. In particular, manufacturing is increasingly being contracted out to lower-cost locations, which are often found in developing countries with no minimum wage and few environmental regulations.

An excellent example of the international division of labor can be found in the clothes-making industry. What was once a staple industry in most developed Western economies has now been relocated to developing countries in Central America, Eastern Europe, North Africa, Asia, and elsewhere.

International Development Models

Self-Sufficiency Model of Development

Two models of economic development play off each other. The first is the Self-Sufficiency Model of Development, which encourages the domestic development of goods and resources and discourages foreign influence and investment. From 1990 to 2000, this was the primary form of economic development, until globalization became the dominant force. What makes this model competitive with international trade models is that governments create barriers in the form of tariffs on imports, which makes imports more expensive and less economically competitive with local businesses. New businesses are nurtured until they are economically sustainable and competitive enough to compete with businesses abroad.

This model of development prides itself on an equal distribution of resources to a nation’s people and businesses over foreign entities and investments. However, critics of this form of development argue that the model protects inefficient businesses rather than rewarding competitive, highly efficient ones; that it requires a sizeable bureaucratic government to administer and to limit abuse and corruption; and that it forgoes the benefits of foreign corporations that could provide goods and services to countries with limited resources.

Modernization Model of Development

The other model for economic development is through international trade. W.W. Rostow proposed in the 1950s the idea of a five-stage model of development that competes with the self-sufficiency model. In a report by Peter Kasanda, Rostow’s Modernization Theory of Development implies that nations should use local resources and industries to sell scarce or needed resources globally through international trade. The money that comes back to the country would increase the nation’s GDP, which could then be used to improve the development of infrastructure, invest in education and healthcare, and ultimately improve a country’s standard of living. The following is the 5-stage model of how progress and development might occur for a country:

Traditional Society – This first stage describes societies that have little economic development and a high percentage of people engaged in family-scale subsistence agriculture. Most of the money for development goes toward religious or military activities.

Economic Growth – Key investments are made in the core structures of an existing economy to expand its development: structured investments in mining and large-scale agriculture, along with technology to enhance the efficiency of existing infrastructure. The goal is to invest in the overall structure of the nation’s economy so the production of goods can begin to occur.

Economic Takeoff – Investment and development lead to expanded but limited activity in mining, textiles, and food production, along with continued improvements and investment in modern technology. A key indicator of the “takeoff” stage is when the people in the country become more driven by economic development than by traditional activities. It should also be noted that this is often when concerning issues of slavery and sweatshops begin to surface if not appropriately handled.

Drive to Economic Maturity – During this period, society is driven by modern technological advances across most areas of the economy. Technology drives production and efficiency throughout all parts of the economy, and it is at this time that a local economy becomes an international economic player. This stage is often said to be an extension of the “takeoff” stage, but expansive rather than limited in scope.

Age of Mass Consumption – This final stage of economic development occurs when an economy shifts from a secondary sector of manufacturing toward the tertiary sector of services. The economic status of the nation’s society also becomes driven by mass consumption of disposable goods.

International trade has become the preferred way to improve economic development. The reason is that most nations cannot produce all the goods and resources they require, so if nations can focus on specific goods and services to export, they can in return purchase the goods and services they need.

In 1995, the World Trade Organization (WTO) was created, representing 97 percent of world trade. It is through the WTO that nations negotiate international trade restrictions, governmental subsidies, and tariffs on exports with one another. The WTO also has the power to act as an international court to enforce international agreements.

The WTO has been sharply attacked by both liberals and conservatives. Liberals believe that too many actions or rulings are made undemocratically and behind closed doors; they also believe the organization focuses more on the rights of corporations than on poorer nations. Conservatives believe that no international organization has the right to dictate the choices of sovereign nations.

5.6 Sustainable Development

Transforming the Economic Landscape

In nearly every corner of the world, from Mumbai to Madrid, one cannot enter a café or walk down the street without seeing someone talking, texting, or surfing the Internet on their cell phones, laptops or tablet PCs. Information Technology (IT) has become ubiquitous and is changing every aspect of how people live their lives.

IT is a driving factor in the process of globalization. Improvements in the early 1990s in computer hardware, software, and telecommunications significantly increased people’s ability to access information and economic potential. These developments have facilitated efficiency gains in all sectors of the economy. IT drives the innovative use of resources to promote new products and ideas across nations and cultures, regardless of geographic location. Creating efficient and effective channels to exchange information, IT has been the catalyst for global integration.

Globalization accelerates technological change. Every day, it seems, a new technological innovation appears. The pace of change is so rapid that many people are always playing catch-up, trying to purchase or update their devices. Technology is now at the forefront of the modern world, creating new jobs, innovations, and networking sites that allow individuals to connect globally.

The First Industrial Revolution used water and steam power to mechanize production. The Second used electric power to create mass production. The Third used electronics and information technology to automate production. Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.

Many argue that the Fourth Industrial Revolution has the potential to raise global income levels and improve the quality of life for populations around the world. To date, those who have gained the most from it have been consumers able to afford and access the digital world; technology has made possible new products and services that increase the efficiency and pleasure of our personal lives. Ordering a cab, booking a flight, buying a product, making a payment, listening to music, watching a film, or playing a game – any of these can now be done remotely.

The digital economy permeates all aspects of society, including the way people interact, the economic landscape, the skills needed to get a good job, and even political decision-making. Our emerging digital economy has the potential to generate new scientific research and breakthroughs, fueling job opportunities, economic growth, and improving how people live their lives.

These changes are happening all around us. In Kenya, mobile data is being used to identify malaria infection patterns and locate hotspots that guide government eradication efforts. Vehicle sensor data from delivery trucks, combined with mapping data analytics, has enabled companies to save millions of gallons of fuel and reduce emissions by the equivalent of taking thousands of cars off the road for a year. Farmers from Iowa to India are using data from seeds, satellites, and sensors to make better decisions about what to grow and how to adapt to changing climates.

How people connect with others, with information, and with the world is being transformed through a combination of technologies. These technologies will help us solve increasingly sophisticated problems, while big data will assist us in complex decision-making.

The sharing economy is a model in which people and organizations connect online to share goods and services. It is also known as collaborative consumption or peer-to-peer exchange. Two of the best-known examples of the sharing economy are Uber (transportation) and Airbnb (housing).

The blockchain is a digital “ledger” technology that allows for keeping track of transactions in a distributed and trusted fashion. It replaces the need for third-party institutions to provide trust for financial, contract, and voting activities. Bitcoin and other digital currencies are some of the most well-known examples of applications of blockchain technology.
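To make the “ledger” idea concrete, here is a minimal sketch of a hash-chained ledger in Python. It is illustrative only: the Block class and its fields are invented for this example and do not reproduce Bitcoin’s actual data structures, but they show why tampering with one transaction invalidates every later block.

    import hashlib
    import json

    class Block:
        """One entry in a toy hash-chained ledger (illustrative only)."""
        def __init__(self, index, transactions, prev_hash):
            self.index = index
            self.transactions = transactions  # e.g., a list of payment records
            self.prev_hash = prev_hash        # links this block to its predecessor
            self.hash = self.compute_hash()

        def compute_hash(self):
            # Hash the block's contents, including the previous block's hash.
            payload = json.dumps(
                {"index": self.index,
                 "transactions": self.transactions,
                 "prev_hash": self.prev_hash},
                sort_keys=True)
            return hashlib.sha256(payload.encode()).hexdigest()

    def is_valid(chain):
        # Valid only if every block still hashes correctly and points at
        # the hash of the block before it.
        return all(curr.prev_hash == prev.hash and curr.hash == curr.compute_hash()
                   for prev, curr in zip(chain, chain[1:]))

    genesis = Block(0, [], "0" * 64)
    b1 = Block(1, [{"from": "A", "to": "B", "amount": 5}], genesis.hash)
    b2 = Block(2, [{"from": "B", "to": "C", "amount": 2}], b1.hash)
    chain = [genesis, b1, b2]
    print(is_valid(chain))              # True
    b1.transactions[0]["amount"] = 500  # tamper with an earlier record
    print(is_valid(chain))              # False: the tampering is detectable

Because each block commits to the hash of its predecessor, no single record can be quietly rewritten; this is the property that lets a distributed network, rather than a trusted third party, vouch for the ledger.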

In the future, technological innovation could lead to long-term gains in efficiency and productivity. Transportation and communication costs are predicted to drop; as logistics and global supply chains become more effective, the cost of trade will diminish, which should open new markets and drive economic growth. At the same time, as the economists Erik Brynjolfsson and Andrew McAfee have pointed out, the revolution could yield greater inequality, particularly in its potential to disrupt labor markets. As automation substitutes for labor across the entire economy, the net displacement of workers by machines might exacerbate the gap between returns on capital and returns to labor. We cannot foresee at this point which scenario is likely to emerge, and history suggests that the outcome is likely to be some combination of the two.

In addition to being a key economic concern, inequality represents the most significant societal concern associated with the Fourth Industrial Revolution. The largest beneficiaries of innovation tend to be the providers of intellectual and physical capital – the innovators, shareholders, and investors – which explains the rising gap in wealth between those dependent on capital versus labor. Technology is, therefore, one of the main reasons why incomes have stagnated, or even decreased, for a majority of the population in high-income countries: the demand for highly skilled workers has increased while the demand for workers with less education and lower skills has decreased. The result is a job market with a strong demand at the high and low ends, but a hollowing out of the middle.

It is also important to remember that development is not evenly distributed over time and space. There are still many people around the world who have not yet realized the benefits delivered by previous industrial revolutions. Around 1.2 billion people do not have reliable access to energy. Another 2.3 billion do not have clean water and sanitation. More than 4 billion do not have access to the internet. Here, the Fourth Industrial Revolution could serve as a formidable accelerator of social and economic inclusion, particularly for the developing world. Recently the World Economic Forum identified five innovations which have the potential to impact the lives of smallholder farmers positively:

Improved access to electricity to increase efficiency and reduce food loss

Electricity is hardly an innovation, but there are still many people – almost two-thirds of sub-Saharan Africa, for example – who lack access. Even where energy infrastructure exists, the cost can often be a barrier. Access to affordable, reliable, and sustainable energy enables smallholders to improve efficiencies in land preparation, planting, irrigation, and harvesting. It also allows them to use specific methods for storing, cooling, and preserving goods. The ability of smallholder farmers to participate in global food systems depends on their access to electricity.

Increased internet connectivity to access information and knowledge to improve productivity on their farms

For many of us, the internet is a fundamental part of everyday life. However, over 4 billion people – more than 55 percent of the world’s population – remain unconnected to the web.

The vast majority of smallholder farmers live in remote areas, where good, fast internet connectivity reaches less than 30 percent of the population. Women constitute almost half of the agricultural labor force in developing countries, yet they are less likely to access the internet than men in the same communities.

If this “digital divide” were closed, smallholder farmers could access information and knowledge related to weather, rainfall, or market demand, allowing them to grow and harvest food more efficiently. Timing has increasingly become a key source of competitiveness, and access to real-time information is crucial. To be genuinely transformational, internet access must be reliable, affordable, and secure.

Mobile devices and platforms connect smallholder farmers to markets

Connectivity is not only about access to information – it is also about access to services. For example, mobile banking can give smallholder farmers access to formal financial services such as banking and loans, which they all too often lack. Take the example of Trringo: this smartphone app is being hailed as the Uber for tractors thanks to how it has disrupted India’s farm equipment renting process.

Investing in a mobile phone as an agricultural tool has perhaps become the single most strategic decision by a smallholder farmer, and we need to make sure we are doing everything we can to facilitate such smart investments.

Unique identifiers improve data about farmers, for farmers

Unique identifiers are commonly used in the developed world. When you log on to Amazon or Netflix, the site knows who you are and makes personalized recommendations based on what you have purchased or viewed before. However, data about smallholder farmers in developing economies are primarily based on samples and extrapolations and are thus unreliable or incomplete.

With unique identifiers, businesses could offer tailored services, policy-makers could make more informed decisions, and knowledge institutions could make better assessments of farmers’ circumstances.

For example, the eWallet system in Nigeria has allowed the government to identify and deliver input subsidies directly to farmers based on personal and biometric information provided by smallholder farmers. As with all innovations, this technology is not a silver bullet. For unique identifiers to improve farmers’ lives, data systems must be able to guarantee that data remains anonymous for the privacy and security of individuals.
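How might a data system link a farmer’s records across datasets without exposing who the farmer is? One standard approach is pseudonymization: deriving a stable identifier from the real ID with a keyed hash. The sketch below is a minimal illustration of that idea, not a description of Nigeria’s actual eWallet system; the ID format and key handling are invented for the example.

    import hashlib
    import hmac
    import os

    # Secret key held only by the data custodian; losing or leaking it
    # breaks the privacy guarantee.
    SECRET_KEY = os.urandom(32)

    def pseudonymous_id(national_id: str) -> str:
        """Derive a stable pseudonym from a real ID using a keyed hash (HMAC)."""
        digest = hmac.new(SECRET_KEY, national_id.encode(), hashlib.sha256)
        return digest.hexdigest()[:16]

    # The same input always yields the same pseudonym, so records can be
    # joined across datasets while the raw ID never leaves the custodian.
    farmer = "NG-1234-5678"  # hypothetical ID format
    print(pseudonymous_id(farmer))
    print(pseudonymous_id(farmer) == pseudonymous_id(farmer))  # True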

Geospatial analysis to help farmers make informed decisions

Geospatial technologies can help both policy-makers and individual farmers assess, monitor, and plan the use of their natural resources. If smallholder farmers had access to foundational technologies – like electricity, the internet, and mobile phones – then they too could use geospatial analysis to make decisions about the management of their farms and other assets. In this realm, FAO and Google are partnering to make geospatial tracking and mapping products more accessible.

If geospatial technologies were easy to download and use, a smallholder in Colombia could discover the distance to the nearest river, or a farmer in Malawi could use sensors to more efficiently manage their farm.
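The distance question in that example is a standard great-circle calculation. The sketch below uses the haversine formula to compute the distance between two latitude/longitude points; the coordinates are invented for illustration, and real geospatial tools layer far more (imagery, sensors, terrain) on top of calculations like this one.

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        r = 6371.0  # mean Earth radius in km
        phi1, phi2 = radians(lat1), radians(lat2)
        dphi = radians(lat2 - lat1)
        dlam = radians(lon2 - lon1)
        a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
        return 2 * r * asin(sqrt(a))

    # Hypothetical farm and river coordinates (illustrative only).
    farm = (4.60, -74.08)
    river = (4.55, -74.15)
    print(f"{haversine_km(*farm, *river):.1f} km to the river")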

Some of the technologies we have discussed here are hardly new, so it might seem odd to see them on a list of innovations that could transform the lives of smallholders. However, for these farmers, access and adoption of technology are not automatic.

It is, therefore, our duty to ensure smallholder farmers are not left behind in the Fourth Industrial Revolution. A robust digital infrastructure is crucial for smallholders to access and create tools that empower them to make decisions about their farms and businesses. As innovation evolves, let us continue to question how the benefits of technology are being shared and how these benefits can nurture the smallholder farmers who feed the world.

United Nations Millennium Development Goals

There is broad global support for addressing extreme poverty because of its implications for global and local economics, environmental protection, the geopolitical stability of governments, and humanitarian efforts. In 2000, the United Nations created the Millennium Development Goals (MDGs), an effort to cut extreme poverty in half by 2015. The MDGs are broken down into eight smaller goals, each with a specific target or mission to accomplish.

  • Goal 1: Eradicate Extreme Poverty and Hunger
  • Goal 2: Achieve Universal Primary Education
  • Goal 3: Promote Gender Equality and Empower Women
  • Goal 4: Reduce Child Mortality
  • Goal 5: Improve Maternal Health
  • Goal 6: Combat HIV/AIDS, Malaria, and Other Diseases
  • Goal 7: Ensure Environmental Sustainability
  • Goal 8: Global Partnership for Development

The following is the abstract for Bono’s TED Talk on global poverty: “Human beings have been campaigning against inequality and poverty for 3,000 years, but this journey is accelerating. Bono ‘embraces his inner nerd’ and shares inspiring data that shows the end of poverty is in sight … if we can harness the momentum. Bono, the lead singer of U2, uses his celebrity to fight for social justice worldwide: to end hunger, poverty, and disease, especially in Africa. His nonprofit ONE raises awareness via media, policy, and calls to action.” Some may ask, why Bono? He is just a millionaire “rock star” who does not know a thing about poverty issues. It turns out that Bono is extremely active in humanitarian issues and one of the most significant philanthropists in the world. He started the ONE Campaign to help inform, educate, and advocate about extreme poverty. He has also studied under one of the leading economists on global poverty, Dr. Jeffrey Sachs, the Director of the Earth Institute at Columbia University and one of the original leaders of the MDGs.

Sustainable Development Goals

The ideas behind sustainable development can be traced back to the early works of scholars such as Rachel Carson’s Silent Spring (1962), Garrett Hardin’s “The Tragedy of the Commons” (1968), and Paul Ehrlich’s The Population Bomb (1968). Despite their different emphases within population and environmental studies, all of these classic works raised public concern over the environmental problems caused by human activities and highlighted the importance of systems thinking.

Tremendous efforts and notable achievements have been made toward sustainable development, but human civilization is currently unsustainable. The basic idea of unsustainable development is that some of today’s lifestyles and human activities cannot be maintained over the long term. Much of our development depends on natural resources that either cannot be replaced or are not being replaced as fast as we deplete them. Some major examples are:

  • Fossil fuels (oil, coal, and natural gas) used for energy
  • Freshwater supplies used for irrigation and drinking
  • Minerals used for manufacturing
  • Trees used for construction and fuel
  • Fish used for food

Each of these resources is becoming increasingly scarce. We cannot continue using them as we do today. Either we will need to shift away from them on our own, or shortages will force us to change our ways.

There are other reasons why some aspects of contemporary development may be considered unsustainable. Development is changing the global climate system and affecting biodiversity in ways that could have very perilous consequences. We will learn about these topics towards the end of the course, but, for now, just note that if we try to continue with development as we have been, then the ensuing changes to climate and biodiversity could eventually prevent us from maintaining our state of development. Finally, as we saw on the previous page, development even today is not necessarily something to be desired. On the other hand, development involves much of what is important to us and thus is not something we can easily walk away from. Achieving development that is both desirable and sustainable is an important goal for our lives and our society.


Chapter 4: Political Borders, Boundaries, and Governments

In the chapter on population, we discussed how and why population growth exploded in the 20th century. Recall that as nations move from Stage 1 to Stage 2, death rates plummet while birth rates remain at their previous levels, causing an explosion in population. Europe and America were the first regions to enter Stage 2, but today no nation remains in Stage 1. Most nations are still in Stage 2, and as a result the human population is now over 7.5 billion. Our numbers are expected to peak at around 9 billion by 2100.

The 20th century was also the deadliest century, in terms of war, in human history. It saw two world wars; multiple civil wars; genocides in Rwanda (against Tutsis and moderate Hutus), Sudan, and Yugoslavia; and the Holocaust, which decimated the Jewish population of Europe during WWII. In addition to WWI and WWII, the century experienced the Korean War, the Vietnam War, the Cold War, and the first Gulf War. It also saw regional and civil conflicts such as those in the Congo (in which some 6 million people died), as well as an upsurge in child soldiers and modern slavery.

4.1 Defining Nation-States

Organization and Control

Political geography is the study of how humans have divided up the surface of the Earth for purposes of management and control. Looking beyond the patterns on political maps helps us to understand the spatial outcomes of political processes and how political processes are themselves affected by spatial features. Political spaces exist at multiple scales, from a kid’s bedroom to the entire planet. At each location, somebody or some group seeks to establish the rules governing what happens in that space, how power is shared (or not), and who even has the right to access those spaces. This is also known as territoriality.

Many have tried to control the physical world to exert power for religious, economic, or cultural reasons. In the late 1800s and early 1900s, scholars developed many theories of how political power is expressed geographically as leaders and nations vie to control people, land, and resources. These theories have been used both to justify conflict and to work to avoid it.

Organic Theory

The Organic Theory states that nations must continually seek nourishment in the form of gaining land to survive in the same way that a living organism seeks nourishment from food to survive. As a result, it implies that if a nation does not seek out and conquer new territories, it will risk failing because other nations also behave organically. This is akin to the law of the jungle – eat or be eaten.

Hitler was a proponent of organic theory and used Ratzel’s term Lebensraum, or “living space,” as justification for Germany’s behavior during World War II. He claimed that if Germany did not grow in this way, it would fall victim again to the rest of Europe, and eventually the world, as it had during the First World War.

Heartland Theory

In the Heartland Theory, also known as “The Geographic Pivot of History,” Halford Mackinder argued that whoever controlled Eastern Europe, the heartland, would control the world. The idea is that the heartland is a pivot point for controlling all of Asia and Africa, which he referred to as the World Island. Why was the heartland so crucial at this time? Eastern Europe is abundant in raw materials and farmland, which are needed to support a vast army that could then control the coasts and water ports that make international trade possible.

Both Hitler and the USSR believed this was possible, but both failed because they did not foresee the rise of other world powers such as the United States and China. Nor did they know that military technology would soon advance far beyond tanks and ground troops to include nuclear weapons, high-tech missiles, and drone airplanes.

Rimland Theory

According to Spykman’s Rimland Theory, Mackinder’s “lands of the outer rim” were the key to controlling Eurasia and then the world. He theorized that because the Rimland contains most of the world’s people as well as a large share of the world’s resources, it was more important than the heartland. The Rimland’s defining characteristic is that it is an intermediate region, lying between the heartland and the marginal sea powers. As the amphibious buffer zone between the land powers and sea powers, it must defend itself from both sides, and therein lies its fundamental security problem.

Politically, Spykman called for the consolidation of the Rimland countries to ensure their survival during World War II. With the defeat of Germany and the emergence of the USSR, Spykman’s views were embraced during the formulation of America’s Cold War policy of containing communist influence.

The State of States

Independent states are the primary building blocks of the world political map. A state (in everyday usage, also called a country) is a territory with defined boundaries, organized into a political unit, and ruled by an established government that has control over its internal and foreign affairs. When a state has total control over its internal and foreign affairs, it is called a sovereign state. A location claimed by a sovereign state is called a territory. According to the United Nations, in 2016 the world had 193 member states; however, many of those states dispute their boundaries.

Some nations are stateless, meaning that a group of people share a collective identity and history but have no parcel of land that they fully control. The Palestinians are perhaps the world’s best-known stateless nation, owing to their long struggle with Israeli Jews – some of whom, until 1948, belonged to the previously best-known nation without a state.

Federalism is a system of government with one strong central governing authority as well as smaller units, such as states. If the central government grows too strong, federalism comes closer to a unitary state, in which the governing body has supreme authority and dictates how much power the units are allowed to have. In places like Egypt, France, and Japan, where nationalist feelings are strong and many centripetal forces like language, religion, and economic prosperity unite people, a unitary state makes sense. Unitary systems work best where there is no strong opposition to central control; as a result, the political elite in a capital city (like Paris or Tokyo) frequently holds outsized power over the rest of the country. Fights over local control are minimal, and the power of local (provincial) governments is relatively weak.

Many countries have an underdeveloped sense of nationhood and are therefore better suited to a federalist style of government, in which power is geographically distributed among several subnational units. This style of governance makes sense when a country is “young” and still in the process of nation-building, developing the common identity necessary to establish a unified nationality. Federations may also work best in multi-ethnic or multi-national countries. Rather than break into multiple smaller states, a country can choose to give each of its ethnicities or nationalities some measure of political autonomy: if they want to speak their language or teach their specific religion in the local schools, the central government allows local people to make those decisions. The central government in a federal system focuses on things like national defense, managing interstate transportation, and regulating a common currency. The U.S. began as a federalist system.

Occasionally, a particularly troublesome provincial region or ethnicity will result in a sort of compromise situation, or devolution, in which a unitary system, like China, will grant a special exemption to one region or group to allow that location semi-autonomy or greater local control. Puerto Rico (United States) and Hong Kong (China) are excellent examples. However, there are many dozens of other similarly self-governing regions around the globe, most with names designating their status. This process is often beneficial to the unitary nations to prevent political instability and conflict; however, it can be withdrawn by the central government at any time.

The hostile fragmentation of a region into smaller political units is called Balkanization. This is often the result of unresolved centrifugal forces pulling the nation apart from within, such as economic disparity and ethnic or religious conflicts. The term derives from the Balkan Peninsula, formerly part of the Ottoman Empire, which occupied the area of current countries like Bulgaria, Albania, and Serbia. Nowadays, we use the term to refer to any country that breaks apart to form several countries or states, usually as a consequence of civil war or ethnic cleansing, as was seen in Armenia and Azerbaijan and in Bosnia and Herzegovina, Croatia, and the rest of the former Yugoslavia.

The United States has had a challenging time resolving whether it wants to pursue a unitary or federal style of government. This question has been one of the central political issues in the U.S. since even before the War for Independence. Initially, the United States was organized as a confederation, a loosely allied group of independent states united in the common goal of defeating the British. Operating under the Articles of Confederation from roughly 1776 to 1789, the new and decentralized country found itself challenged to do simple things like raise taxes, sign treaties with foreign countries, or print a common currency, because the central government (Congress) was so very weak. The Constitution that the U.S. government operates under today was adopted to create a balance of powers between the central government headquartered in Washington, DC, and the multiple state governments. Initially, states continued to operate primarily as separate countries. This is why, in the United States, the word state designates major subnational government units, rather than the word province, as is common in much of the world. In our early history, Americans thought of themselves as living in “The United Countries of America.”

The idea or concept of a state originated in the Fertile Crescent between the Persian Gulf and the Mediterranean Sea. The first ancient states that formed during this time were called city-states. A city-state is a sovereign state that encompasses a town and the surrounding landscape. Often, city-states secured the town by surrounding it with walls, and farmlands were located outside of the city walls. Later, empires formed when a single city-state militarily controlled several city-states.

The agrarian revolution and the Industrial Revolution were powerful movements that altered human activity in many ways. Innovations in food production and manufacturing transformed Europe, and in turn, political currents undermined the established empire mentality fueled by warfare and territorial disputes. A political revolution transformed Europe as various actions focused on ending continual warfare over territory and on introducing peaceful agreements that recognized the sovereignty of territory ruled by representative government structures. Various treaties and revolutions continued to shift power from dictators and monarchs to the general populace. The Treaty of Westphalia in 1648 and those that followed helped establish a sense of peace and stability in Central Europe, which had been dominated by the Holy Roman Empire and competing powers. The Holy Roman Empire, centered on the German states of Central Europe from 962 to 1806, should not be confused with the Roman Empire, which was based in Rome and ended centuries earlier. The French Revolution (1789–95) was an example of the political transformation taking place across Europe to establish democratic processes for governance.

The concept of the modern nation-state began in Europe as a political revolution laid the groundwork for a sense of nationalism: a feeling of devotion or loyalty to a specific nation. The term nation refers to a homogeneous group of people with a common heritage, language, religion, or political ambition. The term state refers to the government; for example, the United States has a State Department with a Secretary of State. When nations and states come together, there is a true nation-state, wherein most citizens share a common heritage and a united government.

European countries have progressed to the point where the concept of forming or remaining a nation-state is a driving force in many political sectors. Stated plainly, most Europeans – and, to an extent, most people everywhere – want to be members of a nation-state in which everyone shares the same culture, heritage, and government. The result of the drive for nation-states in Europe is Italy for Italians, a united Germany for Germans, and France for the French, for example. The truth is that this ideal is difficult to achieve. Though the political boundaries of many European countries resemble nation-states, there is too much diversity within these nations for the idea of a true nation-state to be a reality.

After the concept of the nation-state gained a foothold in Europe, the ruling powers focused on establishing settlements and political power around the world by imposing their military, economic, political, and cultural influence through colonialism. Colonialism is the control of previously uninhabited or sparsely inhabited land. Europeans used colonialism to promote political control, spread religion, extract natural resources, increase economic influence, and expand political and military power. The European states first colonized the New World of the Americas but later redirected their focus to Africa and Asia. This colonial expansion across the globe is called imperialism.

Imperialism is the control of territory already occupied and organized by an indigenous society. These two factors helped to spread nationalism around the globe and have influenced modern political boundaries.

The Shape of States

While not the only factor in determining the political landscape, the shape of a state is important because it helps determine internal communication, military protection, access to resources, and more. Find each example listed below on a political map, and try to find one other state that has the same physical shape.

Compact states have relatively equal distances from their center to any boundary, much like a circle. They are often regarded as efficient states. An example of a compact state would be Kenya.

Elongated states have a long and narrow shape. The major problem with these states is with internal communication, which causes isolation of towns from the capital city. Vietnam is an example of this.

Prorupted states occur when a compact state has a portion of its boundary extending outward exceedingly more than the other portions of the boundary. Some of these types of states exist so that the citizens can have access to a specific resource, such as a large body of water. In other circumstances, the extended boundary was created to separate two other nations from having a common boundary. An example of a prorupted state would be Namibia.

Perforated states have other state territories or states within them. A great example of this is Lesotho, which is a sovereign state within South Africa.

Fragmented states exist when a state is separated. Sometimes large bodies of water can fragment a state. Indonesia is an example of a fragmented state.

Landlocked states lack a direct outlet to a major body of water, such as a sea or ocean. This becomes problematic specifically for exporting trade and can hinder a state’s economy. Landlocked states are most common in Africa, where the European powers divided up Africa into territories during the Berlin Conference of 1884. After these African territories gained their independence and broke into sovereign states, many became landlocked from the surrounding ocean. An example here would be Uganda.
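Geographers sometimes attach a number to these shape categories. One common measure is the Polsby-Popper score, which compares a shape’s area to that of a circle with the same perimeter: a perfect circle scores 1.0, while elongated or fragmented shapes score near 0. The sketch below is a minimal illustration; the area and perimeter figures are rough, illustrative values, not authoritative measurements.

    from math import pi

    def polsby_popper(area_km2, perimeter_km):
        """Compactness in (0, 1]: 1.0 is a perfect circle."""
        return 4 * pi * area_km2 / perimeter_km ** 2

    # Rough, illustrative figures only.
    print(f"Kenya (compact):     {polsby_popper(580_000, 3_500):.2f}")
    print(f"Vietnam (elongated): {polsby_popper(331_000, 7_700):.2f}")

On these rough numbers, Kenya scores around 0.6 while Vietnam scores under 0.1, which matches the intuition that compact states sit closer to a circle than elongated ones.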

Boundaries

Boundaries are often divided into two categories: (1) natural, following the course of a physical feature such as a river or ridgeline; and (2) artificial, drawn by humans. However, so-called natural boundaries are still products of human choice – why establish that river, rather than this other one, as the boundary? Moreover, the political border may persist even after the physical feature that created the original boundary has changed its location. Thus, the boundaries of states bordering the Mississippi River are fixed to the river’s old course, though the location of its meanders has changed.

Boundaries play a critical role in how people interpret the world around them and can often be sources of conflict at all scales, from two neighbors arguing over where a fence should be placed to nation-states laying claim to parts of (or sometimes all) other sovereign nations. The Atlantic has an article titled, “The Case for Getting Rid of Borders – Completely” that argues that morally and ethically, people should have more equal rights no matter which nation-state they belong to.

It is important to look at how political boundaries are created, determined, and occasionally redrawn. Consider the case of Kashmir, a territory disputed between India and Pakistan. Within India, publishers are required to show Kashmir as part of India. In 2011, the Indian government ordered the Economist magazine to remove or cover such a map in 28,000 copies of its May edition that were for sale in India. Even the maps of well-known multinational companies, like Google Maps, are censored if they show the area as “disputed.” This means that Indians grow up always seeing Kashmir as a part of their country, of equal standing with undisputed states like Tamil Nadu or Assam. Any proposal to recognize Pakistani control over part or all of Kashmir would therefore provoke severe resistance from the Indian populace. Maps outside the disputant countries commonly show both boundaries, noting their disputed status. However, this compromise is not neutral, as it sends a message that both claims are equally legitimate. Imagine, for example, if Canada announced a claim to Washington State, and maps published outside North America began showing that state as a disputed territory.

Another interesting question comes up when learning about boundaries: who owns the sea? A maritime boundary is a conceptual division of the Earth’s water surface areas. As such, it usually defines areas of exclusive national rights over any natural resources within that boundary. A maritime boundary is delineated at a particular distance from the coastline, although in some cases the United Nations Convention on the Law of the Sea defines the boundary of international waters.

Controversies about territorial waters tend to encompass two dimensions: (a) territorial sovereignty, which is a legacy of history, and (b) relevant jurisdictional rights and interests in maritime boundaries, which are mainly due to differing interpretations of the law of the sea. Many disputes have been resolved through negotiations, but not all.

For instance, the Strait of Juan de Fuca is the wide waterway stretching from the Pacific Ocean on the west to the San Juan Islands on the east, with Vancouver Island to the north and the Olympic Peninsula to the south. This strait remains the subject of a maritime boundary dispute between Canada and the United States. The dispute is only over the seaward boundary extending 200 miles (320 km) west from the mouth of the strait. Both governments have proposed a boundary based on the principle of equidistance, but with different base point selections, resulting in small differences in the line. The government of British Columbia has also rejected proposals by the United States, arguing instead that the Juan de Fuca submarine canyon is the appropriate “geomorphic and physio-geographic boundary.” The resolution of the issue should be simple but has been hindered because it might influence other unresolved maritime boundary issues between Canada and the United States around the Gulf of Maine.
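The equidistance principle itself is simple geometry: a boundary point should be equally far from the nearest agreed “base point” on each coast, which is exactly why the choice of base points is contested. The sketch below illustrates this on a flat plane with invented coordinates (real delimitations work on the ellipsoid with surveyed base points).

    # Planar, illustrative coordinates in km; not real survey data.
    def nearest_dist(p, base_points):
        """Distance from point p to the closest base point in the list."""
        return min(((p[0] - b[0]) ** 2 + (p[1] - b[1]) ** 2) ** 0.5
                   for b in base_points)

    def is_equidistant(p, coast_a, coast_b, tol=0.5):
        """True if p is (nearly) equally far from both coasts' base points."""
        return abs(nearest_dist(p, coast_a) - nearest_dist(p, coast_b)) < tol

    coast_a = [(0, 0), (0, 10)]      # one state's chosen base points
    coast_b = [(20, 0), (20, 10)]    # the other state's base points
    print(is_equidistant((10, 5), coast_a, coast_b))      # True: on the midline
    # Moving one base point seaward shifts the equidistant line,
    # which is why base point selection is disputed.
    coast_b_alt = [(16, 0), (20, 10)]
    print(is_equidistant((10, 5), coast_a, coast_b_alt))  # False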

Theories of a State

State Formation and the Centralization of Power

Today we take it for granted that different societies are governed by different states, but this has not always been the case. Since the late nineteenth century, virtually the entirety of the world’s inhabitable land has been parceled up into areas with more or less definite borders claimed by various states. Earlier, quite large land areas had been either unclaimed or uninhabited, or inhabited by nomadic peoples who were not organized as states. In fact, for most of human history, people have lived in stateless societies, characterized by a lack of concentrated authority, and the absence of significant inequalities in economic and political power.

The first known states were created in Ancient Egypt, Mesopotamia, India, China, and the Americas (e.g., the Aztec and Inca civilizations). Most agree that the earliest states emerged when agriculture and writing made it possible to centralize power durably. Agriculture allowed communities to settle and also led to class division: some people devoted all their time to food production, while others were freed to specialize in other activities, such as writing or ruling. Thus, states, as an institution, were a social invention. Political sociologists continue to debate the origins of the state and the processes of state formation.

Most political theories of the state can roughly be classified into two categories. The first, which includes liberal or conservative theories, treats capitalism as a given and concentrates on the function of states in a capitalist society. Theories of this variety view the state as a neutral entity distinct from both society and the economy.

Marxist Theory

Marxist theory, on the other hand, sees politics as intimately intermingled with economic relations, and emphasizes the relationship between economic power and political power. Marxists view the state as a partisan instrument that primarily serves the interests of the upper class. Marx and Engels were clear that communism’s goal was a classless society in which the state would have “withered away.” For Marxist theorists, the role of the non-socialist state is determined by its function in the global capitalist order. Marx’s early writings portrayed the state as “parasitic,” built upon the superstructure of the economy and working against the public interest. He believed that the state mirrored societal class relations, that it regulated and repressed class struggle, and that it was a tool of political power and domination for the ruling class.

Anarchism

Anarchism is a political philosophy that considers states immoral and instead promotes a stateless society, anarchy. Anarchists believe that the state is inherently an instrument of domination and repression, no matter who is in control of it. Anarchists believe that the state apparatus should be dismantled entirely and an alternative set of social relations created, which would be unrelated to state power.

Pluralism

Pluralists view society as a collection of individuals and groups competing for political power. They then view the state as a neutral body that enacts the will of whichever group dominates the electoral process. Within the pluralist tradition, Robert Dahl developed the theory of the state as a neutral arena for contending interests. He also viewed governmental agencies as merely another set of competing interest groups. The pluralist approach suggests that the modern democratic state acts in response to pressures applied by a variety of related interests. Dahl called this kind of state a polyarchy. Pluralism has been challenged on the grounds that it is not supported by empirical evidence.

Hydraulic Civilization

According to one early theory of state formation, the centralized state developed to administer large public works systems (such as irrigation systems) and to regulate complex economies. This theory was articulated by the German-American historian Karl August Wittfogel in his 1957 book Oriental Despotism. Wittfogel argued that most of the earliest states were formed in hydraulic civilizations, by which he meant civilizations whose leaders controlled people by controlling the water supply. Often, these civilizations relied on complex irrigation systems that had to be centrally managed. The people, therefore, had good reason to give control to a central state; but in giving up control over the irrigation system, they also gave up control over their livelihoods, and thus the central state gained immense control over people in general. Although Wittfogel’s theory is well known, it has also been criticized as inaccurate. Modern archaeological and anthropological evidence shows that many early societies were not as centralized, despotic, or unequal as the hydraulic theory would suggest.

Coercion, War, and the State

An alternative theory of state formation focuses on the rise of more modern nation-states, explaining their emergence as a response to the need to marshal the resources required to fight and defend against wars. Sociologist Charles Tilly is the best-known theorist in this tradition. Tilly examined political, social, and technological change in Europe from the Middle Ages to the present and attempted to explain the unprecedented success of the nation-state as the dominant form of state on Earth. In other words, instead of asking (like Wittfogel) where the very first states came from, Tilly asked where the types of states with which we are most familiar came from, and why they became so prevalent.

According to Tilly’s theory, military innovation in pre-modern Europe (especially gunpowder and mass armies) made war extremely expensive. As a result, only states with a sufficient amount of capital and a large population could afford to pay for their security and ultimately survive in a hostile environment. Thus, the modern states and their institutions (such as taxes) were created to enable war-making.

Rationalization and Bureaucracy

Another theory of state formation focuses on the long, slow process of rationalization and bureaucratization that began with the invention of writing. The Greeks were the first people known to have explicitly formulated a political philosophy of the state and to have rationally analyzed political institutions. In Medieval Europe, feudalism furthered the rationalization and formalization of the state. Feudalism was based on the relationship between lord and vassal, which became central to social organization and, indeed, to state organization. The Medieval state was organized by Estates, or parliaments, in which key social groups negotiated with the king about legal and economic matters. Since then, states have continued to grow more rational and bureaucratic, with expanding executive bureaucracies, such as the extensive cabinet system in the United States. Thus, states have evolved from relatively simple but powerful central powers into sophisticated and highly organized institutions.

4.2 Political Identities

Separatist Movements

Occasionally, people within a country find themselves unable to agree on the rules under which they can all live peaceably. When this happens, a separatist movement is likely to ensue. Separatist movements often revolve around questions of control over religious practice, language, or other cultural matters. Usually, it is a minority group, often living in a peripheral region of the country, that is the offended party ready to break away from the majority group living in the country’s hearth or core region.

Thousands of separatist movements have marked world history, and hundreds of separatist groups are active today. Within prosperous Europe alone, dozens of ethnic groups (nations) would like to break away and establish their own nation-states. In principle, Americans and American foreign policy support the right to self-determination, which is essentially the right of a group of people to control the political system of the territory in which they live. Indeed, the United States itself was born of a rebellion by separatists living in a marginalized, peripheral region of the British Empire; the American colonists’ rallying cry for self-determination was “no taxation without representation.” For many years, Scotland has debated its inclusion in the United Kingdom (England, Wales, Northern Ireland, and Scotland).

Scottish people, many of whom resent the dominance of their more numerous English neighbors, held a referendum in late 2014 to decide the question, “Should Scotland be an independent country?” Ultimately, the Scots voted to stay part of the United Kingdom; but to keep Scotland in the union, the British government gave in to several demands by Scottish separatists for additional autonomy from British (English) control. Fast forward to June 2016, when the U.K. shocked the world by voting to leave the European Union: Scottish separatists took advantage of the political and social instability to renew their call for independence and self-determination.

Politics and Identity

Separatist movements do not always arise from perceived differences in identity. Just as often, the real difference is economic, though those who would lead a group to rebel rarely admit this basic fact. The American Civil War was less a fight over identity than a fight over the rules governing slavery and the economics of slavery. Both sides of the conflict identified as American, but Southerners believed control should be local, and most Northerners believed that some of that local control, particularly regarding slavery, should be a matter of national control.

Perhaps the most interesting thing about civil wars and separatist movements is that often those who suffer the most gain the least when fighting breaks out. As was the case in the American Civil War, the vast majority of soldiers from the South owned no slaves and stood to gain from wage competition in the labor market upon emancipation. It was the Southern elite that needed slavery. So how is it that people without much to fight for can be convinced to fight?

Some of the answers lie in the ability of people in power to manipulate the opinions of segments of a population effectively. Populist politicians often convince people that their individual or their groups’ problems are the results of unfair treatment by another group. Sometimes, these arguments are legitimate and can be supported by fact; other times, there is insufficient evidence to justify rebellion or secession.

It is often nearly impossible to determine precisely whose interests a secessionist group represents. Sometimes, secession movements are led by a small political elite that claims the right to represent a much larger majority. However, the elite may not be representative of the majority of the people, and their motives may be strictly personal (wealth, power). This is why U.S. foreign policy finds questions of self-determination especially perplexing. Our government has yet to find a consistent response to groups who desire to control their territory. In some cases, the U.S. has supported the rights of subnational groups to create a new country; the Clinton administration largely supported the dissolution of Yugoslavia into multiple new countries.

In other instances, the U.S. has worked against groups trying to exercise that right. Take, for example, the Kurdish people, an ethnic minority in northern Iraq, eastern Turkey, and northwestern Iran. The Kurds have a language, history, and identity separate from the Iraqis, Iranians, and Turks with whom they share space. Many Kurdish nationalists argue that there should be a new nation-state called Kurdistan. It would seem the Kurds have a legitimate argument, and there have been several Kurdish insurrections over the years. Each time, though, Kurdish rebellions have been met with violence by the governments of Turkey, Iraq, and Iran. The U.S. government has supported some measures of Kurdish autonomy in Iraq and Iran, but not in Turkey, presumably because that country is a strategic ally of the U.S.

Terrorism

Terrorism is proving to be an enduring global threat, because modern terrorist groups have become more lethal, networked, and technologically savvy. Today, groups such as the Islamic State of Iraq and Syria (ISIS) and al-Qa’ida can control land and hold entire cities hostage. This power mainly stems from their ability to generate revenue from numerous criminal activities with almost complete impunity.

At the time of the 11 September 2001 attacks on the World Trade Center and the Pentagon, al-Qa’ida numbered around 300 mujahedeen in Afghanistan with the support of the Taliban. Fifteen years later, two global terrorist groups had emerged, transforming the global threat landscape – al-Qa’ida and ISIS. At the end of 2015, ISIS controlled 6 to 8 million people in an area the size of Belgium and maintained a force of between 30,000 and 50,000 fighters while attracting the largest number of foreign fighters in history.

Currently, al-Qa’ida and ISIS are escalating their attacks in an intense rivalry for global prowess and international reach while competing for affiliates worldwide. With its determination to govern and control territories in the Middle East, Africa, and Asia, ISIS is currently a more significant threat than al-Qa’ida. It represents a three-dimensional threat: a core situated in Iraq and Syria, ISIS regional affiliates, and ISIS online. This constellation has spawned ISIS-inspired foreign fighters, ISIS self-inspired radicalized cells, ISIS affiliates, and, most importantly, ISIS criminal financing operations. As will be shown, ISIS criminal networks and operations are supported by all three dimensions.

Since ISIS declared its caliphate in June 2014, ISIS core, regional affiliates, and inspired groups have carried out more than 4,000 attacks in 28 countries. ISIS’s geographic presence has grown exponentially since it hit the world stage in 2014. ISIS has a total of 30 self-proclaimed wilayats, or provinces, ten of which are outside of ISIS’s core base in Syria and Iraq. These include regional affiliates in Algeria, Egypt, Libya, Nigeria, Saudi Arabia, and Yemen, as well as allied affiliates in Afghanistan and Pakistan. ISIS in Afghanistan consists of former members of the Afghan Taliban, the Haqqani Network, and the Islamic Movement of Uzbekistan (IMU), and it is supported by Jamaat Ul Dawa al Quran (JDQ). These groups have generated millions annually from narcotics trafficking and the illegal extraction of precious stones and timber. As former members continue to splinter off, ISIS is thus generating income not only from its wilayats but also through the criminal markets of other groups. ISIS is actively making links to Southeast Asian terror groups as well. Home to 62 percent of the world’s Muslims, the Asia-Pacific region offers ISIS not only a new base from which to establish power, but also new avenues of revenue to exploit.

Al-Qa’ida similarly operates on a franchise model, with offshoots in Africa and Asia, and it is developing new relationships with groups in the Caucasus, India, and Tunisia. Al-Qa’ida is also working toward territorial control; al-Qa’ida in the Arabian Peninsula (AQAP) continues to have a strong presence in Yemen and remains the group’s greatest direct threat to the United States.

The opportunistic ability of criminal-terrorist groups to take over geographic areas is due to collapsing state power and conflict in the Middle East and North Africa. The instability that followed in the wake of the Arab Spring, which drove hundreds of thousands of people to try to escape to Europe, further undermined state control, challenging the authoritarian order in six Arab states. Four states – Libya, Iraq, Syria, and Yemen – are failing or partially failing, leading to chronic conflict, lawlessness, and extreme poverty in the region. This has created an opportunity for radical religious extremists, terrorists, and criminal groups to prosper. Several states in the region can no longer entirely control and contain criminality and violent terror within their borders.

States worldwide are being challenged by criminal-terrorist networks, especially in prisons, urban areas, and cyberspace. Prisons have become a place where terrorists and criminals meet, plan, plot, and recruit. The most prominent example is Abu Bakr al-Baghdadi, the leader and self-declared caliph of ISIS, who spent formative time at Camp Bucca, a U.S.-controlled prison in Iraq. There he met Samir Abd Muhammad al-Khlifawi, a former colonel in the intelligence service of Saddam Hussein’s air defense forces, who became the architect of the ISIS strategy for taking over towns, a strategy that relied heavily on surveillance and espionage. The Iraqi government estimates that 17 of the 25 most important ISIS leaders spent time in U.S. prisons in Iraq, planning the creation of ISIS and its ideology.

In the West, prisons have also become a networking and learning environment where terrorists and criminals can share an ideology and build networks. A large percentage of terrorist recruits – some estimates run as high as 80 percent – have criminal records varying from petty to serious crimes. The recruitment of criminals provides terrorists with the skill sets needed to succeed: a propensity to carry out violent acts, the ability to act discreetly, and access to criminal markets for weapons and bomb-building resources. A study of extremists who plotted attacks in Western Europe found that 90 percent of the cells were involved in income-generating criminal activities and half were entirely self-financed; only one in four received funding from international terrorist organizations.

For Islamist extremist groups, the prison has become a vital recruitment location. They especially target young petty criminals with Middle Eastern backgrounds. The Charlie Hebdo attackers Amedy Coulibaly and Cherif Kouachi, for example, met in prison. There, they also met al-Qa’ida’s top operative in France, Djamel Beghal, who served time for attempting to bomb the U.S. Embassy in Paris in 2001. Abdelhamid Abaaoud, the mastermind of the Paris plot, as well as his co-conspirator Salah Abdeslam, also followed a trajectory from petty crime to armed robbery, both ending up in prison, where they met and were radicalized by Fouad Belkacem, the former leader of the Brussels terrorist recruiting organization Sharia4Belgium.

State power is also progressively being weakened in large cities and ports. Urban centers harbor lawless enclaves that are exploited by criminals, terrorists, militants, and bandits. In so-called feral cities, such as Mogadishu, Caracas, Ciudad Juárez, and Raqqa, governments have lost their ability to govern or maintain the rule of law. To build up more resilience in cities, the U.N. launched the Strong Cities Network (SCN) in September 2015.

While terrorists have created insecurity in the real world for decades, the last 15 years have seen a significant paradigm shift: terrorists are now engaged in the world’s greatest open space, the internet. ISIS’s growing global influence marks the first time in history that a terrorist group has held sway in both the real and virtual worlds. Cyberspace has become a new domain for violence. It is used to project force with videos of torture and assassinations, as well as to recruit.

In cyberspace, extremist groups’ greatest success is their ability to use propaganda strategically to entice fighters and followers. ISIS uses the digital world to create an idealized version of itself, a reality show designed to find resonance and meaning among its diverse supporters. For the adventure seeker, it broadcasts its military power and violence; for those looking for a home, job, refuge, religious fulfillment, or meaning in life, it uses this medium to present an idyllic world, depicting the caliphate as a peaceful, benevolent state committed to helping the poor. ISIS maintains a successful media wing, Al-Furqan, which includes over 36 separate media offices. Together, they produce hundreds of videos, as well as Rumiyah (formerly Dabiq), ISIS’s online propaganda magazine. A study by RAND found that ISIS supporters sent over six million tweets from July 2014 to May 2015.

More than 40,000 foreign fighters from over 120 countries have flooded into Syria since the start of the country’s civil war, including 6,900 from the West, the vast majority of whom joined ISIS. The group is dependent on recruits from Europe for significant funding. It advises aspiring fighters to raise funds before leaving to join ISIS. European recruits’ moneymaking schemes include petty theft, as well as defrauding public institutions and service providers. British foreign fighters committed large-scale fraud by pretending to be police officers and targeting U.K. pensioners for their bank details, earning more than US$1.8 million before being apprehended.

ISIS has also been successful at using cybercrime to fund itself. It advises fighters on how to transfer funds through money service businesses, pre-paid debit cards, Apple Wallet, informal money transfer systems (hawala), and Dark Wallet, a dark web app that claims to anonymize bitcoin transactions. ISIS also instructs its followers to use the internet to acquire weapons. Cells planning attacks in Europe and ‘lone wolves’ are increasingly turning to the dark web to obtain weapons: 57 people were arrested in France in 2015 for buying firearms over the internet.

The recent increase in global terrorism can be explained by several factors that have converged: war, religious and ethnic conflict, corrosive governments, weak militaries, failing states, and the growth of information technology. However, one of the most important developments is the increasing collaboration between criminal and terrorist networks. While criminals used to focus only on revenue generation and terrorists were driven by political motives, we are currently witnessing a convergence of terrorism and crime. These new hybrid groups are driven by both revenue generation and political motives, resulting in criminal and terrorist groups with historically unprecedented resources and transgressive aims. The consequence of this expanding threat can be measured by how terrorist groups have increased their sphere of influence worldwide.

Geospatial Intelligence

Geospatial technology is used heavily in geopolitical conflicts by agencies such as the National Geospatial-Intelligence Agency, the National Security Agency, the Department of Homeland Security, and the Central Intelligence Agency (CIA). Episode 3 of the Geospatial Revolution video series focuses on the use of geospatial technology in war and conflict.

Geospatial technology can also be used for humanitarian efforts as a way to end conflict or monitor situations before they escalate. One such organization, the Satellite Sentinel Project, was created by The Enough Project and Maxar (formerly DigitalGlobe), one of the largest private satellite imagery corporations. The organization first used satellite imagery and Google Earth to monitor potential humanitarian conflicts along the border of Sudan and the newly created South Sudan. Now it uses satellite imagery to track poachers whose black-market profits fund armed groups such as the Lord’s Resistance Army (LRA).

4.3 International Relations

The study and practice of international relations have led scholars to suggest different ways that states might, and should, behave toward their neighbors around the world.

Theories of International Relations

Realism

Realism suggests that states should, and do, look out for their own interests first. Realism presumes that states are out for themselves first and foremost; the world is, therefore, a dangerous place, and a state has to look out for No. 1 and prepare for the worst. When George W. Bush convinced the U.S. Congress that he should send U.S. soldiers into Iraq in 2003 and take out Saddam Hussein, this was realism in action. Realism suggests that international relations are driven by competition between states, and states therefore do and should try to further their own interests. What matters, then, is how much economic and military power a state has. When your neighbor misbehaves, you cannot call the police.

Classical realists say this is just human nature. People, by nature, are at some level greedy and insecure and behave accordingly. So even if you are not greedy and insecure, you have to behave that way, because that is the game. Structural realists say it is more about how the world is organized: an anarchic system creates the Hobbesian state of nature, a reference to Thomas Hobbes, the 17th-century English philosopher who justified the existence of the state by comparing it to a somewhat hypothetical “state of nature,” a war of all against all. So states should seek peace, but prepare for war.

This tends to make national security look like a zero-sum game: Anything I do to make myself more secure tends to make you feel less secure, and vice versa. A realist might counter that a balance of power between states, in fact, preserves the peace, by raising the cost of any aggression to an unacceptable level.

Realists argue that war, at some point, is inevitable. Anarchy persists, and it is not going away anytime soon.

Liberalism

Liberalism suggests that, in fact, states can peacefully coexist and are not always on the brink of war. Liberal scholars point out that despite the persistence of armed conflict, most nations are not at war most of the time. Most people around the world do not get up and start chanting “Death to America!” and trying to figure out whom they can bomb today. Liberalism argues that relations between nations are not always a zero-sum game. A zero-sum game is one in which any gain by one player is automatically a loss for another player. My gains in security, for example, do not have to make you worse off, and your gains in anything do not have to make me worse off. So the idea that international relations must be conducted as though one were always under the threat of attack is not necessarily indicative of reality.

There are different flavors of liberalism. Liberal institutionalism puts some faith in the ability of global institutions to eventually coax people into getting along as opposed to going to war. Use of the United Nations, for example, as a forum for mediating and settling disputes will eventually promote respect for the rule of international law in a way that parallels respect for the law common in advanced democracies. Liberal commercialism sees the advance of global commerce as making war less likely: war is not very profitable for most people, and it is bad for the economy. Liberal internationalism trades on the idea that democracies are less likely to make war than are dictatorships, if only because people can say no, either in legislatures or in elections. Consider that public protest in the U.S. helped end U.S. involvement in Vietnam; that kind of thing does not always happen in non-democratic states, although it can. Argentina’s misadventures in Las Malvinas – the Falkland Islands – led to protests that brought down a longstanding military dictatorship and restored democracy to the nation in 1982. Together, these three are sometimes called the Kantian triangle, after the German philosopher Immanuel Kant (1724–1804), who outlined them in his 1795 essay, Perpetual Peace.

The liberal argument that states can learn to get along finds some support in Robert Axelrod’s The Evolution of Cooperation, which used an actual experiment involving many players and the prisoner’s dilemma game to show how people, and perhaps states, could learn to cooperate. The prisoner’s dilemma is a relatively simple game that is useful for understanding various parts of human behavior. In this game, you have two players, both prisoners. Each player has two choices: defect to the authorities and rat out the other player in exchange for a reduced sentence, or cooperate with the other player by staying silent. If the players each defect, they get 1 point apiece; if they both cooperate, they get 3 points apiece. If, however, one player cooperates and the other defects, the defector gets 5 points and the cooperator gets zero.

Given that set of constraints, in a realist world both players defect and score only 1 point each. The best result would be for both to cooperate and generate the most points between them. In the Axelrod experiment, the game was iterated, or repeated, so that in a round-robin featuring dozens of players, each player played every other player multiple times. The players were all notable game theorists, and each devised a particular strategy in an attempt to win the game. What Axelrod found was that the player who used a strategy called “tit-for-tat” won. Tit-for-tat begins by cooperating and then, in each subsequent round, does whatever the other player did in the previous round. In a repeated game, which certainly describes relations between states, players eventually learn to cooperate. Axelrod cites real-world examples of where this kind of behavior occurred, such as the German and Allied soldiers in the trenches of World War I, who agreed at various times not to shoot each other or to shell incoming shipments of food. As the soldiers came to understand that they would be facing each other for some time, refraining from killing each other meant that they all got to live.
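The logic of the iterated game is easy to make concrete in code. Below is a minimal sketch in Python, assuming the payoff values given in the text (3 points each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a lone cooperator); the strategy functions are simple illustrations, not Axelrod’s actual tournament entries.

```python
# Minimal sketch of the iterated prisoner's dilemma described above.
# Payoffs follow the text: mutual cooperation = 3 each, mutual
# defection = 1 each, lone defector = 5, lone cooperator = 0.

PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    # Cooperate first; afterwards copy the opponent's last move.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): cooperation pays
print(play(always_defect, always_defect))  # (10, 10): mutual defection
print(play(tit_for_tat, always_defect))    # (9, 14): exploited only once
```

Over ten rounds, two tit-for-tat players far outscore two habitual defectors, which is the heart of Axelrod’s finding: repetition makes cooperation rational.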

Constructivism

Constructivism is another, equally interesting way of looking at international relations. It may tell us more about why things are happening the way they do, but somewhat less about what we should do about it. Constructivism argues that culture, social structures, and human institutional frameworks matter. Constructivism relies in part on the theory of the social construction of reality, which says that whatever reality is perceived to be, for the most part people have invented it. Of course, if the theory were entirely true, then the very idea of the social construction of reality would also be socially constructed, and therefore potentially untrue. To the extent that reality is socially constructed, people can make choices. Hence the constructivist argument is, in part, that while the world system is indeed a form of anarchy, that does not demand a realist response to foreign policy. People can choose to do otherwise. So constructivists might argue that the end of the Cold War between the U.S. and the Soviet Union was at least in part a decision by Soviet President Mikhail Gorbachev to change his thinking. He then attempted to ratchet down tensions with the U.S. and to liberalize Soviet society. The fact that the Soviet Union promptly disintegrated does not change that.

Feminism

Realism, liberalism, and constructivism may be the three most prominent theories of international relations, but they are by no means the only ones or the most important. Feminist scholars look at international relations through the prism of gender relations, noting that for much of human history, women have been relegated to a sideline role in politics and government. This is not wise: More than half the people in the world are women.

Nonetheless, males have dominated both the study and practice of international relations, but feminist scholars note that women’s roles as wives, mothers, and workers have made all of that possible. Also, a female perspective on foreign policy might be different. Feminist theory sometimes argues that having more women in positions of power could change things, as women may be more likely to believe peace through international cooperation is possible.

Feminist international relations theory has variants, of course. Liberal feminism wants to ensure that women have the same opportunities in society as men, so it is liberal in the broader sense of general support for democratic capitalism. Critical feminism, on the other hand, sees capitalism as the source of women’s oppression and seeks to create new structures for society. Cultural or essentialist feminism stresses the differences in how women view and think about the world, arguing that women’s approach to the world would be more likely to bring peace and avoid conflict.

As usual, there is probably some kernel of truth in all of these ideas, and places where we could find cases that contradict these notions. Clearly, for example, women tend to be less involved in violent crime, and women in some parts of the world are being sold into slavery and prostitution, where their lives are primarily controlled by men. On the other hand, it was a female politician, former British Prime Minister Margaret Thatcher, who marshaled her country’s military to go to war with Argentina and reclaim the Falkland Islands in 1982. However, while history is full of valiant female warriors and influential leaders – from the Trung sisters and Trieu Thi Trinh of Vietnam, to Joan of Arc, and Queen Elizabeth I – they are much less common than are men famous for their conquering exploits. Moreover, the women warriors, generally, are famous for having defended their homelands as opposed to conquering somebody else’s. While some men have felt threatened by the rise of feminism in the last 60 years, it is an opportunity to look at the world in a slightly different way, perhaps shedding some light on why things happen the way they do.

Neo-Marxism

Neo-Marxists look at international relations through the perspective of our old friend Karl Marx. Remember that Marx saw the world in terms of its productive relations, so that how we organize production determines social and political relations as well. The neo-Marxist theory applies this to international relations, and tends to argue that capitalism drives states to compete and attempt to dominate each other.

For example, under the variant known as Marxism-Leninism, named after the Russian revolutionary leader Vladimir Ilyich Lenin (1870–1924), world relations are defined by the desire of industrial nations to develop both sources of raw materials and markets for finished products (what Lenin called the core and the periphery). Lenin was writing at a time when most of Africa had been carved into colonies by the European powers, and the British Empire still stretched from Africa to India to Hong Kong, so there was some evidence for what he was saying. The collapse of the Soviet empire and China’s turning away from purely Marxist economics have taken some of the steam out of the Marxian railroad of history, and we may not agree with Marx and Lenin’s suggestion that a socialist dictatorship is a necessary step on the road to nirvana. However, it could be wrong to reject their analysis altogether. Economic problems and conflicts do continue to inform international relations, and states continue to try to acquire raw materials as well as markets for finished goods. China, for example, is investing heavily in Africa to lock up supplies of minerals for its growing manufacturing sector. The Chinese are not always the best employers. To the extent that they mistreat African workers, the states where this happens will face the competing demands of a big country that is paying them much money for resources and the needs of their own citizens who work for the Chinese.

Neo-Marxists might point to this as an example of liberal commercialism simply being the capitalist class protecting its own. China is nominally still a communist state, but its economic system is much more a sort of state-sponsored capitalism. Capitalism, neo-Marxists argue, in its relentless quest for rising profits, leads to the degradation and impoverishment of workers. The realist explanation of U.S. policy toward Central America is that the U.S. propped up right-wing dictatorships there because they opposed communism. The other explanation is that U.S. commercial interests, such as the United Fruit Company, pushed to maintain their stranglehold on the banana industry. This helped lead, for example, to a CIA-sponsored coup in Guatemala in 1954. The company had convinced the U.S. government that the democratically elected Guatemalan president was pro-Soviet. What is known for sure is that he was promising to redistribute land to Guatemalan peasants, which would have threatened the company’s monopoly on the banana trade.

In the view of neo-Marxist analysis, the Cold War was about the threat to U.S. business interests. The same would be true for the first and second Gulf Wars, with the U.S. fighting Iraq in part to preserve access to Middle Eastern oil. The United States intervened when Iraq invaded Kuwait much more quickly than it intervened in the former Yugoslavia, where Serbs were killing Bosnian Muslims in much higher numbers than Iraqis were killing Kuwaitis. Neo-Marxism is also realist in its orientation, since it presumes that conflict, actual and potential, between states is the reality of international affairs. However, in neo-Marxist eyes, that conflict is driven by the struggle between business interests and workers.

International Institutions

Even as the Cold War dragged on, the nations of the world created international forums for attempting to address disputes between nations. World War I, known at the time as the war to end all wars, prompted the victors to create an international body known as the League of Nations. At its peak, it included 58 nations and created several forums for addressing political and economic issues. It operated from 1920 until the early years of World War II and suffered from the start from the failure of the United States to join. The U.S. became somewhat isolationist following World War I, the end of which created only an uneven peace and seemed to foster as many problems as it solved.

Nonetheless, the league represented the high point of interwar idealism, built on a belief that nations could talk instead of shoot, and that diplomacy would solve more problems than would bombs. Despite its best intentions, it was mostly powerless, and the member nations failed to act when Italy invaded Ethiopia unprovoked in 1935. The league effectively collapsed with the start of World War II.

Following the end of the war, however, the nations gathered to try again, creating the United Nations in 1945. The U.N., headquartered in New York City, declared its support in its charter for a broad range of human rights and attempted to provide a multilateral forum for talking things out. Although every member nation gets one vote in the General Assembly, certain decisions must be funneled through the 15-member Security Council, which has five permanent members: the United States, France, China, the Russian Federation (which inherited the Soviet Union’s seat), and the United Kingdom. The other ten members are elected by the General Assembly to two-year terms, with each region of the globe represented on the council.

The five permanent members each have veto power and can block action by the council. Also, since the members often take what can only be described as a realist perspective in their approach to foreign policy, Russia may seek to block concerted action in war-torn Syria, where it has interests, just as the U.S. will block U.N. resolutions condemning Israel’s handling of the Palestinian question – which is, in case you have missed it, whether there will ever be a fully sovereign Palestinian state. The Security Council’s permanent membership is overwhelmingly white and western. One suggestion has been to add Brazil, India, Germany, and Japan (sometimes called the G-4) as permanent members, plus perhaps one African and one Arab state. The existing permanent members have not exactly jumped on that bandwagon, as doing so would reduce their power on the council. The U.S. supports adding Japan and perhaps India; the Chinese oppose adding Japan. Great Britain and France have supported adding the entire G-4.

The U.N., through its member nations and its various branches, has had some success. Member nations have contributed combat troops for peacekeeping missions, which attempt to separate belligerent groups in one country or region to forestall all-out war. Since its inception, it has negotiated 172 peace settlements that have prevented all-out war in various parts of the world. U.N.-led efforts, via the World Health Organization, to stamp out various diseases have met with some success; few nations will object to efforts to end deadly diseases such as smallpox. U.N. cultural efforts have probably also helped preserve important historical sites all over the world, and have at least underscored the importance of preserving some of our shared past. So while the U.N. has not managed to end war, it has not been an abject failure.

One of the essential documents that came from the United Nations is the Universal Declaration of Human Rights (http://www.un.org/en/documents/udhr/). Echoing documents such as the United States Bill of Rights, this declaration sets out the rights that humans have throughout the world, no matter which nation they are citizens of.

The U.N. includes the International Court of Justice, which has been used to settle disputes between nations. It has 15 justices elected by the U.N. General Assembly, and while the Security Council can enforce its decisions, council members may also veto that action. Consequently, the court has acted with mixed success. In 1986, for example, the court ruled that U.S. actions in Nicaragua violated international law; the U.S. ignored the decision. In other instances, the court has been able to help solve border disputes between nations. Special courts have also been established by the U.N. to try war criminals from conflicts in Rwanda and the former Yugoslavia.

Other international organizations have had some impact globally, particularly in economic areas. The World Bank and the International Monetary Fund have attempted to spur economic development and end poverty, with decidedly mixed results. Critics abound on both the left and the right: conservative critics say these institutions waste too much money, while liberal and left critics say they merely help cement the economic dominance of the western world. Sometimes they fund projects that make sense, such as wastewater treatment projects around the world; at other times they support efforts – like digging a canal to flood a seasonal river in Africa to produce fish in the desert – that manage only to produce the most expensive fish in the world. Similarly, the World Trade Organization (WTO), which is a forum for resolving trade disputes and encouraging open trade, is neither all good nor all bad.

Not every intergovernmental organization (IGO) is global in scope. The world is peppered with regional organizations, ranging from the European Union (EU) to the African Union.

The EU is particularly noteworthy. It grew out of the end of World War II, beginning with a customs union to ease trade between Belgium, the Netherlands, and Luxembourg. From there it grew into trade agreements over coal and steel, to the European Common Market, and finally to the EU in 1993. It now has 27 member states in a political and economic union. While not quite the United States of Europe, it does have an elected parliament with the ability to make some common law for the entire group, and a common currency, the euro. Travel and trade over national borders are considerably eased, and crossing from one EU state to another is now little more complicated than crossing from one U.S. state to another.

No other intergovernmental organization is quite that extensive. For example, ASEAN, the Association of Southeast Asian Nations, has ten member states and focuses on promoting economic development and shared expertise and resources. The North Atlantic Treaty Organization (NATO) is a relic of the Cold War. Initially created to help forestall Soviet aggression in Europe, it remains a mutual defense pact between the U.S., Canada, and much of Europe. An attack on one member is regarded as an attack on all, so the U.S. response to 9/11 was in fact a NATO response.

To the extent that international institutions work at all, it is because nations adhere to what the institutions say. While a hard-line realist perspective would encourage ignoring the U.N. or the WTO, a liberal perspective would suggest that nations go along if only because it is in their interest for others to do the same. A nation cannot very well expect another nation to observe the rule of law if it does not do so itself. International law, therefore, works because of reciprocity—each state expects the others to behave the same way, so it adheres to the law to encourage others to do the same.

The United Nations

The United Nations (UN), headquartered in New York City, is an international organization whose stated aims are facilitating cooperation in international law, international security, economic development, social progress, human rights, and the achievement of world peace. The UN was founded in 1945 after World War II to replace the League of Nations, to stop wars between countries, and to provide a platform for dialogue. It contains multiple subsidiary organizations to carry out its missions.

Replacing the League of Nations

The League of Nations failed to prevent World War II (1939–1945). Because of the widespread recognition that humankind could not afford a third world war, the United Nations was established to replace the flawed League of Nations in 1945. The League of Nations formally dissolved itself on April 18, 1946, and transferred its mission to the United Nations: to maintain international peace and promote cooperation in solving international economic, social, and humanitarian problems.

Creation of the United Nations

The earliest concrete plan for a new world organization was begun under the aegis of the U.S. State Department in 1939. Franklin D. Roosevelt coined the term “United Nations” to describe the Allied countries. The term was first officially used on January 1, 1942, when 26 governments signed the Declaration by United Nations, pledging to continue the war effort.

On April 25, 1945, the UN Conference on International Organization began in San Francisco, attended by 50 governments and several non-governmental organizations involved in drafting the United Nations Charter. The UN officially came into existence on October 24, 1945, upon ratification of the Charter by the five then-permanent members of the Security Council – France, the Republic of China, the Soviet Union, the United Kingdom, and the United States – and by a majority of the other 46 signatories. The first meetings of the General Assembly, with 51 nations represented, and the Security Council, took place in London in January 1946. Since then, the UN’s aims and activities have expanded to make it the archetypal international body in the early 21st century.

UN Peacekeeping

United Nations peacekeeping began in 1948. Its first mission was in the Middle East, to observe and maintain the ceasefire during the 1948 Arab-Israeli War. Since then, United Nations peacekeepers have taken part in a total of 63 missions around the globe, 17 of which continue today. The peacekeeping force as a whole received the Nobel Peace Prize in 1988.

Though the term “peacekeeping” is not found in the United Nations Charter, the authorization is generally considered to lie in (or between) Chapter 6 and Chapter 7. Chapter 6 describes the Security Council’s power to investigate and mediate disputes, while Chapter 7 discusses the power to authorize economic, diplomatic, and military sanctions, as well as the use of military force, to resolve disputes. The founders of the UN envisioned that the organization would act to prevent conflicts between nations and make future wars impossible; however, the outbreak of the Cold War made peacekeeping agreements extremely difficult due to the division of the world into hostile camps. Following the end of the Cold War, there were renewed calls for the UN to become the agency for achieving world peace, and the agency’s peacekeeping dramatically increased, authorizing more missions between 1991 and 1994 than in the previous 45 years combined.

During the Cold War

Throughout the Cold War, the tensions on the UN Security Council made it challenging to implement peacekeeping measures in countries and regions seen to relate to the spread or containment of leftist and revolutionary movements. While some conflicts were separate enough from the Cold War to achieve consensus support for peacekeeping missions, most were too deeply enmeshed in the global struggle.

Though the UN’s primary mandate was peacekeeping, the division between the US and USSR often paralyzed the organization, generally allowing it to intervene only in conflicts distant from the Cold War. In 1956, the first UN peacekeeping force was established to end the Suez Crisis; however, the UN was unable to intervene against the USSR’s simultaneous invasion of Hungary following that country’s revolution. In 1960, the UN deployed the United Nations Operation in the Congo (ONUC), the most significant military force of its early decades, to bring order to the breakaway State of Katanga, restoring it to the control of the Democratic Republic of the Congo by 1964.

The UN Peacekeeping Force in Cyprus (UNFICYP), begun in 1964, attempted to end the conflict between the ethnic Greeks and Turks on the island and to prevent wider conflict between NATO members Turkey and Greece. A second observer force, UNIPOM, was dispatched in 1965 to the areas of the India-Pakistan border not monitored by the earlier mission, UNMOGIP, after a ceasefire in the Indo-Pakistani War of 1965. Neither of these disputes was seen to have Cold War or ideological implications.

There was one exception to the rule. In the Mission of the Representative of the Secretary-General in the Dominican Republic (DOMREP), 1965–1966, the UN authorized an observer mission in a country where ideological factions were facing off. However, the mission was only initiated after the US intervened unilaterally in a civil war between leftist and conservative factions. The US had consolidated its hold and invited a force of the Organization of American States (dominated by US troops) to keep the peace. The mission was approved mainly because the Americans presented it as a fait accompli and because the UN mission was not a full peacekeeping force: it included only two observers at any time and left the peacekeeping to another international organization. It was the first time the UN operated in this manner with a regional bloc.

The UN also assisted with two decolonization programs during the Cold War. In 1960, the UN sent ONUC to help facilitate the decolonization of the Congo from Belgian control. It stayed on until 1964 to help maintain stability and prevent the breakup of the country during the Congo Crisis. In West New Guinea from 1962 to 1963, UNSF maintained law and order while the territory was transferred from Dutch colonial control to Indonesia.

After the Cold War

With the decline of the Soviet Union and the advent of perestroika, the Soviet Union drastically decreased its military and economic support for several “proxy” civil wars around the globe. It also withdrew its support from satellite states; one UN peacekeeping mission, UNGOMAP, was created to oversee the Pakistan–Afghanistan border and the withdrawal of Soviet troops from Afghanistan as the USSR refocused domestically. In 1991, the USSR dissolved into 15 independent states. Conflicts broke out in two former Soviet republics – the Georgian–Abkhazian conflict in Georgia and civil war in Tajikistan – which were eventually policed by the UN peacekeeping forces UNOMIG and UNMOT, respectively.

With the end of the Cold War, several nations called for the UN to become an organization of world peace and to do more to encourage the end of conflicts around the globe. The end of political gridlock in the Security Council helped the number of peacekeeping missions increase substantially. In a new spirit of cooperation, the Security Council established larger and more complex UN peacekeeping missions. Furthermore, peacekeeping came to involve more and more non-military elements that ensured the proper operation of civic functions, such as elections. The UN Department of Peacekeeping Operations was created in 1992 to support the increased demand for such missions. Several missions were designed to end civil wars in which competing sides had been sponsored by Cold War players.

The end of the Cold War in the early 1990s changed the foreign policy equation radically. Gone, or at least significantly reduced, was the nuclear standoff between the United States and the Soviet Union. It has been replaced by a somewhat multipolar world, in which the United States is the dominant military power but finds itself among competing power centers in Europe, China, India, and Russia, with radical change occurring in the Middle East and North Africa, potential conflicts with Iran, and the threat of global terrorism a reality since the tragedies of 9/11.

So while this is a world still defined by anarchy, it is not a world that appears to sit on the edge of some version of World War III. The issues that define foreign policy may have more to do with resource allocation and environmental protection than with negotiating a nuclear standoff. The end of the Cold War thus coincided with, and perhaps accelerated, the rise of other organizations that are now players in the field of international relations. While some of these institutions grew out of the end of World War II, their role in the world has perhaps been magnified since the 1990s.

Globalization and the Political Landscape

The question of modern world politics exists in the context of globalization: politically, economically, and culturally. In response to the acceleration of interdependence on a worldwide scale, both between human societies and between humankind and the environment, several entities designed to facilitate cooperation among world nations have been created. The term “global governance” is also used to name the process of designating laws, rules, or regulations intended for a global scale.

Global governance is not a world government, and even less democratic globalization; global governance would not be necessary were there a world government. The definition is flexible enough to apply whether the subject is general (e.g., global security and order) or specific (e.g., the World Health Organization’s Code on the Marketing of Breast Milk Substitutes). Therefore, global governance is thought to be an international process of consensus-forming, which generates guidelines and agreements that affect national governments, international corporations, and supranational bodies.

The idea of global governance began to take shape early in the twentieth century. International relations became a high priority as the world rebounded from two world wars. The question of the day was, “Can the world survive World War III?” To address this question, the United Nations was formed shortly after World War II.

Issues of war are not the only things addressed within a global governance context. Other objectives addressed by global cooperative organizations include economics (World Bank, International Monetary Fund, World Trade Organization), environmental management (United Nations Environment Programme, Intergovernmental Panel on Climate Change), and science and technological advances (World Trade Organization, United Nations Educational, Scientific and Cultural Organization).

Some organizations are opposed to global governance because they perceive it as an excuse for world leaders to spread capitalism despite the cost to human rights. They believe that international agreements and global financial institutions, such as the International Monetary Fund (IMF) and the World Trade Organization, undermine local decision-making. Corporations that use these institutions to support their own corporate and financial interests can exercise privileges that individuals and small businesses cannot, including the ability to move freely across borders, extract natural resources, and take advantage of human resources (such as low wages and child labor).

In light of the economic gap between rich and emerging countries, anti-globalists claim that free trade without measures to protect the environment and the health and wellbeing of workers will merely increase the power of industrialized nations and cause the decline of many developing nations. Specifically, corporations are accused of seeking to maximize profit at the expense of work safety conditions and standards, labor hiring and compensation standards, environmental conservation principles, and the integrity of national legislative authority, independence, and sovereignty.

Right or wrong, globalization is a fact of life. For example, consider the creation of the “global” scale. It is common now to think about problems having “global” significance and to look for policies to be implemented at a “global” level to solve them. However, the global scale did not exist until the age of European exploration, beginning in the late 1400s. Rapid advances in communication, transportation, technology, health, and science, all uniquely human creations, have led people to increasingly see the world as an abstract sphere that can be fought over and divided up. The COVID-19 pandemic is an excellent example of globalization and of how the world had to address a common “enemy.”

4.4 Challenges to Nation-States

In the world in which we live, the globe is divided up into sovereign nations. Remember that a sovereign state is one in which the state in the form of the government is the highest earthly power – there is no place to appeal a decision of the state except the state itself. So a sovereign state has defined borders that are respected by its neighbors, and control over its territory. In this part of the discussion, when we use the term “the state,” we mean a sovereign nation, not a political subdivision such as a U.S. or Mexican state. States in federal systems such as the U.S. and Mexico are formally referred to as sovereign states, but they are still ultimately dominated by national governments.

Moreover, this is where the challenges of international relations begin. In much of our discussion of politics, it is presumed that the state holds power and uses it as the people who control the state see fit. The power may be divided into different branches and levels of government, or not divided at all. Through mechanisms such as elections, different people may assume power, and state policies may change as a result of those elections. This presumption of a kind of state and a kind of allocation of power casts the study and practice of politics in a particular light: there is a way to resolve disputes; ultimately, somebody has the power to say yes or no; and, absent violent revolution, everybody has to go along. However, in a world of genuinely sovereign states, which recognize no higher authority than themselves, the system is best described as anarchy.

A sovereign state is said to be the ultimate authority within its boundaries, borders that are respected by its neighbors. The government is legitimate in the eyes of the citizens, who generally obey the law. The United States is a sovereign nation; so are France and Indonesia. Most of the 195 recognized nations on earth are, in fact, sovereign nations.

Somalia, on the east coast of Africa, is not quite. The nation is currently divided into three parts. First is the former legitimate government of Somalia, which controls very little of the country, mostly in the south, and is beset by various warlords and religious factions. In the middle is a functioning state calling itself Puntland, which does not seek independence from Somalia but, at this point, might as well be independent. In the north is a state calling itself Somaliland, which mainly functions as a sovereign nation although few other countries currently recognize it as such.

This world of sovereign states came together in a treaty called the Peace of Westphalia in 1648. That treaty ended the Thirty Years’ War, literally a three-decade-long conflict between Catholic and Protestant rulers and their subjects that tore apart what is now Germany and caused widespread suffering across Europe. Throughout history, people have found creative and largely pointless reasons for killing each other. However, the upshot of the treaty was that states had a right to order their own affairs – in this case, the mostly northern, Protestant principalities of Germany within what was then called the Holy Roman Empire. The treaty, in effect, created the notion of sovereignty as an acknowledged fact of international law and diplomacy, and the Europeans exported the idea from there to the rest of the world.

European colonialism, as when the European nation-states carved up Africa at the end of the 1800s, forced sovereignty onto sometimes disparate groups of people who had previously been more or less sovereign nations in their parts of the continent. Only two African states survived the onslaught: Liberia, which had been carved out earlier in the century by freed American slaves, and Ethiopia, which had been successfully fending off invaders for a thousand years. Although Africa had long been home to several substantial kingdoms and empires, the Europeans by the late 1800s had taken a technological leap forward that allowed them to conquer the continent in a few decades. The redrawing of the African map lumped together groups of people who had previously been parts of different states, creating political challenges when the Europeans were forced out after World War II.

A world comprising sovereign states means that there is no overarching world power that can tell them what to do. Why not, then, a world government to sort everything out? First, most if not all of the sovereign states would have to agree, and both political leaders and ordinary citizens tend to dislike having someone else tell them what to do. The farther away that someone is, the less they like it. Visions of black helicopters and invading U.N. troops were the stuff of many Americans’ paranoid nightmares in the 1970s and 1980s, despite the lack of any reality to this fear. Even if such a government could be established, the variety and diversity of the world would make it very difficult to rule, even as a highly democratic state. A world government would have to keep control and settle local and regional disputes, becoming, in the process, as despotic as the states it replaced, if not more so.

So, what we are left with are a lot of sovereign states, and a world system that is based on that single fact. Moreover, as there is no referee or overarching power, one state can erase another, as when Prussia, Russia, and Austria effectively erased Poland, once among the most significant states in Europe, from the map in 1795. The Poles and their language, culture, and traditions remained, but the Polish state did not reappear until 1918. This does not mean that a state can act without consequence. When Iraq invaded Kuwait in 1990, states from around the world united in the effort to drive the Iraqis out and re-establish Kuwaiti sovereignty. Later in the same decade, Europeans and Americans joined to end ethnic cleansing in what was then Yugoslavia. So no state operates in a vacuum.

What remained of Poland after its 18th-century partition, and what most defines a place such as Somalia today, is a nation. In the precise terminology of international relations, a state has defined borders, while a nation is a cultural, linguistic, or ethnic similarity among a group of people – a sense of community. A group of people that forms a nation may also want to control itself politically and become a state. So, for example, the Kurds, of whom around 30 million live in the Middle East, are a nation but not a state. They are divided chiefly among Turkey, Iraq, Syria, and Iran, and constitute the largest single ethnic group in the world without its own state. Kurdish separatists have fought for independence in Turkey and have all but carved out a sovereign state in the north of Iraq. However, at the moment, the Kurds remain a nation, and not quite a state.

Sometimes, we speak of a nation-state, an entity that combines elements of both these things. The United States, perhaps alone among the states of the world, is a nation based on ideology rather than an ethnicity. Still, the U.S. is sometimes given to nationalism, a sense of how to act and think, a sense of right and wrong, and a sense of separateness from others that includes a sentimental attachment to one’s homeland. Americans are not unique in this regard, but do tend to exhibit it more than others. This is sometimes called American exceptionalism, or the belief that the United States is unlike other states and in fact, has a unique destiny in the world. All states are unique in their ways. Whether the U.S. has a unique role to play is for you to decide.

Sometimes the system is dominated by a hegemon – a single state powerful enough to exert substantial influence on world politics. Hegemony means leadership or dominance of one person or state over others. In international relations, Great Britain exercised a degree of global hegemony in the 1800s; the United States has exercised a similar role in the late 20th and early 21st centuries. However, a hegemon is not all-powerful, and the price of maintaining hegemony can be very high. Consequently, states are either striving for hegemony or for a balance of power, so that no hegemon arises. The anarchic system of world politics is, in fact, anti-hegemonic, as it resists attempts by any one power to take over the whole world.

States interact through diplomacy, international law, and war. The Prussian military strategist Carl von Clausewitz (1780–1831) famously wrote that “war is merely the continuation of policy by other means.” Clausewitz was not exactly a warmonger, so his famous quote probably should not be taken to mean that he thought it was fine to go on the warpath. However, in contemporary international politics, war can be seen as the failure of policy, given the extraordinarily high cost of modern warfare.

To that end, states often prefer to find other ways to solve disputes. For that reason, states pay some attention to international law, which seeks to constrain the behavior of states. International law exists through treaties and agreements negotiated by states, and through rule-making mechanisms in multinational agencies and groups. States also attempt, through diplomacy, to convince other states to make choices that will be beneficial to the state, the region, or the world. Diplomacy works when both sides are rational, in the sense that they each have some understanding of their self-interest.

Israel and Palestine

The story of the Israel and Palestine conflict goes back thousands of years and is rooted in religious and cultural differences. However, today’s conflict is about more than religion; it is about water, natural resources, land use, infrastructure, and Israeli settlements. Many would argue that the modern conflict began following World War II, when the United Nations partitioned Palestine into Israeli and Palestinian states. Others, especially Jews, believe the story goes back further to early biblical times; they claim that God gave them the land thousands of years ago.

There is a growing debate around the world about what needs to be done to end the conflict between the people of Israel and Palestine. There are three broad options: 1) create a “two-state solution,” in which the Israeli people keep most of Israel but give the Palestinians the West Bank and possibly the Gaza Strip; 2) integrate Palestinians into Israel as legal citizens, which would make them the majority within Israel; or 3) maintain the segregation between Israelis and Palestinians as it currently exists and be considered an apartheid state by the global community.

Collective Military Force

A collective military force is what arises when countries decide that it is in their best interest to pool their militaries in order to achieve a common goal. The use of collective military force in the global environment involves two primary concepts: collective security and collective defense. These concepts are similar but not identical.

Collective Security

Collective security can be understood as a security arrangement, regional or global, in which each state in the system accepts that the security of one is the concern of all, and agrees to join in a collective response to threats to, and breaches of, the peace. Collective security is more ambitious than collective defense in that it seeks to encompass the totality of states within a region or indeed globally, and to address a wide range of possible threats.

Collective security is achieved by setting up an international cooperative organization, under the auspices of international law. This gives rise to a form of international collective governance, albeit limited in scope and effectiveness. The collective security organization then becomes an arena for diplomacy.

The UN and Collective Security

The UN is often provided as the primary example of collective security. By employing a system of collective security, the UN hopes to dissuade any member state from acting in a manner likely to threaten peace, thereby avoiding any conflict.

Collective Defense

Collective defense is an arrangement, usually formalized by a treaty and an organization, among participant states that commit support in defense of a member state if it is attacked by another state outside the organization.

NATO and Collective Defense

The North Atlantic Treaty Organization (NATO) is the best known collective defense organization. Its now-famous Article V calls on (but does not fully commit) member states to assist another member under attack. This article was invoked after the September 11 attacks on the United States, after which other NATO members assisted in the US War on Terror in Afghanistan. As a global military and economic superpower, the US has taken charge of leading many of NATO’s initiatives and interventions.

Benefits and Drawbacks to Collective Defense

Collective defense entails benefits as well as risks. On the one hand, by combining and pooling resources, it can reduce any single state’s cost of providing adequately for its security. Smaller members of NATO, for example, have leeway to invest a more significant proportion of their budget on non-military priorities, such as education or health, since they can count on other members to come to their defense, if needed.

On the other hand, collective defense also involves risky commitments. Member states can become embroiled in costly wars in which they are neither the direct victim nor the aggressor. In the First World War, countries in the collective defense arrangement known as the Triple Entente (France, Britain, Russia) were pulled into war quickly when Russia started full mobilization against Austria-Hungary, whose ally Germany subsequently declared war on Russia.

Modern Influences on the Political Landscape

It has been argued that the fall of communism and the dissolution of the Soviet Union in 1991 caused the most substantial geopolitical upheaval since World War II, dramatically changing the political map and the world balance of power. The disbanding of Cold War alliances led to the creation of 15 independent states, including Armenia, Kazakhstan, Russia, and Ukraine. In the past twenty-five years, these sweeping geopolitical changes have resulted in a dramatic shift from military power to economic power. For example, Russia lost significant economic power after the fall of the Soviet Union. However, oil is an abundant natural resource in Russia, and as the price of oil increased, the Russian economy began to rebound. This rebound has provided vast amounts of money to rebuild the country’s infrastructure, military, and economy, and has thus dramatically improved its influence in the world.

As Russia’s economy has grown, so has the desire to reunite many former USSR states under Russian rule. In 2014, during civil unrest in Ukraine, Russia moved troops into the Crimean Peninsula, telling the world community that it was to protect the nation’s cultural and economic interests in the region. Considering the conflict from a spatial perspective makes it easier to understand why this region is so important to Russia.

Located on the northern coast of the Black Sea, Crimea was a Russian territory until 1954, when it was given to Ukraine by Soviet leader Nikita Khrushchev in an attempt to distribute resources more equitably within the USSR. When the Soviet Union broke up more than thirty years later, Crimea became part of the newly independent Ukraine rather than Russia. In 2014, it was reported that nearly 60 percent of the population on the Crimean Peninsula still spoke Russian and considered themselves ethnic Russians.

Language and culture are only part of the story. Consider this:

  • The Crimean Peninsula has been home to Russia’s Black Sea naval fleet since the 18th century.
  • The small waterway between Crimea and the Russian mainland is the only access to the Azov Sea, the western heart of Russia’s oil and natural gas distribution to Europe.

Additionally, Russia’s annexation of Crimea has thrown a spotlight on other disputed regions whose unresolved status could be a spark for conflict in the region. Transnistria is a slim sliver of Moldova that split away from the country as the Soviet Union collapsed and has effectively been a Russian- and Ukrainian-speaking enclave ever since.

Transnistria residents aspire to join Russia, and the Moldovan government has already warned Russia not to attempt a Crimean-style annexation there. Other hot spots include Abkhazia, which broke away from Georgia in 1993. South Ossetia has been the subject of an unresolved conflict with Georgia since 1992 and provided Russia justification for a short war with Georgia in 2008. Ethnic Armenians have controlled Nagorno-Karabakh since 1994, even though Azerbaijan claims the territory, and the presence of Russia’s 102nd Military Base in Armenia prompts speculation that Russia could again intervene there. Ethnicity trumps nationality in these areas, and the legacy of mixed communities hitherto part of the Russian Imperial and Soviet empires is coming back to haunt international relations.

Another modern influence on the political landscape comes from the rise of democratic governments. In a democracy, most governments draw up functional regions called electoral districts (or voting districts) to determine who may vote for whom, which areas a specific government office represents, and which laws govern which regions. The smallest American electoral region is the precinct, which, at least in urban areas, is roughly “your neighborhood,” usually consisting of a few city blocks. Citizens may vote only in the precinct assigned to their home address, and this precinct is typically part of multiple, larger, nested electoral districts, such as wards, townships, counties, congressional districts, and states. Most of the time, electoral districts have roughly the same number of people in each equivalent district. For example, in 2011, each of California’s 80 State Assembly districts had between 461,000 and 470,000 people; each district had almost the same population as its neighbor. Efforts are made to keep all such districts similarly sized, so when a district loses or gains population, the boundaries must be redrawn to ensure even representation and avoid over- or underrepresentation, a condition called malapportionment.
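As a quick illustration of the arithmetic behind malapportionment, the sketch below computes each district’s percent deviation from the ideal (average) district population. The district populations here are hypothetical, chosen only to mirror the narrow spread described above, not actual California figures.

```python
# Sketch: measuring malapportionment as percent deviation from the
# "ideal" (average) district population. The figures are hypothetical.

districts = {
    "District 1": 461_000,
    "District 2": 465_500,
    "District 3": 470_000,
}

ideal = sum(districts.values()) / len(districts)  # 465,500 here

for name, pop in districts.items():
    deviation = 100 * (pop - ideal) / ideal
    print(f"{name}: {pop:,} people ({deviation:+.2f}% from ideal)")

# A common summary statistic: the gap between the largest and smallest
# districts, expressed as a share of the ideal size.
spread = 100 * (max(districts.values()) - min(districts.values())) / ideal
print(f"Total deviation: {spread:.2f}%")
```

Here the largest and smallest districts differ by under 2 percent of the ideal size; a much larger spread would signal malapportionment and call for redrawing the boundaries.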

Every ten years, after the decennial U.S. Census is completed, the U.S. Constitution requires that electoral districts be redrawn to follow the census results. This process, known as political redistricting, involves a great deal of geographic strategizing, and its outcome fundamentally shapes American politics. In most U.S. states, the state legislature controls the redistricting process, and this fact opens the process to unfair political practices. The redistricting process is so important because elections are heavily influenced by how the boundaries of electoral districts are drawn. Political groups that control the placement of boundaries are far more likely to control who gets elected, which laws get passed, and how tax money is collected and spent.

Each redistricting cycle, politicians in many locations are accused of purposefully constructing political district boundaries to favor one group (e.g., Democrats, Latinos, labor unions, or gun advocates) over another. The construction of unfair districts is called gerrymandering. The odd term “gerrymander” comes from a newspaper story that characterized the unfair redistricting map of the South Essex district of Massachusetts in 1812. The map of the redrawn districts strongly favored Massachusetts’ governor at the time, Elbridge Gerry. The shape of one district was so distorted that reporters suggested it looked like a salamander, providing the two halves of the term used today to describe the creation of unfair political districts.

There are several different strategies that politicians use to gerrymander districts. Where there is little cooperation between political parties (or other interest groups), politicians may pursue strategies that aggressively seek to limit the political influence of opposition groups.

If the opposition (or ethnic minority) party is small enough, then the controlling group may draw lines through the minority areas, minimizing the opposition’s ability to influence the outcome of elections in as many regions as possible. This process, called cracking, has commonly been used to divide inner-city ethnic minority groups into multiple districts, each dominated numerically by whites. If the opposition grows too numerous to split, then the group controlling the redistricting process may draw district lines so that the opposition is dominant in a few districts, or even a single district, minimizing the power of the opposition in the overall system. That strategy is called packing. Even a statistical minority can retain power by carefully packing the majority group into cleverly drawn district boundaries.
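A small numerical sketch, again in Python with hypothetical voters, makes the difference between cracking and packing vivid: the same 40 percent of voters yields either zero seats or two safe seats, depending entirely on where the lines are drawn.

```python
# A minimal sketch, with hypothetical numbers, of how cracking and packing
# change outcomes without changing a single vote.
# 50 voters: 30 favor the majority party, 20 favor the minority party.
# Five districts of 10 voters each; a district is won with 6 or more votes.

def minority_seats(plan):
    """Count districts won by the minority party, given its votes per district."""
    return sum(1 for votes in plan if votes >= 6)

# Cracking: the minority's 20 voters are spread evenly, 4 per district,
# so the minority never reaches the 6 votes needed to win anywhere.
cracked = [4, 4, 4, 4, 4]

# Packing: the minority's 20 voters are concentrated into two districts,
# yielding 2 safe (and uncompetitive) seats and no influence elsewhere.
packed = [10, 10, 0, 0, 0]

print(minority_seats(cracked))  # 0 seats from 40 percent of the vote
print(minority_seats(packed))   # 2 of 5 seats, all of them uncompetitive
```

The point of the sketch is not the specific numbers but the mechanism: district lines, not vote totals, determine the seat count.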

There are dozens of other techniques by which one group can limit the political power of others by manipulating election boundaries. However, the most common unfairly drawn electoral district is likely the so-called sweetheart gerrymander, drawn up cooperatively by incumbents from opposing political parties in order to maintain the status quo. This involves drawing safe districts, which favor one party over the other and nearly guarantee uncompetitive general elections, though primary elections may still be competitive. The most controversial districts are those based on race, and debate continues over whether minority groups benefit from or are harmed by minority-majority districts.

Due to the perceived negative issues associated with gerrymandering and its effect on competitive elections and democratic accountability, numerous countries have enacted reforms making the practice either more difficult or less effective. Countries such as the U.K., Australia, Canada, and most of those in Europe have transferred responsibility for defining constituency boundaries to neutral or cross-party bodies.

Under these systems, an independent, and presumably objective, commission is created specifically for redistricting, rather than having the legislature do it. This is the system used in the United Kingdom, where the independent boundary commissions determine the boundaries for constituencies in the House of Commons and the devolved legislatures, subject to ratification by the body in question (almost always granted without debate). A similar situation exists in Australia, where the independent Australian Electoral Commission and its state-based counterparts determine electoral boundaries for federal, state, and local jurisdictions.

To help ensure neutrality, members of a redistricting agency may be appointed from relatively apolitical sources such as retired judges or longstanding members of the civil service, possibly with requirements for adequate representation among competing political parties. Additionally, members of the board can be denied access to information that might aid in gerrymandering, such as the demographic makeup or voting patterns of the population.

Chapter 3: Cultural Patterns and Processes

Understanding the components and regional variations of cultural patterns and processes is critical to human geography. We studied the concepts of culture and cultural traits and learned how geographers assess the spatial and place dimensions of cultural groups as defined by language, religion, ethnicity, and gender, in the present as well as the past.

This module also explored cultural interaction at various scales, along with the adaptations, changes, and conflicts that may result. The geographies of language, religion, ethnicity, and gender are studied to identify and analyze the patterns and processes of cultural differences. We distinguished between languages and dialects, ethnic religions and universal religions, and folk and popular cultures, as well as among ethnic political movements. These distinctions help students understand the forces that affect the geographic patterns of each cultural characteristic.

Another significant emphasis of the module was the way culture shapes relationships between humans and the environment. We learned how culture is expressed in landscapes and how land use, in turn, represents cultural identity. Built environments enable the geographer to interpret cultural values, tastes, symbolism, and beliefs.

3.1 Understanding Race and Ethnicity

Trayvon Martin was a seventeen-year-old black teenager. On the evening of February 26, 2012, he was visiting his father and his father’s fiancée in the multi-ethnic gated community in Sanford, Florida, where she lived. Trayvon left her home on foot to buy a snack from a nearby convenience store. As he was returning, George Zimmerman, a white Hispanic male and the community’s neighborhood watch program coordinator, noticed him. In light of a recent rash of break-ins, Zimmerman called the police to report a person acting suspiciously, as he had done on many other occasions. The 911 operator told Zimmerman not to follow the teen, but soon afterward Zimmerman and Martin had a physical confrontation. According to Zimmerman, Martin attacked him, and in the ensuing scuffle Martin was shot and killed (CNN Library 2014).

A public outcry followed Martin’s death. There were allegations of racial profiling—the use by law enforcement of race alone to determine whether to stop and detain someone—a national discussion about “Stand Your Ground Laws,” and a failed lawsuit in which Zimmerman accused NBC of airing an edited version of the 911 call that made him appear racist. Zimmerman was not arrested until April 11, when he was charged with second-degree murder by special prosecutor Angela Corey. In the ensuing trial, he was found not guilty (CNN Library 2014).

The shooting, the public response, and the trial that followed offer a snapshot of the social constructs of race. Do you think race played a role in Martin’s death or in the public reaction to it? Do you think race had any influence on the initial decision not to arrest Zimmerman, or on his later acquittal? Does society fear black men, leading to racial profiling at an institutional level? What about the role of the media? Was there a deliberate attempt to manipulate public opinion? If you were a member of the jury, would you have convicted George Zimmerman?

Defining Race and Ethnicity

The idea of race refers to superficial physical differences that a particular society considers significant, while ethnicity describes shared culture. Moreover, the term “minority group” describes a subordinate group, one that lacks power in society regardless of skin color or country of origin. For example, in modern U.S. history, the elderly might be considered a minority group due to a diminished status that results from widespread prejudice and discrimination against them. Ten percent of nursing home staff admitted to physically abusing an older person in the past year, and 40 percent admitted to committing psychological abuse (World Health Organization 2011). In this chapter, we focus on racial and ethnic minorities.

In biological terms, race refers to classifying humans by physical characteristics presumed to result from shared genetic ancestry; in practice, however, these categories are socially constructed. Shared genetic ancestry is a result of geographical isolation, and geographic isolation has significantly decreased in most areas of the world since the era of colonization and even before then. Less geographic isolation results in the mixing of racial groups. Thus, classifying people by their race with any accuracy is difficult.

Most biologists, geographers, and social scientists have taken an official position rejecting the biological explanations of race. Over time, the typology of race that developed during early racial science has fallen into disuse, and the social construction of race is a more sociological way of understanding racial categories. Research in this school of thought suggests that race is not biologically identifiable and that previous racial categories were arbitrarily assigned, based on pseudoscience, and used to justify racist practices (Omi and Winant 1994; Graves 2003). When considering skin color, for example, the social construction of race perspective recognizes that the relative darkness or fairness of skin is an evolutionary adaptation to the available sunlight in different regions of the world.

Contemporary conceptions of race, therefore, which tend to be based on socioeconomic assumptions, illuminate how far removed modern understanding of race is from biological qualities. In modern society, some people who consider themselves “white” actually have more melanin (a pigment that determines skin color) in their skin than other people who identify as “black.” In some countries, such as Brazil, class is more important than skin color in determining racial categorization. People with high levels of melanin may consider themselves “white” if they enjoy a middle-class lifestyle. On the other hand, someone with low levels of melanin might be assigned the identity of “black” if he or she has little education or money.

The social construction of race is also reflected in the way names for racial categories change with changing times. It is worth noting that race, in this sense, is also a system of labeling that provides a source of identity; specific labels fall in and out of favor during different social eras. For example, the category “Negroid,” popular in the nineteenth century, evolved into the term “negro” by the 1960s, and then this term fell from use and was replaced with “African American.” This latter term was intended to celebrate the multiple identities that a black person might hold, but the word choice is a poor one: it lumps together a large variety of ethnic groups under an umbrella term while excluding others who could accurately be described by the label but who do not meet the spirit of the term. For example, actress Charlize Theron is a blonde-haired, blue-eyed “African American.”

PBS has created an exciting website called RACE – The Power of an Illusion that looks at whether race indeed is a biological characteristic of humans or a social construct. Take the Sorting People quiz and watch The Human Family Tree and Black in Latin America: An Island Divided to “witness” how migration and geography play a role in the complex issues surrounding race and ethnicity. Pay attention to how the racial and ethnic landscape of the island of Hispaniola impacts cultural identity and the geopolitics both within Hispaniola and beyond its shores.

Ethnicity is a term that describes shared culture – the practices, values, and beliefs of a group. This culture might include shared language, religion, and traditions, among other commonalities. Like race, the term ethnicity is difficult to describe, and its meaning has changed over time. Moreover, as with race, individuals may be identified or self-identify with ethnicities in complex, even contradictory, ways. For example, ethnic groups such as Irish, Italian American, Russian, Jewish, and Serbian might all be groups whose members are predominantly included in the “white” racial category.

Shared geography, language, and religion can often, but not always, factor into ethnic group categorizations. Ethnic groups distinguish themselves differently from one period to another. Ethnic identity can be used by individuals to identify themselves with others who have shared geographic, cultural, historical, linguistic, and religious ancestry; however, like race, ethnicity has been defined by the stereotypes created by dominant groups as a method of “Othering.” Othering is a process in which one group, usually the dominant group, views and represents themselves as “us/same” and another group as “them/other.”

Ethnicity, like race, continues to be an identification method that individuals and institutions use today—whether through the census, affirmative action initiatives, nondiscrimination laws, or simply in day-to-day personal relations.

Defining Minority Groups

Sociologist Louis Wirth (1945) defined a minority group as “any group of people who, because of their physical or cultural characteristics, are singled out from the others in the society in which they live for differential and unequal treatment, and who therefore regard themselves as objects of collective discrimination.” The term minority connotes discrimination; in its use by social scientists, the term subordinate group can be used interchangeably with minority, while the term dominant group is often substituted for the group that is in the majority. These definitions correlate to the concept that the dominant group is that which holds the most power in a given society, while subordinate groups are those who lack power compared to the dominant group.

Note that being a numerical minority is not a characteristic of being a minority group; sometimes, larger groups can be considered minority groups due to their lack of power. It is the lack of power that is the predominant characteristic of a minority, or subordinate group. For example, consider apartheid in South Africa, in which a numerical majority (the black inhabitants of the country) were exploited and oppressed by the white minority.

According to Charles Wagley and Marvin Harris (1958), a minority group is distinguished by five characteristics: (1) unequal treatment and less power over their lives, (2) distinguishing physical or cultural traits like skin color or language, (3) involuntary membership in the group, (4) awareness of subordination, and (5) high rate of in-group marriage. Additional examples of minority groups might include the LGBTQ+ community, religious practitioners whose faith is not widely practiced where they live, and people with disabilities.

Scapegoat theory, developed initially from Dollard’s (1939) Frustration-Aggression theory, suggests that the dominant group will displace its unfocused aggression onto a subordinate group. History has shown us many examples of the scapegoating of a subordinate group. An example from the last century is the way Adolf Hitler was able to blame the Jewish population for Germany’s social and economic problems. In the United States, recent immigrants have frequently been the scapegoat for the nation’s—or an individual’s—woes. Many states have enacted laws to disenfranchise immigrants; these laws are popular because they let the dominant group scapegoat a subordinate group.

Stereotypes, Prejudice, and Discrimination

The terms stereotype, prejudice, discrimination, and racism are often used interchangeably in everyday conversation. Stereotypes are oversimplified generalizations about groups of people. Stereotypes can be based on race, ethnicity, age, gender, sexual orientation – almost any characteristic. They may be positive (usually about one’s own group, such as when women suggest they are less likely to complain about physical pain) but are often negative (usually toward other groups, such as when members of a dominant racial group suggest that a subordinate racial group is stupid or lazy). In either case, the stereotype is a generalization that does not take individual differences into account.

New stereotypes are rarely created; instead, they are recycled from subordinate groups that have assimilated into society and are reused to describe newly subordinate groups. For example, many stereotypes that are currently used to characterize black people were used earlier in American history to characterize Irish and Eastern European immigrants.

Prejudice and Racism

Prejudice refers to the beliefs, thoughts, feelings, and attitudes that someone holds about a group. Prejudice is not based on experience; instead, it is a prejudgment, originating outside experience. A 1970 documentary called Eye of the Storm illustrates how prejudice develops, by showing how defining one category of people as superior (children with blue eyes) results in prejudice against people who are not part of the favored category.

While prejudice is not necessarily specific to race, racism is a stronger type of prejudice used to justify the belief that one racial category is somehow superior or inferior to others; it is also a set of practices used by a racial majority to disadvantage a racial minority. The Ku Klux Klan is an example of a racist organization; its members’ belief in white supremacy has encouraged over a century of hate crime and hate speech.

Institutional racism refers to how racism is embedded in the fabric of society. For example, the disproportionate number of black men arrested, charged, and convicted of crimes may reflect racial profiling, a form of institutional racism.

Colorism is another kind of prejudice, in which someone believes one type of skin tone is superior or inferior to another within a racial group. Studies suggest that darker skinned African Americans experience more discrimination than lighter skinned African Americans (Herring, Keith, and Horton 2004; Klonoff and Landrine 2000). For example, if a white employer believes a black employee with a darker skin tone is less capable than a black employee with a lighter skin tone, that is colorism. At least one study suggested that colorism affected racial socialization, with darker-skinned black male adolescents receiving more warnings about the danger of interacting with members of other racial groups than did lighter-skinned black male adolescents (Landor et al. 2013).

Discrimination

While prejudice refers to biased thinking, discrimination consists of actions against a group of people. Discrimination can be based on age, religion, health, and other indicators; race-based laws against discrimination strive to address this set of social problems.

Discrimination based on race or ethnicity can take many forms, from unfair housing practices to biased hiring systems. Overt discrimination has long been part of U.S. history. In the late nineteenth century, it was not uncommon for business owners to hang signs that read, “Help Wanted: No Irish Need Apply.” Moreover, southern Jim Crow laws, with their “Whites Only” signs, exemplified overt discrimination that is not tolerated today.

However, we cannot erase discrimination from our culture just by enacting laws to abolish it. Even if a society managed to eradicate racism from each individual’s psyche, society itself would maintain it. The social scientist Émile Durkheim called racism a social fact, meaning that it does not require the action of individuals to continue. The reasons for this are complex and relate to the educational, criminal, economic, and political systems that exist in our society.

For example, when a newspaper identifies by race individuals accused of a crime, it may enhance stereotypes of a particular minority. Another example of racist practices is racial steering, in which real estate agents direct prospective homeowners toward or away from specific neighborhoods based on their race. Racist attitudes and beliefs are often more insidious and harder to pin down than specific racist practices.

Prejudice and discrimination can overlap and intersect in many ways. To illustrate, here are four examples of how prejudice and discrimination can occur. Unprejudiced nondiscriminators are open-minded, tolerant, and accepting individuals. Unprejudiced discriminators might be those who unthinkingly practice sexism in their workplace by not considering females for certain positions that have traditionally been held by men. Prejudiced nondiscriminators are those who hold racist beliefs but do not act on them, such as a racist store owner who serves minority customers. Prejudiced discriminators include those who actively make disparaging remarks about others or who perpetrate hate crimes.

Discrimination also manifests in different ways. The scenarios above are examples of individual discrimination, but other types exist. Institutional discrimination occurs when a societal system has developed with embedded disenfranchisement of a group, such as the U.S. military’s historical nonacceptance of minority sexualities (the “don’t ask, don’t tell” policy reflected this norm).

Institutional discrimination can also include the promotion of one group’s status over another’s, as in the case of white privilege, the set of benefits people receive simply by being part of the dominant group.

While most white people are willing to admit that nonwhite people live with a set of disadvantages due to the color of their skin, very few are willing to acknowledge the benefits they receive.

Theories of Race and Ethnicity

We can examine issues of race and ethnicity through three major perspectives: functionalism, conflict theory, and symbolic interactionism. As you read through these theories, ask yourself which one makes the most sense and why. Do we need more than one theory to explain racism, prejudice, stereotypes, and discrimination?

Functionalism

In the view of functionalism, racial and ethnic inequalities must have served an essential function in order to exist as long as they have. This concept, of course, is problematic. How can racism and discrimination contribute positively to society? A functionalist might look at “functions” and “dysfunctions” caused by racial inequality. Nash (1964) focused his argument on the way racism is functional for the dominant group, for example, suggesting that racism morally justifies a racially unequal society. Consider the way slave owners justified slavery in the antebellum South, by suggesting that black people were fundamentally inferior to whites and preferred slavery to freedom.

Another way to apply the functionalist perspective to racism is to discuss the way racism can contribute positively to the functioning of society by strengthening bonds between in-group members through the ostracism of out-group members. Consider how a community might increase solidarity by refusing to allow outsiders access. On the other hand, Rose (1951) suggested that dysfunctions associated with racism include the failure to take advantage of talent in the subjugated group, and that society must divert from other purposes the time and effort needed to maintain artificially constructed racial boundaries. Consider how much money, time, and effort went toward maintaining separate and unequal educational systems before the civil rights movement.

Conflict Theory

Conflict theories are often applied to inequalities of gender, social class, education, race, and ethnicity. A conflict theory perspective of U.S. history would examine the numerous past and current struggles between the white ruling class and racial and ethnic minorities, noting specific conflicts that have arisen when the dominant group perceived a threat from the minority group. In the late nineteenth century, the rising power of black Americans after the Civil War resulted in draconian Jim Crow laws that severely limited black political and social power. For example, Vivien Thomas (1910–1985), the black surgical technician who helped develop the groundbreaking surgical technique that saves the lives of “blue babies,” was classified as a janitor for many years and paid as such, even though he was conducting complicated surgical experiments. The years since the Civil War have shown a pattern of attempted disenfranchisement, with gerrymandering and voter suppression efforts aimed at predominantly minority neighborhoods.

The social scientist Patricia Hill Collins (1990) further developed intersection theory, originally articulated in 1989 by Kimberlé Crenshaw, which suggests we cannot separate the effects of race, class, gender, sexual orientation, and other attributes. When we examine race and how it can bring us both advantages and disadvantages, it is essential to acknowledge that the way we experience race is shaped, for example, by our gender and class. Multiple layers of disadvantage intersect to create the way we experience race. For example, if we want to understand prejudice, we must understand that the prejudice focused on a white woman because of her gender is very different from the layered prejudice focused on a poor Asian woman, who is affected by stereotypes related to being poor, being a woman, and her ethnic status.

Interactionism

For symbolic interactionists, race and ethnicity provide powerful symbols as sources of identity. Some interactionists propose that the symbols of race, not race itself, are what lead to racism. The famed interactionist Herbert Blumer (1958) suggested that racial prejudice is formed through interactions between members of the dominant group: without these interactions, individuals in the dominant group would not hold racist views. These interactions contribute to an abstract picture of the subordinate group that allows the dominant group to support its view of the subordinate group and thus maintain the status quo.

An example of this might be an individual whose beliefs about a particular group are based on images conveyed in popular media, believed unquestioningly because the individual has never personally met a member of that group. Another way to apply the interactionist perspective is to look at how people define their own race and the race of others. As we saw in discussing the social construction of race, some people who claim a white identity have a greater amount of skin pigmentation than some people who claim a black identity; how did they come to define themselves as black or white?

Culture of Prejudice

Culture of prejudice refers to the theory that prejudice is embedded in our culture. We grow up surrounded by images of stereotypes and casual expressions of racism and prejudice. Consider the casually racist imagery on grocery store shelves or the stereotypes that fill popular movies and advertisements. It is easy to see how someone living in the Northeastern United States, who may know no Mexican Americans personally, might gain a stereotyped impression from such sources as Speedy Gonzalez or Taco Bell’s talking Chihuahua. Because we are all exposed to these images and thoughts, it is impossible to know to what extent they have influenced our thought processes.

Intergroup Relationships

Intergroup relations (relationships between different groups of people) range along a spectrum between tolerance and intolerance. The most tolerant form of intergroup relations is pluralism, in which no distinction is made between minority and majority groups, but instead, there is equal standing. At the other end of the continuum are amalgamation, expulsion, ethnic cleansing, and even genocide – stark examples of intolerant intergroup relations.

Ethnic Cleansing and Genocide

Measured by war, the 20th century was the deadliest in human history. It experienced two world wars; multiple civil wars; genocides in Rwanda (against Tutsis and moderate Hutus), Sudan, and Yugoslavia; and the Holocaust, which decimated the Jewish population of Europe during WWII. In addition to WWI and WWII, the century experienced the Korean War, the Vietnam War, the Cold War, and the first Gulf War. It also saw regional and civil conflicts such as those in the Congo (where 6 million people died), as well as an upsurge in child soldiers and modern slavery.

Some of the worst human acts have involved ethnic cleansing and genocide. The United Nations Security Council established Resolution 780, under which ethnic cleansing was defined as “a purposeful policy designed by one ethnic or religious group to remove by violent and terror-inspiring means the civilian population of another ethnic or religious group from certain geographic areas.”

Genocide is usually defined as the intentional killing of large numbers of people targeted because of their ethnicity, political ideology, religion, or culture. At first glance, ethnic cleansing and genocide appear similar. With ethnic cleansing, the aim is to remove a group of people with similar ethnic backgrounds from a specific geographic region by any means possible. This could include forced migration, terror and rape, destruction of villages, and large-scale death. With genocide, the real intent is the death of a group of people at any scale possible, until they are extinct. This has happened many times in recent history, including in Bosnia-Herzegovina, Burma, Cambodia, the Democratic Republic of the Congo, Rwanda, Sudan, and now Syria. Sadly, most of these ethnic conflicts were not officially declared genocides by the United Nations Security Council, even though the conditions on the ground and the reasons they occurred fit the definition.

Possibly the most well-known case of genocide is Hitler’s attempt to exterminate the Jewish people in the first part of the twentieth century. Also known as the Holocaust, the explicit goal of Hitler’s “Final Solution” was the eradication of European Jewry, as well as the destruction of other minority groups such as Catholics, people with disabilities, and LGBTQ+ individuals. With forced emigration, concentration camps, and mass executions in gas chambers, Hitler’s Nazi regime was responsible for the deaths of 12 million people, 6 million of whom were Jewish. Hitler’s intent was clear, and the high Jewish death toll certainly indicates that Hitler and his regime committed genocide. However, how do we understand genocide that is not so overt and deliberate?

The treatment of aboriginal Australians is also an example of genocide committed against indigenous people. Historical accounts suggest that between 1824 and 1908, white settlers killed more than 10,000 native Aborigines in Tasmania and Australia (Tatz 2006).

Another example is the European colonization of North America. Some historians estimate that Native American populations dwindled from approximately 12 million people in the year 1500 to barely 237,000 by the year 1900 (Lewy 2004). European settlers coerced American Indians off their lands, often causing thousands of deaths in forced removals, such as occurred in the Cherokee or Potawatomi Trail of Tears.

Settlers also enslaved Native Americans and forced them to give up their religious and cultural practices. However, the primary cause of Native American death was neither slavery nor war nor forced removal: it was the introduction of European diseases and Indians’ lack of immunity to them. Smallpox, diphtheria, and measles flourished among indigenous American tribes who had no exposure to the diseases and no ability to fight them. Quite simply, these diseases decimated the tribes. How planned this genocide was remains a topic of contention. Some argue that the spread of disease was an unintended effect of conquest, while others believe it was intentional, citing rumors of smallpox-infected blankets being distributed as “gifts” to tribes.

Genocide is not just a historical concept; it is practiced today. Recently, ethnic and geographic conflicts in the Darfur region of Sudan have led to hundreds of thousands of deaths. As part of an ongoing land conflict, the Sudanese government and their state-sponsored Janjaweed militia have led a campaign of killing, forced displacement, and systematic rape of Darfuri people. Although a treaty was signed in 2011, the peace is fragile.

Today, there are a few situations that may be classified as genocide. The first is in Myanmar, where the Buddhist government has been systematically driving out the Muslim Rohingya population.

There is also the situation in Yemen, where Saudi Arabia is bombing cities and towns with U.S. weaponry to target Iranian-backed militants; as of 2019, the campaign had killed over 10,000 people and injured over 40,000. Many human rights advocates claim the situation is approaching a genocide. On top of that, the civil war is creating conditions that could lead to the largest famine the world has seen in over a century.

In July 2011, South Sudan became the world’s newest country when it voted to break away from Sudan. Yet by December 2013, fighting between the new government and rebel fighters had created a civil war within the new country. Thousands of civilians have been killed, with millions more displaced by the violence. As in Yemen, there is now growing concern that the civil war will create a nationwide famine.

Expulsion

Expulsion refers to a subordinate group being forced, by a dominant group, to leave a particular area or country. As seen in the examples of the Trail of Tears and the Holocaust, expulsion can be a factor in genocide. However, it can also stand on its own as a destructive group interaction. Expulsion has often occurred historically with an ethnic or racial basis. In the United States, President Franklin D. Roosevelt issued Executive Order 9066 in 1942, after the Japanese government’s attack on Pearl Harbor. The Order authorized the establishment of internment camps for anyone with as little as one-eighth Japanese ancestry (i.e., one great-grandparent who was Japanese). Over 120,000 legal Japanese residents and Japanese U.S. citizens, many of them children, were held in these camps for up to four years, even though there was never any evidence of collusion or espionage. (In fact, many Japanese Americans continued to demonstrate their loyalty to the United States by serving in the U.S. military during the War.) In the 1990s, the U.S. executive branch issued a formal apology for this expulsion; reparation efforts continue today.

Segregation

Segregation refers to the physical separation of two groups, particularly in residence, but also in the workplace and social functions. It is essential to distinguish between de jure segregation (segregation that is enforced by law) and de facto segregation (segregation that occurs without laws but because of other factors). A stark example of de jure segregation is the apartheid system of South Africa, which existed from 1948 to 1994. Under apartheid, black South Africans were stripped of their civil rights and forcibly relocated to areas that segregated them physically from their white compatriots. Only after decades of degradation, violent uprisings, and international advocacy was apartheid finally abolished.

De jure segregation occurred in the United States for many years after the Civil War. During this time, many former Confederate states passed Jim Crow laws that required segregated facilities for blacks and whites. These laws were upheld in the 1896 landmark Supreme Court case Plessy v. Ferguson, which held that “separate but equal” facilities were constitutional. For the next five decades, blacks were subjected to legalized discrimination, forced to live, work, and go to school in facilities that were separate but unequal. It was not until 1954 and the Brown v. Board of Education case that the Supreme Court declared that “separate educational facilities are inherently unequal,” thus ending de jure segregation in the United States.

De facto segregation, however, cannot be abolished by any court mandate. Segregation is still alive and well in the United States, with different racial or ethnic groups often segregated by neighborhood, borough, or parish. Social scientists use segregation indices to measure the residential separation of different racial groups across an area. The indices employ a scale from zero to 100, where zero is the most integrated and 100 is the least. In the New York metropolitan area, for instance, the black-white segregation index was seventy-nine for the years 2005–2009. This means that 79 percent of either blacks or whites would have to move in order for each neighborhood to have the same racial balance as the whole metro region (Population Studies Center 2010).
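The text does not name the specific index, but the “percent who would have to move” interpretation matches the widely used index of dissimilarity. For two groups spread across $n$ neighborhoods, it can be written (on the 0–100 scale used above) as

$$D = \frac{100}{2} \sum_{i=1}^{n} \left| \frac{b_i}{B} - \frac{w_i}{W} \right|,$$

where $b_i$ and $w_i$ are the two groups’ populations in neighborhood $i$, and $B$ and $W$ are their totals for the whole metropolitan area. A value of 0 means every neighborhood mirrors the metro-wide racial balance, while 100 means the two groups share no neighborhoods at all.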

Pluralism

Pluralism is represented by the ideal of the United States as a “salad bowl”: a great mixture of different cultures where each culture retains its own identity and yet adds to the flavor of the whole. Genuine pluralism is characterized by mutual respect on the part of all cultures, both dominant and subordinate, creating a multicultural environment of acceptance. In reality, true pluralism is a challenging goal to reach. In the United States, the mutual respect required by pluralism is often missing, and the nation’s past pluralist model of a melting pot posits a society where cultural differences aren’t embraced as much as erased.

Assimilation

Assimilation describes the process by which a minority individual or group gives up its own identity by taking on the characteristics of the dominant culture. In the United States, which has a history of welcoming and absorbing immigrants from different lands, assimilation has been a function of immigration.

Most people in the United States have immigrant ancestors. In relatively recent history, between 1890 and 1920, the United States became home to around 24 million immigrants. In the decades since then, further waves of immigrants have come to these shores and have eventually been absorbed into U.S. culture, sometimes after facing extended periods of prejudice and discrimination. Assimilation may lead to the loss of the minority group’s cultural identity as they become absorbed into the dominant culture, but assimilation has minimal to no impact on the majority group’s cultural identity.

Some groups may keep only symbolic gestures of their original ethnicity. For instance, many Irish Americans may celebrate Saint Patrick’s Day, many Hindu Americans enjoy a Diwali festival, and many Mexican Americans may celebrate Cinco de Mayo (a May 5 acknowledgment of Mexico’s victory at the 1862 Battle of Puebla). However, for the rest of the year, other aspects of their originating culture may be forgotten.

Assimilation is antithetical to the “salad bowl” created by pluralism; rather than maintaining their cultural flavor, subordinate cultures give up their traditions in order to conform to their new environment. Social scientists measure the degree to which immigrants have assimilated to a new culture with four benchmarks: socioeconomic status, spatial concentration, language assimilation, and intermarriage. When faced with racial and ethnic discrimination, it can be difficult for new immigrants to assimilate fully. Language assimilation, in particular, can be a formidable barrier, limiting employment and educational options and therefore constraining growth in socioeconomic status.

Amalgamation

Amalgamation is the process by which a minority group and a majority group combine to form a new group. Amalgamation creates the classic “melting pot” analogy; unlike the “salad bowl,” in which each culture retains its individuality, the “melting pot” ideal sees the combination of cultures that results in a new culture entirely.

Amalgamation, also known as miscegenation, is achieved through intermarriage between races. In the United States, anti-miscegenation laws flourished in the South during the Jim Crow era. It was not until the 1967 case Loving v. Virginia that anti-miscegenation laws were declared unconstitutional, striking the last of them from the books.

3.2 Understanding Culture

What are the rules when you pass an acquaintance at school, work, in the grocery store, or in the mall? Generally, we do not consider all of the intricacies of the rules of behavior. We may simply say, “Hello!” and ask, “How was your weekend?” or some other trivial question meant to be a friendly greeting. Rarely do we physically embrace or even touch the individual. In fact, doing so may be viewed with scorn or distaste, since as people in the United States we have fairly rigid rules about personal space. However, we all adhere to various rules and standards that are created and maintained in culture. These rules and expectations have meaning, and there are ways in which you may violate them. Consider what would happen if you stopped and informed everyone who said, “Hi, how are you?” exactly how you were doing that day, and in detail. You would more than likely violate the cultural rules that govern greetings. Perhaps in a different culture the question would be more literal and would require a response. Or if you are having coffee with a good friend, perhaps that question warrants a more detailed answer. These examples are all aspects of culture: the shared beliefs, values, and practices that participants must learn. Sociologically, we examine in which situations and contexts certain behavior is expected and in which it is not. These rules are created and enforced by people who interact and share culture.

In everyday conversation, people rarely distinguish between the terms culture and society, but the terms have slightly different meanings, and the distinction is important to a geographer. A society describes a group of people who share a community and a culture. By “community,” social scientists refer to a definable region—as small as a neighborhood (Brooklyn, or “the east side of town”), as large as a country (Ethiopia, the United States, or Nepal), or somewhere in between (in the United States, this might include someone who identifies with Southern or Midwestern society). To clarify, a culture represents the beliefs and practices of a group, while society represents the people who share those beliefs and practices. Neither society nor culture could exist without the other. In this chapter, we examine the relationship between culture and society in greater detail and pay special attention to the elements and forces that shape culture, including diversity and cultural changes. A final discussion touches on the different theoretical perspectives from which human geographers research culture.

Defining Culture

Humans are social creatures. Since the dawn of Homo sapiens nearly 250,000 years ago, people have grouped into communities in order to survive. Living together, people form everyday habits and behaviors – from specific methods of childrearing to preferred techniques for obtaining food. In modern-day Paris, many people shop daily at outdoor markets to pick up what they need for their evening meal, buying cheese, meat, and vegetables from different specialty stalls. In the United States, the majority of people shop once a week at supermarkets, filling large carts to the brim. How would a Parisian perceive U.S. shopping behaviors that Americans take for granted?

Almost every human behavior, from shopping to marriage to expressions of feelings, is learned. In the United States, people tend to view marriage as a choice between two people, based on mutual feelings of love. In other nations and in other times, marriages have been arranged through an intricate process of interviews and negotiations between entire families, or in other cases, through a direct system, such as a “mail-order bride.” To someone raised in New York City, the marriage customs of a family from Nigeria may seem strange or even wrong. Conversely, someone from a traditional Kolkata family might be perplexed with the idea of romantic love as the foundation for marriage and lifelong commitment. In other words, how people view marriage depends mostly on what they have been taught.

Behavior based on learned customs is not a bad thing. Being familiar with unwritten rules helps people feel secure and “normal.” Most people want to live their daily lives confident that their behaviors will not be challenged or disrupted. However, even an action as seemingly simple as commuting to work evidences a great deal of cultural propriety.

Culture consists of thoughts and tangible things. Material culture refers to the objects or belongings of a group of people. Nonmaterial culture, in contrast, consists of the ideas, attitudes, and beliefs of a society. Material and nonmaterial aspects of culture are linked, and physical objects often symbolize cultural ideas. These material and nonmaterial aspects of culture can vary subtly from region to region.

Cultural Universals

Often, a comparison of one culture to another will reveal obvious differences. However, all cultures also share common elements. Cultural universals are patterns or traits that are globally common to all societies. One example of a cultural universal is the family unit: every human society recognizes a family structure that regulates sexual reproduction and the care of children. Even so, how that family unit is defined and how it functions vary. In many Asian cultures, for example, family members from all generations commonly live together in one household. In these cultures, young adults continue to live in the extended household family structure until they marry and join their spouse’s household, or they may remain and raise their nuclear family within the extended family’s homestead. In the United States, by contrast, individuals are expected to leave home and live independently for a period before forming a family unit that consists of parents and their offspring. Other cultural universals include customs like funeral rites, weddings, and celebrations of births. However, each culture may view the ceremonies quite differently.

Anthropologist George Murdock first recognized the existence of cultural universals while studying systems of kinship around the world. Murdock found that cultural universals often revolve around basic human survival, such as finding food, clothing, and shelter, or around shared human experiences, such as birth and death or illness and healing. Through his research, Murdock identified other universals, including language, the concept of personal names, and, interestingly, jokes. Humor seems to be a universal way to release tensions and create a sense of unity among people (Murdock 1949). Social scientists consider humor necessary to human interaction because it helps individuals navigate otherwise tense situations.

Ethnocentrism and Cultural Relativism

Despite how much humans have in common, cultural differences are far more prevalent than cultural universals. For example, while all cultures have language, analysis of particular language structures and conversational etiquette reveal tremendous differences. In some Middle Eastern cultures, it is common to stand close to others in conversation. North Americans keep more distance and maintain an ample “personal space.” Even something as simple as eating and drinking varies significantly from culture to culture. If your professor comes into an early morning class holding a mug of liquid, what do you assume she is drinking? In the United States, it’s most likely filled with coffee, not Earl Grey tea, a favorite in England, or Yak Butter tea, a staple in Tibet.

The way cuisines vary across cultures fascinates many people. Some travelers pride themselves on their willingness to try unfamiliar foods, like celebrated food writer Anthony Bourdain, while others return home expressing gratitude for their native culture’s fare. Often, people in the United States express disgust at other cultures’ cuisine and think that it is gross to eat meat from a dog or guinea pig, for example, while they do not question their habit of eating cows or pigs. Such attitudes are an example of ethnocentrism, or evaluating and judging another culture based on how it compares to one’s own cultural norms. Ethnocentrism, as the social scientist William Graham Sumner (1906) described the term, involves a belief or attitude that one’s own culture is better than all others. Almost everyone is a little bit ethnocentric. For example, Americans tend to say that people from England drive on the “wrong” side of the road, rather than on the “other” side. Someone from a country where dog meat is standard fare might find it off-putting to see a dog in a French restaurant—not on the menu, but as a pet and patron’s companion. An example of ethnocentrism is referring to parts of Asia as the “Far East.” One might question, “Far East of where?”

A high level of appreciation for one’s own culture can be healthy; a shared sense of community pride, for example, connects people in a society. However, ethnocentrism can lead to disdain or dislike for other cultures and could cause misunderstanding and conflict. People with the best intentions sometimes travel to a society to “help” its people, because they see them as uneducated or backward – inherently inferior. In reality, these travelers are guilty of cultural imperialism, the deliberate imposition of one’s own cultural values on another culture. Europe’s colonial expansion, begun in the sixteenth century, was often accompanied by severe cultural imperialism. European colonizers often viewed the people in the lands they colonized as uncultured savages who needed European governance, dress, religion, and other cultural practices. A more modern example of cultural imperialism may include the work of international aid agencies who introduce agricultural methods and plant species from developed countries while overlooking indigenous varieties and agricultural approaches that are better suited to the particular region.

Ethnocentrism can be so strong that when confronted with all of the differences of a new culture, one may experience disorientation and frustration, called culture shock. A traveler from Chicago might find the nightly silence of rural Montana unsettling, not peaceful. An exchange student from China might be annoyed by the constant interruptions in class as other students ask questions – a practice that is considered rude in China. Perhaps the Chicago traveler was initially captivated with Montana’s quiet beauty, and the Chinese student was initially excited to see a U.S.-style classroom firsthand. However, as they experience unanticipated differences from their own culture, their excitement gives way to discomfort and doubts about how to behave appropriately in the new situation. Eventually, as people learn more about a culture, they recover from culture shock.

Culture shock may appear because people are not always expecting cultural differences. Anthropologist Ken Barger (1971) discovered this when he conducted participatory observation in an Inuit community in the Canadian Arctic. Barger, originally from Indiana, hesitated when invited to join a local snowshoe race: he knew he would never hold his own against these experts. Sure enough, he finished last, to his mortification. However, the tribal members congratulated him, saying, “You really tried!” In Barger’s own culture, he had learned to value victory. To the Inuit people, winning was enjoyable, but their culture valued survival skills essential to their environment: how hard someone tried could mean the difference between life and death. Throughout his stay, Barger participated in caribou hunts, learned how to take shelter in winter storms, and sometimes went days with little or no food to share among tribal members. Trying hard and working together, two nonmaterial values, were indeed much more important than winning.

During his time with the Inuit tribe, Barger learned to engage in cultural relativism. Cultural relativism is the practice of assessing a culture by its own standards rather than viewing it through the lens of one’s own culture. Practicing cultural relativism requires an open mind and a willingness to consider, and even adapt to, new values and norms. However, indiscriminately embracing everything about a new culture is not always possible. Even the most culturally relativist people from egalitarian societies — ones in which women have political rights and control over their own bodies — would question whether the widespread practice of female genital mutilation in countries such as Ethiopia and Sudan should be accepted as a part of cultural tradition. Human geographers attempting to engage in cultural relativism, then, may struggle to reconcile aspects of their own culture with aspects of a culture that they are studying.

Sometimes when people attempt to rectify feelings of ethnocentrism and develop cultural relativism, they swing too far to the other end of the spectrum. Xenocentrism is the opposite of ethnocentrism, and refers to the belief that another culture is superior to one’s own. (The Greek root word xeno, pronounced “ZEE-no,” means “stranger” or “foreign guest.”) An exchange student who goes home after a semester abroad or a geographer who returns from the field may find it difficult to associate with the values of their own culture after having experienced what they deem a more upright or nobler way of living.

Perhaps the greatest challenge for geographers and other social scientists studying different cultures is the matter of keeping perspective. It is impossible for anyone to keep all cultural biases at bay; the best we can do is strive to be aware of them. Pride in one’s own culture does not have to lead to imposing its values on others. Moreover, an appreciation for another culture should not preclude individuals from studying it with a critical eye.

Elements of Cultural Values and Beliefs

The first, and perhaps most crucial, elements of culture we will discuss are its values and beliefs. Values are a culture’s standard for discerning what is good and just in society. Values are deeply embedded and critical for transmitting and teaching a culture’s beliefs. Beliefs are the tenets or convictions that people hold to be true. Individuals in a society have specific beliefs, but they also share common values. To illustrate the difference, Americans commonly believe in the American Dream—that anyone who works hard enough will be successful and wealthy. Underlying this belief is the American value that wealth is useful and important.

Values help shape a society by suggesting what is right and wrong, beautiful and ugly, sought, or avoided. Consider the value that the United States places upon youth. Children represent innocence and purity, while a youthful adult appearance signifies sexuality. Shaped by this value, individuals spend millions of dollars each year on cosmetic products and surgeries to look young and beautiful. The United States also has an individualistic culture, meaning people place a high value on individuality and independence. In contrast, many other cultures are collectivist, meaning the welfare of the group and group relationships are a primary value.

Living up to a culture’s values can be difficult. It is easy to value good health, but it is hard to quit smoking. Marital monogamy is valued, but many spouses engage in infidelity. Cultural diversity and equal opportunities for all people are valued in the United States, yet the country’s highest political offices have been dominated by white men.

Values often suggest how people should behave, but they do not accurately reflect how people do behave. Values portray an ideal culture; the standards society would like to embrace and live up to. However, ideal culture differs from real culture, the way society actually is, based on what occurs and exists. In an ideal culture, there would be no traffic accidents, murders, poverty, or racial tension. However, in real culture, police officers, lawmakers, educators, and social workers continuously strive to prevent or repair those accidents, crimes, and injustices.

One way societies strive to put values into action is through rewards, sanctions, and punishments. When people observe the norms of society and uphold their values, they are often rewarded. A boy who helps an elderly woman board a bus may receive a smile and a “thank you.” A business manager who raises profit margins may receive a quarterly bonus. People sanction certain behaviors by giving their support, approval, or permission, or by instilling formal actions of disapproval and nonsupport. Sanctions are a form of social control, a way to encourage conformity to cultural norms. Sometimes people conform to norms in anticipation or expectation of positive sanctions: good grades, for instance, may mean praise from parents and teachers. From a criminal justice perspective, properly used social control is also inexpensive crime control. Utilizing social control approaches pushes most people to conform to societal rules, regardless of whether authority figures (such as law enforcement) are present.

When people go against a society’s values, they are punished. A boy who shoves an older woman aside to board the bus first may receive frowns or even a scolding from other passengers. A business manager who drives away customers will likely be fired. Breaking norms and rejecting values can lead to cultural sanctions such as earning a negative label—lazy, no-good bum—or to legal sanctions, such as traffic tickets, fines, or imprisonment.

Values are not static; they vary across time and between groups as people evaluate, debate, and change collective societal beliefs. Values also vary from culture to culture. For example, cultures differ in their values about what kinds of physical closeness are appropriate in public. It is rare to see two male friends or coworkers holding hands in the United States, where that behavior often symbolizes romantic feelings. However, in many nations, masculine physical intimacy is considered natural in public. This difference in cultural values came to light when people reacted to photos of former president George W. Bush holding hands with the Crown Prince of Saudi Arabia in 2005. A simple gesture, such as hand-holding, carries significant symbolic differences across cultures.

Norms

So far, the examples in this chapter have often described how people are expected to behave in certain situations – for example, when buying food or boarding a bus. These examples describe the visible and invisible rules of conduct through which societies are structured, or what social scientists call norms. Norms define how to behave in accordance with what a society has defined as good, right, and important, and most members of the society adhere to them.

Formal norms are established, written rules. They are behaviors worked out and agreed upon in order to suit and serve the most people. Laws are formal norms, but so are employee manuals, college entrance exam requirements, and “no running” signs at swimming pools. Formal norms are the most specific and clearly stated of the various types of norms, and they are the most strictly enforced. However, even formal norms are enforced to varying degrees and are reflected in cultural values.

For example, money is highly valued in the United States, so monetary crimes are punished. It is against the law to rob a bank, and banks go to great lengths to prevent such crimes. People safeguard valuable possessions and install anti-theft devices to protect homes and cars. A less strictly enforced social norm is driving while intoxicated. While it is against the law to drive drunk, drinking is, for the most part, an acceptable social behavior. Moreover, though there are laws to punish drunk driving, there are few systems in place to prevent the crime. These examples show a range of enforcement regarding formal norms.

There are plenty of formal norms, but the list of informal norms – casual behaviors that are generally and widely conformed to – is longer. People learn informal norms through observation, imitation, and general socialization. Some informal norms are taught directly, while others are learned by observation, including observations of the consequences when someone else violates a norm. However, although informal norms define personal interactions, they extend into other systems as well. Most people do not commit even benign breaches of informal norms. Informal norms dictate appropriate behaviors without the need for written rules.

Norms may be further classified as either mores or folkways. Mores (mor-ays) are norms that embody the moral views and principles of a group. Violating them can have serious consequences. The strongest mores are legally protected with laws or other formal norms. In the United States, for instance, murder is considered immoral, and it is punishable by law (a formal norm). However, more often, mores are judged and guarded by public sentiment (an informal norm). People who violate mores are seen as shameful. They can even be shunned or banned from some groups. The mores of the U.S. school system require that a student’s writing be in the student’s own words or use special forms (such as quotation marks and a whole system of citation) for crediting other writers. Writing another person’s words as if they are one’s own has a name—plagiarism. The consequences of violating this norm are severe and usually result in expulsion.

Unlike mores, folkways are norms without any moral underpinnings. Rather, folkways direct appropriate behavior in the day-to-day practices and expressions of a culture. They indicate whether to shake hands or kiss on the cheek when greeting another person. Many folkways are actions we take for granted. People need to act without thinking in order to get seamlessly through daily routines; they cannot stop and analyze every action (Sumner 1906). Those who experience culture shock may find that it subsides as they learn the new culture’s folkways and can move through their daily routines more smoothly. Folkways might be small manners, learned by observation and imitated, but they are by no means trivial. Like mores and laws, these norms help people negotiate their daily lives within a given culture.

Folk and Popular Culture

It may seem obvious that there is a multitude of cultural differences between societies in the world. After all, we can easily see that people vary from one society to the next. It is natural that a young woman from rural Kenya would have a very different view of the world from an older man in Mumbai—one of the most populated cities in the world. Additionally, each culture has its own internal variations. Sometimes the differences between cultures are not nearly as significant as the differences inside cultures.

Do you prefer listening to opera or hip-hop music? Do you like watching horseracing or NASCAR? Do you read books of poetry or celebrity magazines? In each pair, one type of entertainment is considered highbrow and the other lowbrow. Social scientists use the term high culture to describe the pattern of cultural experiences and attitudes that exist in the highest-class segments of a society. People often associate high culture with intellectualism, political power, and prestige. In America, high culture also tends to be associated with wealth. Events considered high culture can be expensive and formal—attending a ballet, seeing a play, or listening to a live symphony performance.

The term popular culture, also called pop culture, refers to the pattern of cultural experiences and attitudes that exist in mainstream society. Popular culture events might include a parade, a baseball game, or the season finale of a television show. Rock and pop music – “pop” is short for “popular” – are part of popular culture. Popular culture is often expressed and spread via commercial media such as radio, television, movies, the music industry, publishers, and corporate-run websites. Unlike high culture, popular culture is known and accessible to most people. You can share a discussion of favorite football teams with a new coworker or comment on American Idol when making small talk in line at the grocery store. However, if you tried to launch into an in-depth discussion on the classical Greek play Antigone, few members of U.S. society today would be familiar with it.

Although high culture may be viewed as superior to popular culture, the labels of high culture and popular culture vary over time and place. Shakespearean plays, considered pop culture when they were written, are now part of our society’s high culture. Five hundred years from now, will our descendants associate Breaking Bad with the cultural elite?

Subculture and Counterculture

A subculture is just what it sounds like – a smaller cultural group within a broader culture; people of a subculture are part of the broader culture but also share a specific identity within a smaller group.

Thousands of subcultures exist within the United States. Ethnic and racial groups share the language, food, and customs of their heritage. Other subcultures are united by shared experiences. Biker culture revolves around a dedication to motorcycles. Some subcultures are formed by members who possess traits or preferences that differ from the majority of a society’s population. The body modification community embraces aesthetic additions to the human body, such as tattoos, piercings, and certain forms of plastic surgery. In the United States, adolescents often form subcultures to develop a shared youth identity. Alcoholics Anonymous offers support to those suffering from alcoholism. However, even as members of a subculture band together, they still identify with and participate in the larger society.

Human geographers and sociologists distinguish subcultures from countercultures, which are a type of subculture that rejects some of the larger culture’s norms and values. In contrast to subcultures, which operate relatively smoothly within the larger society, countercultures might actively defy larger society by developing their own set of rules and norms to live by, sometimes even creating communities that operate outside of greater society.

Cults, a word derived from culture, are also considered counterculture groups. The group “Yearning for Zion” (YFZ) in Eldorado, Texas, existed outside the mainstream and the limelight, until its leader was accused of statutory rape and underage marriage. The sect’s formal norms clashed too severely to be tolerated by U.S. law, and in 2008, authorities raided the compound and removed more than two hundred women and children from the property.

Cultural Change

Culture is always evolving. Moreover, new things are added to material culture every day, and they affect nonmaterial culture as well. Cultures change when something new (say, railroads or smartphones) opens up new ways of living and when new ideas enter a culture (say, as a result of travel or globalization).

Innovation: Discovery and Invention

Innovation refers to an object or concept’s initial appearance in society – it is innovative because it is markedly new. There are two ways to come across an innovative object or idea: discover it or invent it. Discoveries make known previously unknown but existing aspects of reality. In 1610, when Galileo looked through his telescope and discovered Saturn’s rings, the rings were already there, but until then, no one had known about them. When Christopher Columbus encountered America, the land was, of course, already well known to its inhabitants. However, Columbus’s discovery was new knowledge for Europeans, and it opened the way to changes in European culture, as well as to the cultures of the discovered lands. For example, new foods such as potatoes and tomatoes transformed the European diet, and horses brought from Europe changed hunting practices of Native American tribes of the Great Plains.

Inventions result when something new is formed from existing objects or concepts—when things are put together in an entirely new manner. In the late 1800s and early 1900s, electric appliances were invented at an astonishing pace. Cars, airplanes, vacuum cleaners, lamps, radios, telephones, and televisions were all new inventions. Inventions may shape a culture when people use them in place of older ways of carrying out activities and relating to others, or as a way to carry out new kinds of activities. Their adoption reflects (and may shape) cultural values, and their use may require new norms for new situations.

Consider the introduction of modern communication technology, such as mobile phones and smartphones. As more and more people began carrying these devices, phone conversations no longer were restricted to homes, offices, and phone booths. People on trains, in restaurants, and in other public places became annoyed by listening to one-sided conversations. Norms were needed for cell phone use. Some people pushed for the idea that those who are out in the world should pay attention to their companions and surroundings. However, technology enabled a workaround: texting, which allows quiet communication and has surpassed phoning as the leading way to satisfy today’s highly valued ability to stay in touch anywhere and everywhere.

When the pace of innovation increases, it can lead to generation gaps. A skeptical older generation sometimes dismisses technological gadgets that catch on quickly with one generation. A culture’s objects and ideas can cause not just generational but cultural gaps. Material culture tends to diffuse more quickly than nonmaterial culture; technology can spread through society in a matter of months, but it can take generations for the ideas and beliefs of society to change. Sociologist William F. Ogburn coined the term culture lag to refer to this time that elapses between the introduction of a new item of material culture and its acceptance as part of nonmaterial culture (Ogburn 1957).

Culture lag can also cause tangible problems. The infrastructure of the United States, built a hundred years ago or more, is having trouble supporting today’s more densely populated and fast-paced life. There is a lag in conceptualizing solutions to infrastructure problems. Rising fuel prices, increased air pollution, and traffic jams are all symptoms of culture lag. Although people are becoming aware of the consequences of overusing resources, the means to support changes take time to achieve.

Diffusion and Globalization

The integration of world markets and technological advances of the last decades have allowed for greater exchange between cultures through the processes of globalization and diffusion. Beginning in the 1980s, Western governments began to deregulate social services while granting greater liberties to private businesses. As a result, world markets became dominated by multinational companies in the 1980s, a new state of affairs at that time. We have since come to refer to this integration of international trade and finance markets as globalization. Increased communications and air travel have further opened doors for international business relations, facilitating the flow not only of goods but also of information and people (Scheuerman 2014). Today, many U.S. companies set up offices in other nations where the costs of resources and labor are cheaper. When a person in the United States calls to get information about banking, insurance, or computer services, the person taking that call may be working in another country.

Alongside the process of globalization is diffusion, or the spread of material and nonmaterial culture. While globalization refers to the integration of markets, diffusion relates to a similar process in the integration of international cultures. Middle-class Americans can fly overseas and return with a new appreciation of Thai noodles or Italian gelato. Access to television and the Internet has brought the lifestyles and values portrayed in U.S. sitcoms into homes around the globe. Twitter feeds from public demonstrations in one nation have encouraged political protesters in other countries. When this kind of diffusion occurs, material objects and ideas from one culture are introduced into another.

Theoretical Perspectives on Culture

Music, fashion, technology, and values—all are products of culture. However, what do they mean? How do human geographers perceive and interpret culture based on these material and nonmaterial items? Let us finish our analysis of culture by reviewing them in the context of three theoretical perspectives: functionalism, conflict theory, and symbolic interactionism.

Functionalists view society as a system in which all parts work—or function—together to create society as a whole. In this way, societies need culture to exist. Cultural norms function to support the fluid operation of society, and cultural values guide people in making choices. Just as members of a society work together to fulfill a society’s needs, culture exists to meet its members’ basic needs.

Functionalists also study culture in terms of values. Education is an essential concept in the United States because it is valued. The culture of education—including material culture such as classrooms, textbooks, libraries, and dormitories—supports the emphasis placed on the value of educating a society’s members.

Conflict theorists view social structure as inherently unequal, based on power differentials related to issues like class, gender, race, and age. For a conflict theorist, culture is seen as reinforcing issues of “privilege” for certain groups based upon race, sex, class, and so on. Women strive for equality in a male-dominated society. Senior citizens struggle to protect their rights, their health care, and their independence from a younger generation of lawmakers. Advocacy groups such as the ACLU work to protect the rights of all races and ethnicities in the United States.

Inequalities exist within a culture’s value system. Therefore, a society’s cultural norms benefit some people but hurt others. Some norms, formal and informal, are practiced at the expense of others. Women were not allowed to vote in the United States until 1920. Gay and lesbian couples have been denied the right to marry in some states. Racism and bigotry are very much alive today. Although cultural diversity is supposedly valued in the United States, many people still frown upon interracial marriages. Same-sex marriages were banned in most states until 2015, and polygamy—common in some cultures—is unthinkable to most Americans.

At the core of conflict theory is the effect of economic production and materialism: dependence on technology in rich nations versus a lack of technology and education in emerging nations. Conflict theorists believe that a society’s system of material production affects the rest of the culture. People who have less power also have less ability to adapt to cultural change. This view contrasts with the perspective of functionalism. In the U.S. culture of capitalism, to illustrate, we continue to strive toward the promise of the American dream, which perpetuates the belief that the wealthy deserve their privileges.

Symbolic interactionism is a sociological perspective that is most concerned with the face-to-face interactions between members of society. Interactionists see culture as being created and maintained by the ways people interact and in how individuals interpret each other’s actions. Proponents of this theory conceptualize human interactions as a continuous process of deriving meaning from both objects in the environment and the actions of others. This is where the term symbolic comes into play. Every object and action has a symbolic meaning, and language serves as a means for people to represent and communicate their interpretations of these meanings to others. Those who believe in symbolic interactionism perceive culture as highly dynamic and fluid, as it is dependent on how meaning is interpreted and how individuals interact when conveying these meanings.

We began this chapter by asking what culture is. Culture is comprised of all the practices, beliefs, and behaviors of a society. Because culture is learned, it includes how people think and express themselves. While we may like to consider ourselves individuals, we must acknowledge the impact of culture; we inherit a language that shapes our perceptions and patterned behavior, including those concerning family and friends, and faith and politics.

To an extent, culture is a social comfort. After all, sharing a similar culture with others is precisely what defines societies. Nations would not exist if people did not coexist culturally. There could be no societies if people did not share heritage and language, and civilization would cease to function if people did not agree on similar values and systems of social control. Culture is preserved through transmission from one generation to the next, but it also evolves through processes of innovation, discovery, and cultural diffusion. We may be restricted by the confines of our own culture, but as humans, we can question values and make conscious decisions. No better evidence of this freedom exists than the amount of cultural diversity within our society and around the world. The more we study another culture, the better we become at understanding our own.

Defining Cultural Geography

Professor Don Mitchell argues that cultural geography as a subdiscipline did not come into existence merely to serve as a conduit through which geographers can describe and explain the various cultures of the world in the context of space and place. Instead, he contends that cultural geography is a product of “culture wars.” He builds this argument as follows:

In the nineteenth century, people in the Western World believed that Western civilization was superior to all others on earth, and they wanted to know why European culture was far more advanced (in their eyes) than any other. The British, in particular, were keen to pursue this line of research, but so, too, were the Germans, Americans, and French. After all, the nineteenth century was a time of almost unchallenged European imperialism. Therefore, nineteenth-century geographers tended to think of themselves as significant players in the imperial system.

Over time, the work of early cultural geographers split into two opposing camps. One group was epitomized by Carl Sauer, who is seen by many as the father of modern cultural geography, and the other by Friedrich Ratzel, Ellen Churchill Semple, and Ellsworth Huntington, who sought to deterministically connect human behavior to the physical environment.

Determinism

Environmental determinism argues that both general features and regional variations of human cultures and societies are determined by the physical and biological forms that make up the earth’s many natural landscapes. Geographers influenced by Semple and Huntington tended to describe and explain what they believed to be “superior” European culture (civilization) through the application of the theory of environmental determinism. From their writings, it does not seem that they ever recognized the inaccuracies of their position, let alone the arrogant, racist foundation upon which it rested.

Although modern geographers rarely discuss the impacts of environmental determinism except to note its serious flaws as a model for spatial analysis, its basic concepts were used by the Third Reich to justify German expansion in the 1930s and 1940s. Friedrich Ratzel, a German geographer (the American geographer Ellen Churchill Semple was one of his students), argued that nation-states are organic and, therefore, must grow in order to survive. In other words, states must continually seek additional “lebensraum” (living room). The state, a living thing, was a natural link between the people and the natural environment (blood and soil). Moreover, the state provided a living tie between people and a place. This application of environmental determinism, and of Social Darwinism, eventually came to be more than a mere academic exercise because it was used to justify, or legitimize, the conquering of one people by another. At the height of European imperialism, academics depicted the tremendous colonial empires as natural extensions of superior European cultures that had developed in the beneficial natural surroundings of the mid-latitudes. The concept of “manifest destiny” was used similarly to justify the expansion of the United States from the Atlantic to the Pacific shores, at the expense of indigenous people.

Although Ratzel, Semple, and Huntington never expected their ideas to be used to justify Adolf Hitler’s conquest of Europe, Nazi geographers and political scientists built upon their work to develop theories of Nordic racial and cultural superiority. Semple and Huntington wanted nothing more than to define the boundaries of their discipline and to explain the differences in “cultures” and “places” throughout the world. They were merely striving to carve out a piece of academic or intellectual turf for themselves and like-minded colleagues.

By the 1920s, environmental determinism was already under attack by people such as Carl Sauer (at the University of California, Berkeley). Nevertheless, many scholars continued to base their work on the belief that human beings are primarily a product of the environment in which they live. Frederick Jackson Turner, the American historian who eloquently described the westward expansion of the United States, and Sir Halford Mackinder, the British geographer who developed the “Heartland Theory,” explained away the conquering of indigenous people by Europeans as perhaps regrettable, but nonetheless natural and unavoidable (given the superiority of cultures spawned in the mid-latitude environs of Western Europe).

The Cultural Landscape

Carl Sauer was probably the most influential cultural geographer of the twentieth century. Sauer’s work is characterized by a focus on the material landscape tempered with an abiding interest in human ecology, and the damaging impacts of humans on the environment. Additionally, and of equal importance, Sauer worked tirelessly to trace the origins and diffusions of cultural practices such as agriculture, the domestication of animals, and the use of fire.

Although there is no question that Sauer’s contributions to cultural geography are of great worth, some also criticize him for an anti-modern, anti-urban bias. Even so, his efforts to correct the inherent flaws associated with “environmental determinism” significantly strengthened the discipline of geography, and cultural geography in particular.

In 1925, Sauer published The Morphology of Landscape. In this work, he sought to demonstrate that nature does not create culture; instead, culture, working with and on nature, creates ways of life. Sauer considered human impacts on the landscape to be a manifestation of culture. Therefore, he argued, in order to understand a culture, a geographer must learn to read the landscape.

Sauer looked at “culture” holistically. Simply put, Sauer regarded “culture” as a way of life. Sauer, however, did not fully develop an explanation of what “culture” is. Instead, he left it to anthropologist Franz Boas to debunk “environmental determinism” and “social Darwinism” and to call for the analysis of cultures on “their” own terms (as opposed to using a hierarchical ranking system). Although Boas’s approach was mildly rooted in “cultural relativism,” he was not necessarily interested in justifying cultural practices. To the contrary, he wanted to eliminate the application of personal biases when studying cultures (as discussed in Mitchell, Don, Cultural Geography: A Critical Introduction).

3.3 Geography of World Languages

Language and religion are two essential cultural characteristics for human geographers to study. Geographers describe the historical and spatial distributions of language and religion across the landscape as a way of understanding cultural identity. Furthermore, when geographers study religion, they are less concerned with theology and more concerned with the diffusion and interaction of religious ideologies across time and space and the imprint they have on the cultural landscape.

Symbols and Language

Humans, consciously and subconsciously, are always striving to make sense of their surrounding world. Symbols – such as gestures, signs, objects, signals, and words – help people understand that world. They provide clues to understanding experiences by conveying recognizable meanings that are shared by societies.

The world is filled with symbols. Sports uniforms, company logos, and traffic signs are symbols. In some cultures, a gold ring is a symbol of marriage. Some symbols are highly functional; stop signs, for instance, provide useful instruction. As physical objects, they belong to material culture, but because they function as symbols, they also convey nonmaterial cultural meanings. Some symbols are valuable only in what they represent. Trophies, blue ribbons, or gold medals, for example, serve no other purpose than to represent accomplishments. However, many objects have both material and nonmaterial symbolic value.

A police officer’s badge and uniform are symbols of authority and law enforcement. The sight of an officer in uniform or a squad car triggers reassurance in some citizens, and annoyance, fear, or anger in others.

It is easy to take symbols for granted. Few people challenge or even think about stick figure signs on the doors of public bathrooms. However, those figures are more than just symbols that tell men and women which bathrooms to use. They also uphold the value, in the United States, that public restrooms should be gender exclusive. Even though stalls are relatively private, most places do not offer unisex bathrooms.

Symbols often get noticed when they are out of context. Used unconventionally, they convey strong messages. A stop sign on the door of a corporation makes a political statement, as does a camouflage military jacket worn in an antiwar protest. Together, the semaphore signals for “N” and “D” represent nuclear disarmament – and form the well-known peace sign (Westcott 2008). Today, some college students have taken to wearing pajamas and bedroom slippers to class, clothing that was formerly associated only with privacy and bedtime. Though students might deny it, the outfit defies traditional cultural norms and makes a statement.

Even the destruction of symbols is symbolic. Effigies representing public figures are burned to demonstrate anger at certain leaders. In 1989, crowds tore down the Berlin Wall, a decades-old symbol of the division between East and West Germany, communism, and capitalism.

While different cultures have varying systems of symbols, one symbol is common to all: language. Language is a symbolic system through which people communicate and through which culture is transmitted. Some languages contain a system of symbols used for written communication, while others rely on only spoken communication and nonverbal actions.

Societies often share a single language, and many languages contain the same essential elements. An alphabet is a written system made of symbolic shapes that refer to spoken sound. Taken together, these symbols convey specific meanings. The English alphabet uses a combination of twenty-six letters to create words; these twenty-six letters make up over 600,000 recognized English words (OED Online 2011).

Rules for speaking and writing vary even within cultures, most notably by region. Do you refer to a can of carbonated liquid as “soda,” “pop,” or “Coke”? Is a household entertainment room a “family room,” “rec room,” or “den”? When leaving a restaurant, do you ask your server for a “check,” the “ticket,” or your “bill”?

Language is continuously evolving as societies create new ideas. In this age of technology, people have adapted almost instantly to new nouns such as “e-mail” and “Internet,” and verbs such as “downloading,” “texting,” and “blogging.” Twenty years ago, the general public would have considered these nonsense words.

Even while it continually evolves, language continues to shape our reality. This insight was established in the 1920s by two linguists, Edward Sapir and Benjamin Whorf. They believed that reality is culturally determined, and that any interpretation of reality is based on a society’s language. To prove this point, they argued that every language has words or expressions specific to that language. In the United States, for example, the number thirteen is associated with bad luck. In Japan, however, the number four is considered unlucky, since it is pronounced similarly to the Japanese word for “death.”

The Sapir-Whorf hypothesis is based on the idea that people experience their world through their language, and that they, therefore, understand their world through the culture embedded in their language. The hypothesis, which has also been called linguistic relativity, states that language shapes thought (Swoyer 2003). Studies have shown, for instance, that unless people have access to the word “ambivalent,” they do not recognize an experience of uncertainty from having conflicting positive and negative feelings about one issue. Essentially, the hypothesis argues that, if a person cannot describe the experience, the person does not have the experience.

In addition to using language, people communicate without words. Nonverbal communication is symbolic, and, as in the case of language, much of it is learned through one’s culture. Some gestures are nearly universal: smiles often represent joy, and crying often represents sadness. Other nonverbal symbols vary across cultural contexts in their meaning. A thumbs-up, for example, indicates positive reinforcement in the United States, whereas, in Russia and Australia, it is an offensive curse (Passero 2002). Other gestures vary in meaning depending on the situation and the person. A wave of the hand can mean many things, depending on how it is done and for whom. It may mean “hello,” “goodbye,” “no, thank you,” or “I am royalty.” Winks convey a variety of messages, including “We have a secret,” “I am only kidding,” or “I am attracted to you.” From a distance, a person can understand the emotional gist of two people in conversation just by watching their body language and facial expressions. Furrowed brows and folded arms indicate a serious topic, possibly an argument. Smiles, with heads lifted and arms open, suggest a lighthearted, friendly chat.

Defining Language

Languages relate to each other in much the same way that family groups (think of a family tree) relate to each other. Language is a system of communication that provides meaning to a group of people through speech. Many languages around the world also have a literary tradition: a system of written communication. Most nations have an official language, and most citizens of such a nation speak and write in that language. Additionally, most official or governmental documents, monetary funds, and transportation signs are communicated in the official language. However, some political entities, such as the European Union, have multiple official languages – 23 in the EU’s case.

A language family is a collection of languages related through a common prehistoric ancestral language; it makes up the main trunk of the language tree. A language tree then has language branches, collections of languages related through a common ancestral language that existed thousands of years ago. Finally, a language group is a collection of languages within a single branch that shares a common origin in the relatively recent past and displays relatively few differences in grammar and vocabulary.
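
For readers who find it helpful to see the structure made explicit, the family/branch/group hierarchy can be sketched as a small tree in code. The following Python fragment is a minimal illustration, not part of the original text; the families, branches, groups, and language lists shown are simplified placeholders rather than a complete or authoritative classification.

# A minimal sketch of the family -> branch -> group hierarchy described
# above, using a nested dictionary. Entries are illustrative and simplified.
LANGUAGE_TREE = {
    "Indo-European": {                                        # family (trunk)
        "Germanic": {                                         # branch
            "West Germanic": ["English", "German", "Dutch"],  # group
            "North Germanic": ["Danish", "Swedish", "Norwegian"],
        },
        "Romance": {
            "Italo-Western": ["Spanish", "Portuguese", "French", "Italian"],
        },
    },
}

def classify(language):
    # Walk the tree and return the (family, branch, group) triple, if listed.
    for family, branches in LANGUAGE_TREE.items():
        for branch, groups in branches.items():
            for group, languages in groups.items():
                if language in languages:
                    return family, branch, group
    return None

print(classify("English"))  # ('Indo-European', 'Germanic', 'West Germanic')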

Dialects

There are various dialects within any language, and English in the United States is no exception. A dialect is a regional variation of a language, such as English, distinguished by distinctive vocabulary, spelling, and pronunciation. In the United States, there is a dialect difference between southern, northern, and western states. We can all understand each other, but the way we say things may sound accented or “weird” to others. There is also a dialect difference between American English and English spoken in Britain, as well as other parts of the British Commonwealth.

Origins and Diffusions of Language

All modern languages originate from an ancient language. The origin of every language may never be known because many ancient languages existed and changed before the written record. Root words within languages are the best evidence we have of how languages developed in pre-written history, and they can even suggest the geographic origin of an ancient language. For example, several languages have similar root words for winter and snow, but not for the ocean. This indicates that their common ancestral language originated in an interior location away from the ocean. It was not until people speaking this language migrated toward the ocean that a word for the ocean was added to the lexicon (a catalog of a language’s words).

There are many layers within the Indo-European language family, but we will focus on the specifics. Though they sound very different, German and English come from the same Germanic branch of the Indo-European language family. The Germanic branch is divided into High German and Low German. Most Germans speak High German, whereas English, Danish, and Flemish are considered subgroups of Low German. The Romance branch originated 2,000 years ago and is derived from Latin. Today, the major Romance languages are Spanish, Portuguese, French, and Italian. The Balto-Slavic branch was considered one broad language, called Slavic, as late as the seventh century, but it subdivided into a variety of smaller groups over time. Today the Balto-Slavic branch is composed of the following groups: East Slavic, West Slavic, South Slavic, and Baltic. The Indo-European branch spoken by the most people around the world is Indo-Iranian, with over 100 individual languages.

The origin of Indo-European languages has long been a topic of debate among scholars and scientists. In 2012, a team of evolutionary biologists at the University of Auckland led by Dr. Quentin Atkinson released a study that found all modern Indo-European languages could be traced back to a single root: Anatolian — the language of Anatolia, now modern-day Turkey.

Distribution of Language Families

The next question that must be asked is why languages diffused where they did. Social scientists, specifically linguists and archaeologists, disagree on this issue: some believe that languages are diffused by war and conquest, whereas others believe diffusion occurs by peaceful, symbiotic means such as food and trade. For example, English is spoken by over 2 billion people and is the dominant language in 55 countries. Much of this diffusion has to do with British imperialism. The primary purpose of British imperialism was to appropriate as much foreign territory as possible to use as sources of raw materials. Imperialism involves diffusion of language through both conquest and trade.

The linguistic structure of the Sino-Tibetan language family is very complex and different from that of the Indo-European language family. Unlike European languages, the Sino-Tibetan languages are based on hundreds of one-syllable spoken words. The other distinctive characteristic of this family is the way it is written. Rather than the letters used in Indo-European languages, the Chinese language is written using thousands of characters called ideograms, which represent ideas or concepts rather than sounds. The Sino-Tibetan language family exists mainly in China—the most populous nation in the world—and is over 4,000 years old. Of the over 1 billion Chinese citizens, 75 percent speak Mandarin, making it the most commonly used language in the world.

There is a large variety of other language families in East and Southeast Asia: Austronesian in Indonesia; Austro-Asiatic, which includes Vietnamese; Tai-Kadai, spoken in Thailand and surrounding countries; and Korean and Japanese. In Southwest Asia (also called the Middle East), there are three dominant language families. The Afro-Asiatic family includes Arabic, which is spoken by over 200 million people in several countries and is the written language of the Muslim holy book, the Quran. Hebrew is another Afro-Asiatic language and is the language of the Torah and Talmud (Jewish sacred texts).

The largest group of the Altaic language family is Turkish. The Turkish language used to be written with Arabic letters, but in 1928 the Turkish government required the use of the Roman alphabet in order to bring the nation’s cultural and economic communications in line with those of its Western European counterparts. Finally, the Uralic language family originated 7,000 years ago near the Ural Mountains in Siberia. All European countries speak Indo-European languages except Estonia, Finland, and Hungary, which speak Uralic languages instead.

The countries that make up Africa have a rich and sophisticated collection of language families. Africa has thousands of languages that have resulted from 5,000 years of isolation between the various tribes. Just as species evolve differently over thousands of years of isolation, Africa’s languages have evolved into various tongues. However, there are three major African language families to focus on. The Niger-Congo language family is spoken by 95 percent of the people in sub-Saharan Africa. Within the Niger-Congo family is Swahili, which is the official language of only 800,000 people but is spoken as a second language by over 30 million Africans. Only a few million people in Africa speak languages from the Nilo-Saharan language family. The Khoisan language family is spoken by even fewer, but is distinctive because of the “clicking sounds” used when it is spoken.

In a world dominated by communication, globalization, science, and the Internet, English has grown to be the dominant global language. Today English is considered a lingua franca (a language mutually understood and commonly used in trade by people who have different native languages). It is now believed that 500 million people speak English as a second language. There are other lingua francas, such as Swahili in Eastern Africa and Russian in nations that were once part of the Soviet Union.

Pidgins and Creoles

Pidgins, also called contact languages, develop out of contact between at least two groups of people who do not share a common language. A pidgin language is usually a mixture of two or more languages, contains simplified grammar and vocabulary, and is used for communication between groups who speak different languages, usually for trading purposes. Pidgins are not first/native languages and are always learned as a second language. Many pidgins developed during the European colonization of Asia, Africa, and other areas of the world during the seventeenth to nineteenth centuries.

Creole languages are stable languages that develop from pidgins. Unlike pidgins, creole languages are primary languages that are nativized by children. Additionally, creoles have their own formal grammar and vocabulary. The grammar of a creole language often has grammatical features that differ from those of both parent languages. However, the vocabulary of a creole is primarily taken from the language of the dominant contact group.

Endangered Languages and Preserving Language Diversity

An isolated language is one that is unrelated to any other language; thus, it cannot be connected to any language family. These isolated languages, and many others, are experiencing a mass extinction and are quickly disappearing from the planet. It is believed that nearly 500 languages are in danger of being lost forever. Think about the language you speak and the knowledge and understanding acquired and discovered through that language. What would happen to all that knowledge if your language suddenly disappeared? Would all of it be transferred to another language, or would major components be lost to time and be rewritten by history? What would happen to your culture if your language was lost to time? Ultimately, is it possible that the Information Age is causing a Dis-information Age as thousands of languages near extinction? An Esri story map on endangered languages illustrates the scale of this crisis.

Consider the impact of language on culture, particularly religion. Most religions have some form of written or literary tradition or history, which allows for information to be transferred to future generations. However, some religions are only transmitted verbally, and when such a culture disappears (which is happening at a frightening rate), so does all of the knowledge and history of that culture.

The Endangered Languages Project serves as an online resource for samples and research on endangered languages, as well as a forum for advice and best practices for those working to strengthen linguistic diversity.

3.4 Geography of World Religions

Origins and Diffusion of World Religions

Our world’s cultural geography is very complex, with language and religion as two cultural traits that contribute to the richness, diversity, and complexity of the human experience. Nowadays, the word “diversity” is gaining a great deal of attention, as nations around the world are becoming more culturally, religiously, and linguistically complex and interconnected. With regard to religion specifically, these cultural institutions are no longer isolated in their places of origin but have diffused into other realms and regions along with their religious history and cultural dominance. In some parts of the world, this has caused religious wars and persecution; in other regions, it has helped initiate cultural tolerance and respect for others.

These trends are, in some ways, the product of a history of migratory push and pull factors, along with demographic changes, that have brought together peoples of diverse religious and even linguistic backgrounds. It is critical that people learn about diverse cultures by understanding important cultural traits, such as the ways we communicate and maintain spiritual beliefs. Geographers need to be aware that even though our discipline might not be able to answer numerous questions related to language structure or address unique aspects of theological opinion, our field can provide insight by studying these cultural traits in a spatial context. In essence, geography provides us with the necessary tools to understand the spread of cultural traits and the role of geographic factors, both physical and cultural, in that process. People will then see that geography has influenced the distribution and diffusion of differing ideologies, as well as the diverse ways people practice their spiritual traditions.

As is the case with languages, geographers have a method of classifying religions so people can better understand the geographic diffusion of belief systems. Although religions are by themselves complex cultural institutions, the primary method for categorizing them is simple. In essence, there are two main groups: universalizing religions, which actively invite non-members to join them, and ethnic religions, which are associated with particular ethnic or national groups. Everyone can recount moments in his or her life in which there was interaction with individuals eager to share their spiritual beliefs and traditions with others. That same person might also have encountered individuals who are very private, perhaps secretive, when it comes to personal religious traditions they deem exclusive to their family or national group. A discussion of these life experiences can generate fascinating examples that serve as testimony to our world’s cultural richness when it comes to different religious traditions.

Origins of World Religions

Most of the world’s largest universalizing religions have a precise hearth, or place of origin, based on events in the life of a founding individual. The hearths where the three largest universalizing religions – Christianity, Islam, and Buddhism – originated are all in Asia, though of course not all religions are from Asia. Each of the three diffused from its specific hearth to other regions of the world, and together they have over 2.5 billion adherents.

Religious Conflict

Religion is often the catalyst of conflict between local values and traditions and the issues and values that come with nationalism or even globalization. Religion tends to represent core beliefs that embody cultural values and identity, which, along with language, often represent local ideology rather than national or international ideology. There are several reasons why, including the following:

  • Culture is often the manifestation of core belief systems determined by the interplay between language and religion.
  • Universal religions try to appeal to the many, whereas ethnic religions focus on the few in a specific region.
  • The cultural landscapes of language and religion are often represented in the physical landscape. When opposing forces threaten the physical landscape, they threaten the cultural landscape as well.
  • Universal religions require the adoption of values that may conflict with local traditions and values. If a universal religion is forced upon adherents of another universal religion or an ethnic religion, conflict may ensue.
  • Migrants tend to learn and adopt the language of the region they migrate to, but keep the religion they brought with them. The people of the receiving region can view this as a threat.

Types of World Religions

The major religions of the world (Hinduism, Buddhism, Islam, Confucianism, Christianity, Taoism, and Judaism) differ in many respects, including how each religion is organized and the belief system each upholds. Other differences include the nature of belief in a higher power, the history of how the world and the religion began, and the use of sacred texts and objects.

Religious Organizations

Religions organize themselves – their institutions, practitioners, and structures – in a variety of fashions. For instance, when the Roman Catholic Church emerged, it borrowed many of its organizational principles from the ancient Roman military, turning senators into cardinals, among other adaptations. Human geographers and sociologists use different terms, like ecclesia, denomination, and sect, to define these types of organizations. Scholars are also aware that these definitions are not static. Most religions transition through different organizational phases. For example, Christianity began as a cult, transformed into a sect, and today exists as an ecclesia.

Cults, like sects, are new religious groups. In the United States today this term often carries pejorative connotations. However, almost all religions began as cults and gradually progressed to levels of greater size and organization. The term cult is sometimes used interchangeably with the term new religious movement (NRM). In its pejorative use, these groups are often disparaged as being secretive, highly controlling of members’ lives, and dominated by a single, charismatic leader.

A sect is a small and relatively new group. Most of the well-known Christian denominations in the United States today began as sects. For example, the Methodists and Baptists protested against their parent Anglican Church in England, just as Henry VIII protested against the Catholic Church by forming the Anglican Church. From “protest” comes the term Protestant.

Occasionally, a sect is a breakaway group that may be in tension with the larger society. They sometimes claim to be returning to “the fundamentals” or to contest the veracity of a particular doctrine. When membership in a sect increases over time, it may grow into a denomination. Often a sect begins as an offshoot of a denomination, when a group of members believes they should separate from the larger group.

Some sects never grow into denominations but nonetheless persist; social scientists call these established sects. Established sects, such as the Amish or Jehovah’s Witnesses, fall halfway between sect and denomination on the ecclesia–cult continuum because they have a mixture of sect-like and denomination-like characteristics.

A denomination is a large, mainstream religious organization, but it does not claim to be official or state-sponsored. It is one religion among many. For example, Baptist, African Methodist Episcopal, Catholic, and Seventh-day Adventist are all Christian denominations.

The term ecclesia, initially referring to a political assembly of citizens in ancient Athens, Greece, now refers to a congregation. In geography, the term is used to refer to a religious group to which most, if not all, members of a society belong. It is considered a nationally recognized, or official, religion that holds a religious monopoly and is closely allied with state and secular powers. The United States does not have an ecclesia by this standard; in fact, this is the type of religious organization that many of the first colonists came to America to escape.

One way to remember these religious organizational terms is to think of cults, sects, denominations, and ecclesia representing a continuum, with increasing influence on society, where cults are least influential, and ecclesia are most influential.
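
Since this continuum is simply an ordering, it can also be made concrete with a brief code sketch. The Python fragment below is purely illustrative and not part of the original text; it models nothing beyond the rank of each organizational type by its influence on society.

from enum import IntEnum

# A minimal sketch of the cult -> sect -> denomination -> ecclesia continuum.
# Only the ordering (increasing influence on society) is modeled here.
class ReligiousOrganization(IntEnum):
    CULT = 1          # least influence on society
    SECT = 2
    DENOMINATION = 3
    ECCLESIA = 4      # most influence on society

# Comparisons follow the continuum's ordering.
assert ReligiousOrganization.CULT < ReligiousOrganization.DENOMINATION
assert max(ReligiousOrganization) is ReligiousOrganization.ECCLESIA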

Scholars from a variety of disciplines have strived to classify religions. One widely accepted categorization that helps people understand different belief systems considers what or whom people worship (if anything). Using this method of classification, religions fall into basic categories such as monotheism, polytheism, animism, and totemism, described below.

Note that some religions may be practiced – or understood – in various categories. For instance, the Christian notion of the Holy Trinity (God, Jesus, Holy Spirit) defies the definition of monotheism, a religion based on a belief in a single deity, to some scholars. Similarly, many Westerners view the multiple manifestations of Hinduism’s godhead as polytheistic, reflecting a belief in multiple deities, while Hindus might describe those manifestations as a monotheistic parallel to the Christian Trinity. Some Japanese practice Shinto, a form of animism, a belief system that holds nonhuman beings – animals, plants, and objects of the natural world – to be divine, while people who practice totemism believe in a divine connection between humans and other natural beings.

It is also important to note that every society also has nonbelievers, such as atheists, who do not believe in a divine being or entity, and agnostics, who hold that ultimate reality (such as God) is unknowable. While typically not an organized group, atheists and agnostics represent a significant portion of the population. It is essential to recognize that being a nonbeliever in a divine entity does not mean the individual subscribes to no morality. Indeed, many Nobel Peace Prize winners and other great humanitarians over the centuries would have classified themselves as atheists or agnostics.

Religions have emerged and developed across the world. Some have been short-lived, while others have persisted and grown. In this section, we will explore seven of the world’s major religions.

Hinduism

The oldest religion in the world, Hinduism originated in the Indus River Valley about 4,500 years ago in what is now modern-day northwest India and Pakistan. It arose contemporaneously with ancient Egyptian and Mesopotamian cultures. With roughly one billion followers, Hinduism is the third-largest of the world’s religions. Hindus believe in a divine power that can manifest as different entities. Three main incarnations—Brahma, Vishnu, and Shiva—are sometimes compared to the manifestations of the divine in the Christian Trinity.

Multiple sacred texts, collectively called the Vedas, contain hymns and rituals from ancient India and are mostly written in Sanskrit. Hindus generally believe in a set of principles called dharma, which refers to one’s duty in the world that corresponds with “right” actions. Hindus also believe in karma, or the notion that spiritual ramifications of one’s actions are balanced cyclically in this life or a future life (reincarnation).

Buddhism

Buddhism was founded by Siddhartha Gautama around 500 B.C.E. Siddhartha was said to have given up a comfortable, upper-class life to follow one of poverty and spiritual devotion. At the age of thirty-five, he famously meditated under a sacred fig tree and vowed not to rise before he achieved enlightenment (bodhi). After this experience, he became known as Buddha, or “enlightened one.” Followers were drawn to Buddha’s teachings and the practice of meditation, and he later established a monastic order.

Buddha’s teachings encourage Buddhists to lead a moral life by accepting the four Noble Truths: 1) life is suffering, 2) suffering arises from attachment to desires, 3) suffering ceases when attachment to desires ceases, and 4) freedom from suffering is possible by following the “middle way.” The concept of the “middle way” is central to Buddhist thinking, which encourages people to live in the present and to practice acceptance of others (Smith 1991). Buddhism also tends to deemphasize the role of a godhead, instead stressing the importance of personal responsibility (Craig 2002).

Confucianism

Confucianism was the official religion of China from 200 B.C.E. until 1949, when the new communist leadership discouraged religious practice. The religion was developed by Kung Fu-Tzu (Confucius), who lived in the sixth and fifth centuries B.C.E. Confucius was an extraordinary teacher; his lessons, which taught self-discipline, respect for authority and tradition, and jen (the kind treatment of every person), were collected in a book called the Analects.

Some religious scholars consider Confucianism more of a social system than a religion because it focuses on sharing wisdom about moral practices but does not involve any specific worship; nor does it have formal objects of worship. Its teachings were developed in the context of problems of social anarchy and a near-complete deterioration of social cohesion. Dissatisfied with the social solutions put forth, Kung Fu-Tzu developed his model of religious morality to help guide society (Smith 1991).

Taoism

In Taoism, the purpose of life is inner peace and harmony. Tao is usually translated as “way” or “path.” The founder of the religion is generally recognized to be a man named Laozi, who lived sometime in the sixth century B.C.E. in China. Taoist beliefs emphasize the virtues of compassion and moderation.

The central concept of tao can be understood to describe a spiritual reality, the order of the universe, or the way of modern life in harmony with the former two. The yin-yang symbol and the concept of polar forces are central Taoist ideas (Smith 1991). Some scholars have compared this Chinese tradition to its Confucian counterpart by saying that “whereas Confucianism is concerned with day-to-day rules of conduct, Taoism is concerned with a more spiritual level of being” (Feng and English 1972).

Judaism

After their Exodus from Egypt in the thirteenth century B.C.E., Jews, a nomadic society, became monotheistic, worshipping only one God. The Jews’ covenant, or promise of a special relationship with Yahweh (God), is an essential element of Judaism, and their sacred text is the Torah, which Christians also follow as the first five books of the Bible. The Talmud refers to a collection of sacred Jewish oral interpretations of the Torah. Jews emphasize moral behavior and action in this world as opposed to beliefs or personal salvation in the next world.

Islam

Probably one of the most misunderstood religions in the world is Islam. Though predominantly centered in the Middle East and Northern Africa, Islam is the fastest-growing religion in the world; with 1.3 billion followers, it is second only to Christianity in members. Islam is divided into two major branches: Sunni and Shiite. The Sunni branch is the largest, comprising about 83 percent of all Muslims. The Shiite branch is more concentrated in clusters such as Iran, Iraq, and Pakistan.

Islam is a monotheistic religion that follows the teachings of the prophet Muhammad, born in Mecca, Saudi Arabia, in 570 C.E. Muhammad is seen only as a prophet, not as a divine being, and he is believed to be the messenger of Allah (God), who is divine. The followers of Islam, whose U.S. population is projected to double in the next twenty years (Pew Research Forum 2011), are called Muslims.

Islam means “peace” and “submission.” The sacred text for Muslims is the Qur’an (or Koran). As with Christianity’s Old Testament, many of the Qur’an stories are shared with the Jewish faith. Divisions exist within Islam, but all Muslims are guided by five beliefs or practices, often called “pillars”: 1) Allah is the only god, and Muhammad is his prophet, 2) daily prayer, 3) helping those in poverty, 4) fasting as a spiritual practice, and 5) pilgrimage to the holy center of Mecca.

In Western nations, the primary loyalty of the population is to the state. In the Islamic world, however, loyalty to a nation-state is trumped by dedication to religion and loyalty to one’s family, extended family, tribal group, and culture. In regions dominated by Islam, tribalism and religion play determining roles in the operation of social, economic, cultural, and political systems. As a result, the nation states within the Islamic civilization are weak and generally ineffectual. Instead of nationalism, Muslims are far more interested in identifying with “ummah,” (Islamic civilization).

Furthermore, despite the lack of a core Islamic state, the leaders of many Muslim nations created the Organization of the Islamic Conference in 1969 in order to foster a sense of solidarity between Muslim states. Almost all nations with large Muslim populations are now members of the organization. Additionally, some of the more powerful Muslim states have sponsored the World Muslim Conference and the Muslim League to bring Muslims together in a unified block.

It is instructive to notice that the concept of ummah rests on the notion that nation-states are the illegitimate children of Western Civilization, designed to further Western interests at the expense of others. Currently, Islamic Civilization has no identifiable core state, but nations such as Iran, Turkey, and Saudi Arabia could assume that role in the future.

It is common for Americans to suggest that they do not have a problem with Islam; only Islamic extremists. Huntington, however, argues that the lessons of history demonstrate the opposite. In fact, over the last fourteen hundred years, Christians and Muslims have almost always had stormy relations. After Muslims took control of North Africa, Iberia, the Middle East, Persia, and Northern India in the seventh and eighth centuries, relatively peaceful boundaries between Islam and Christendom existed for about two hundred years. In 1095, however, Christian rulers launched the Crusades to regain control of the “Holy Land.” Despite some successes, they were eventually defeated in 1291. Not long after this, the Ottoman Empire spread Islam into Byzantium, North Africa, the Balkans, and other parts of Europe. The Ottomans eventually laid siege to Vienna, and for many years, Europe was under constant threat from Islamic forces. In the fifteenth century, Christians regained control of Iberia, and the Russians brought an end to Tatar rule. In 1683, the Ottomans again attacked Vienna but were defeated, and from that time on, the people of the Balkans sought to rid themselves of Ottoman rule. By the beginning of World War I, the Ottoman Empire was referred to as the “sick man of Europe.” By 1920, only four Islamic countries (Turkey, Saudi Arabia, Iran, and Afghanistan) were free of non-Muslim rule.

As Western colonialism began to wane in the twentieth century, the populations of about forty-five independent states were solidly Muslim. The independence of these Muslim nations was accompanied by a great deal of violence: half of the wars that occurred between 1820 and 1929 involved battles between Muslims and Christians. The conflicts were primarily products of two very different points of view. Whereas Christians believe in the separation of Church and State (God and Caesar), Muslims view religion and politics as one and the same. Additionally, both Christians and Muslims hold a universalistic view. Each believes that it is the one “true faith,” and both (to one extent or another) believe that they should convert others to their faith.

In addition to the importance of the religious foundations of the Western and Islamic Civilizations, practical, real-world factors also play important roles. For example, Muslim population growth has created large numbers of unemployed, angry youth who have been regularly recruited to Islamic causes. Furthermore, the resurgence of Islam has provided Muslims with confidence in the worth of their civilization relative to the West. Western policies and actions over the last century have also played a significant role in cracking the fault line between Islam and Christendom. From the Islamic point of view, the West (particularly the United States) has meddled in the internal affairs of the Islamic world far too often, and for far too long.

Huntington is convinced the Western and Islamic Civilizations are in for many years, perhaps more than a century, of conflict and tension. He points out that Muslims are growing increasingly anti-Western while at the same time, people in the Western Civilization are increasingly concerned about the intentions (and excesses) of modern Islamic states such as Iran. Europeans express a growing fear of (and impatience with) fundamentalist Muslims who threaten them with terrorist attacks. They are also growing weary of Islamic immigrants who refuse to adhere to European traditions, and in some cases, laws.

Huntington does not mince words. He boldly states, “…the underlying problem for the West is not Islamic fundamentalism. It is Islam; a different civilization whose people are convinced of the superiority of their culture, and are obsessed with the inferiority of their power.” He goes on to add, “…the problem for Islam is not the CIA or the U.S. Department of Defense. It is the West; a different civilization whose people are convinced of the universality of their culture, and believe that their superior, if declining, power imposes on them an obligation to extend that culture throughout the world.” From Huntington’s perspective, these differences will fuel conflict between Western and Islamic cultures for many years to come.

Many Western leaders do not agree with Huntington’s view. Instead, they argue that Americans need not fear Islam; only radical Islam. They point to the millions of Muslims living throughout the world in peace with their non-Muslim neighbors. If, they reason, Islam were indeed a religion of war and conquest, why is it that millions of Muslims lead peaceful lives? Instead of applying a negative stereotype to all Muslims, they believe our national security would be better served by making a greater effort to understand the motivations and goals of radical fundamentalists. In a sense, they are calling for in-depth cultural studies that will lead to accurate cultural intelligence about the nature of Islamic terrorists; simply branding all Muslims as potential terrorists is, to those who do not agree with Huntington, simplistic and dangerous.

Christianity

Today the largest religion in the world, Christianity began 2,000 years ago in Palestine, with Jesus of Nazareth, a charismatic leader who taught his followers about caritas (charity) or treating others as you would like to be treated yourself.

The sacred text for Christians is the Bible. While Jews, Christians, and Muslims share many of the same historical religious stories, their beliefs diverge. In their shared sacred stories, it is suggested that the son of God—a messiah—will return to save God’s followers. While Christians believe that he already appeared in the person of Jesus Christ, Jews and Muslims disagree. While they recognize Christ as a prominent historical figure, their traditions do not hold that he is the son of God, and their faiths see the prophecy of the Messiah’s arrival as not yet fulfilled.

Different Christian groups have variations among their sacred texts. For instance, Mormons, an established Christian sect, also use the Book of Mormon, which they believe details other parts of Christian doctrine and Jesus’ life that are not included in the Bible. Similarly, the Catholic Bible includes the Apocrypha, a collection that, while part of the 1611 King James translation, is no longer included in Protestant versions of the Bible. Although monotheistic, Christians often describe their god through three manifestations that they call the Holy Trinity: the father (God), the son (Jesus), and the Holy Spirit. The Holy Spirit is a term Christians often use to describe the religious experience, or how they feel the presence of the sacred in their lives. One foundation of Christian doctrine is the Ten Commandments, which decry acts considered sinful, including theft, murder, and adultery.

Holy Religious Places

Places that contributed in some way to the foundation and development of a faith frequently gain sacred status, whether through the presence of a natural site ascribed as holy, by serving as the stage for miraculous events, or through a historical event such as the erection of a temple. When a place gains that “sacred” reputation, it is not unusual to see people from different parts of the world traveling, or making a pilgrimage, to the site in the hope of experiencing spiritual and physical renewal.

Buddhists recognize eight holy sites, each associated with special meaning or essential events in the Buddha’s life. The first is in Lumbini, Nepal, where the Buddha was born around 563 B.C.E. The second holy site is in Bodh Gaya, India, where it is believed Siddhartha reached enlightenment and became the Buddha. The third most important site is in Sarnath, India, where he gave his first sermon. The fourth holiest site is Kushinagar, India, where the Buddha died at the age of 80 and attained parinirvana (final nirvana). The other four holy sites are places where the Buddha performed or experienced specific miracles. People who practice Buddhism or Shintoism erect and use pagodas to house relics and sacred texts. Pagodas are also used for individual prayer and meditation.

Islam’s holiest sites are located in Saudi Arabia. The holiest city is Mecca, where the Prophet Muhammad was born. It is also the location of the religion’s holiest object, the Ka’ba, a cube-like structure believed to have been built by Abraham and Ishmael. The second holiest site for Muslims is Medina, Saudi Arabia, where Muhammad began his leadership and gained initial support from the people. Every healthy and financially able Muslim is expected to make at least one pilgrimage to Mecca in their lifetime. For Muslims, a mosque is a holy site of worship, but also a place for community assembly. Usually arranged around a courtyard, the pulpit faces Mecca so that all Muslims pray toward their holiest site. Mosques have a tower called a minaret from which someone summons people to worship.

Derived from a Greek word meaning lord or master, a Christian church is a place of gathering and worship. Compared with the holy structures of some other religions, churches play an especially prominent role because they are built to express values and principles. Churches are also vital features of the landscape; in earlier days and in smaller towns, they tended to be the most significant buildings. Because of their importance, Christian denominations devote considerable money and commitment to the building and maintenance of their churches.


Chapter 2: Population and Migration

Understanding how the human population is organized geographically helps students make sense of cultural patterns, the political organization of space, food production issues, economic development concerns, natural resource use and decisions, and urban systems. Additionally, course themes of location, space, place, the scale of analysis, and pattern can be emphasized when studying fundamental population issues such as crude birth rates, crude death rates, total fertility rate, infant mortality rates, doubling time, and natural increase.

Explanations of why the population is growing or declining in some places are based on patterns and trends in fertility, mortality, and migration. Analyses of refugee flows, immigration, and internal migration help us understand the connections between population phenomena. For example, environmental degradation and natural hazards may prompt population redistribution at various scales, which in turn creates new pressures on the environment, culture, and political institutions.

This module analyzes population trends across space and time as ways to consider models of population growth and decline, including Malthusian theory, the demographic transition model, and the epidemiological (mortality) transition model.

2.1 Population

Geographers study where and why people live in particular locations. Neither people nor resources are distributed uniformly across Earth. In regards to population growth, geographers emphasize three elements: population size, the rate of increase of the world population, and the unequal distribution of population growth. Geographers seek to explain why these patterns exist.

The subject of overpopulation can be highly divisive, given the deep personal views that many people hold. Human geography emphasizes a geographic perspective on population growth as a relative concept. Human-environment interaction and overpopulation can be discussed in the contexts of carrying capacity, the availability of Earth’s resources, as well as the relationship between people and resources.

The study of the human population has never been more critical than it is today. There are over 7 billion people on the planet, but the majority of this growth has occurred in the last 100 years, mostly in developing nations. Humans do not live uniformly around the world, but rather in clusters shaped by Earth’s physical geography. Environments that are too dry, wet, cold, or mountainous create a variety of limiting factors for humans. Two-thirds of the world’s population is located within three significant clusters: East Asia (China), South Asia (India and Indonesia), and Europe, with the majority in East and South Asia.

Demographers, the scientists who study population issues, and other researchers say there is more to the story than pure population growth. Ecologists believe that humans have outgrown the Earth’s carrying capacity: there are not enough of the world’s resources to give every human the standard of living expected by most Americans. If all the people on the planet lived the average American lifestyle, it would require over three Earths. At that level of consumption, the Earth cannot sustain a population of 7 billion, yet we are expected to reach 9 billion by 2100.

Distribution of the World’s Population

Economist Jeffrey Sachs, director of the Earth Institute at Columbia University, believes that there are two reasons why the global population and extreme poverty occur where they do:

  • Capitalism distributes wealth to nations better than socialism or communism
  • Geography is a significant factor in population distribution in relationship to wealth

For example, the population tends to be lower in extreme environments such as arid climates, rainforests, and polar or mountainous regions. Conversely, a nation that has a large body of water within its boundaries, or that has large mineral deposits or other resources, is likely to have more wealth and a larger population.

Humans only occupy five percent of the Earth’s surface because oceans, deserts, rainforests, and glaciers cover much of the planet. The term for areas where humans permanently settle is ecumene. Population growth and technology dramatically increase the ecumene of humans, which affects the world’s ecosystems.

It is argued that the world cannot support all the humans on the planet. On some level, that’s true, and on another, it is not. For example, we could pack all 7 billion humans in California, but that is not desirable, sanitary, or sustainable. The reality is that humans cannot live in many parts of the world due to moisture, temperature, or growing season issues. For example, 20 percent of the world is too dry to support humans. This mostly has to do with high-pressure systems around 30 degrees north and south of the equator where constant sunny conditions have created some of the world’s largest deserts. Some of these include the Sahara, Arabian Peninsula, Thar, Takla Makan, and Gobi deserts. Most deserts do not provide enough moisture to support agriculture for large populations.

Regions that receive too much moisture also cause problems for human settlement. These are the tropical rainforest regions located between the Tropic of Cancer (23.5 degrees North) and the Tropic of Capricorn (23.5 degrees South). The problem in these regions is soil leaching caused by high precipitation: nutrients in the soil are quickly washed away, greatly hindering agricultural production. This is partly why slash-and-burn agriculture occurs in these regions. Locals burn part of the forest to put nutrients back into the ground. This only works for a short period because the precipitation washes away nutrients within a few years, so farmers move on to other parts of the forest with their slash-and-burn practices.

Additionally, regions that are too cold pose problems for large population clusters and food production. The cold polar regions have a short growing season, and many polar regions have limited amounts of moisture because they are covered by high-pressure systems (much like the desert regions). Thus, cold polar regions are defined by temperature and lack of moisture, despite access to snow, ice, and glaciers. Mountainous and highland regions lack population clusters due to steep slopes, snow and ice cover, and short growing seasons.

Population Profiles

Demographers use various ways to measure and analyze population density. The arithmetic density of a population, also called population density, is the total number of people in proportion to the area of land. This may not be the best indicator of actual population pressure because there are many environments humans cannot live in comfortably, including deserts, arctic regions, tropical forests, and mountainous regions. It also does not consider whether the land is used for producing food. The physiological density of a population is the total population in proportion to the area of arable land suited for agriculture. Even more specifically, agricultural density refers to the number of farmers compared to the area of arable land. A high agricultural density suggests that the arable land and the farmers who work it are reaching their productive limit for that region; if the demand for food continues to rise, the risk is that there will not be enough arable land to feed the population. In contrast, an area with a low agricultural density has a higher potential for agricultural production. Economically, a low agricultural density is favorable for future growth.

To understand these methods, let us look at an example. Say we have City X, which is home to 10,000 people, 6,000 of whom are farmers, and which has a total area of 10,000 square kilometers, 4,000 of which are farmable. If we look at the arithmetic density, we come up with a population density of 1 person per square kilometer (10,000 people / 10,000 square kilometers). If we look at the agricultural density, we come up with 1.5 farmers per square kilometer (6,000 farmers / 4,000 square kilometers of farmable land).

Finally, if we look at the physiological density, we come up with 2.5 people per square kilometer (10,000 people / 4,000 square kilometers of farmable land). Each of these numbers tells us something different.

Of these three methods, physiological density is considered the best way to measure population density because it is most reflective of population pressure on arable land. Arable land is any land that is suitable for growing crops. The higher the physiological density, the faster the arable land will be used up or reach its output limit, meaning there will not be enough land for the people coming into the area. In our example, if 100,000 more people moved to the same area, we would end up with a physiological density of 27.5 people per square kilometer (110,000 people / 4,000 square kilometers of farmable land).
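
Because all three measures are simple ratios, the City X example can be checked with a few lines of code. The minimal Python sketch below uses only the hypothetical figures from the example above.

    # Density measures for the hypothetical City X from the text.

    def arithmetic_density(population, total_area_km2):
        """People per square kilometer of total land."""
        return population / total_area_km2

    def physiological_density(population, arable_area_km2):
        """People per square kilometer of arable land."""
        return population / arable_area_km2

    def agricultural_density(farmers, arable_area_km2):
        """Farmers per square kilometer of arable land."""
        return farmers / arable_area_km2

    population, farmers = 10_000, 6_000       # City X residents and farmers
    total_area, arable_area = 10_000, 4_000   # square kilometers

    print(arithmetic_density(population, total_area))      # 1.0
    print(agricultural_density(farmers, arable_area))      # 1.5
    print(physiological_density(population, arable_area))  # 2.5
    print(physiological_density(110_000, arable_area))     # 27.5 after in-migration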

A useful tool used by scientists that focus on demographics is a population profile, also called a population pyramid. A population profile visually demonstrates a particular region’s demographic structure concerning males and females and is often expressed in numbers or percentages.

The following are some characteristics of population profiles (a simple text rendering of one such profile follows the list):

  • A bell-shaped graph will indicate that a country has experienced high population growth in the past but is experiencing a slight decrease.
  • Narrow triangles show countries with high population growth.
  • As a country’s population boom begins to age, a strange profile shape can develop with a broader top and a narrower base.
  • Populations that have stabilized have profiles in which the bulge of past high birth rates moves gradually into the older age groups, while the base narrows moderately rather than dramatically.
  • When a country has a large immigrant population, specifically “guest workers” that usually tend to be men, the male side of the graph will be dramatically wider than the women’s side of the graph.
  • If a country has experienced war, a catastrophic disaster, or a genocide that eliminates an entire generation, that generation will have a smaller number or percent than the generations before or after. For example, a significant war may cause a reduction in populations in their mid-20s and 30s, which would appear on the profile graph.
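
A population profile is easy to visualize even without mapping software. The minimal Python sketch below prints a rough text pyramid; the cohort percentages are invented for illustration and describe the narrow-triangle, high-growth shape named in the list above.

    # Text rendering of a hypothetical high-growth population profile.
    # Each tuple: (age group, percent male, percent female).
    cohorts = [
        ("65+",   2.0,  2.5),
        ("45-64", 5.0,  5.5),
        ("25-44", 9.0,  9.0),
        ("15-24", 12.0, 12.0),
        ("0-14",  18.0, 18.5),
    ]

    for age, male, female in cohorts:
        left = ("#" * round(male)).rjust(20)   # males on the left
        right = "#" * round(female)            # females on the right
        print(f"{left} |{age:^7}| {right}")

The wide base and narrow top this prints is the classic shape of a country with high birth rates; a guest-worker imbalance or a lost wartime generation would show up as an asymmetry or a notch in the corresponding rows.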

Global Population Trends

A region’s population will grow as long as its crude birth rate is higher than its crude death rate. The crude birth rate (CBR) is the total number of live births for every 1,000 people in a given year. So, a crude birth rate of 10 would mean ten babies are born every year for every 1,000 people in that region. The crude death rate (CDR) is the total number of deaths per 1,000 people in a given year.

When comparing CBRs to CDRs, a region’s natural increase rate can be determined. The natural increase rate (NIR) is the percentage by which a population grows per year, excluding migration. A total fertility rate of about 2.1 children per woman (discussed below) is usually required to maintain or stabilize a region’s population; any more than that and the population will grow, and any less causes population contraction. The replacement figure is 2.1 rather than 2.0 because not every person will pair up and have a child, whether because of genetics, choice, or death before the childbearing years. Once we know the NIR, we can determine the doubling time. Doubling time is how many years it would take for a defined population to double in size, assuming the NIR stays the same over time. Currently, about 82 million people are added to the world’s global population every year.
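
The relationship between these rates is simple arithmetic, so it can be sketched in a few lines. The Python below uses hypothetical rates; the doubling time uses the standard "rule of 70" approximation (70 divided by the growth rate in percent, which follows from 100 × ln 2 ≈ 69.3).

    # Natural increase rate and doubling time from hypothetical rates.

    def natural_increase_rate(cbr, cdr):
        """Percent growth per year, from births and deaths per 1,000 people."""
        return (cbr - cdr) / 10   # difference per 1,000 -> percent

    def doubling_time_years(nir_percent):
        """Approximate years for a population to double (rule of 70)."""
        return 70 / nir_percent

    cbr, cdr = 20, 8                    # hypothetical per-1,000 rates
    nir = natural_increase_rate(cbr, cdr)
    print(nir)                          # 1.2 (percent per year)
    print(doubling_time_years(nir))     # about 58 years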

Key Factors Influencing Population Change

Three key factors to understand when trying to predict or analyze population change are the total fertility rate, infant mortality rate, and life expectancy at birth. The total fertility rate (TFR) is the average number of children a woman would be expected to have during her childbearing years (between ages 15 and 49). The global average TFR is about 2.5, but in less developed countries, it is as high as 5.0 or more, and in more developed countries, it is as low as 2.0 or less. Fertility patterns can vary widely within countries. Racial and ethnic minorities may have higher fertility rates than the majority, and families with low incomes or low levels of education typically have more children than those who are affluent or well-educated. Women who work outside the home typically have fewer children than those who stay home, and rural families tend to have more children than city dwellers. In 2016, the number of births per 1,000 people worldwide was 20, with extremes ranging from a low of 8 or 9 (mainly in Northern and Western Europe and Hong Kong) to 60 or more in a few West African nations (Population Reference Bureau, 2016 World Population Data Sheet, pp. 10-19).

Mortality is the second significant variable that shapes population trends. A population’s age structure is an essential factor influencing its death rate. Death rates are highest among infants, young children, and the elderly, so societies with many older adults are likely to have more deaths per 1,000 people than those where most citizens are young adults. Developed countries with excellent medical services have more people in older age brackets than developing countries, so the developed societies can have higher death rates even though they are healthier places to live overall. The infant mortality rate (IMR) is calculated as the number of children who die before the age of 1 per 1,000 live births annually. The highest IMRs are in less developed countries, where rates can reach 80 or more; in Europe, by contrast, the rate is as low as 5 per 1,000 live births.

Life expectancy at birth is straightforward—it is an average of how many years a newborn is expected to live, assuming that mortality rates stay consistent. In more developed countries, the average life expectancy is over 80 years, while in the least developed countries it is decades lower. When we compare CBRs, CDRs, and TFRs, we find that the world has a large population of youth, with the most substantial percentage in less developed countries. This places high stress on the education systems and, to some extent, the health care systems of poorer countries. More developed countries, by contrast, tend to have older demographics, which stresses their health care systems and social safety nets. The dependency ratio, discussed later in this chapter, is used to understand these stresses; it compares the number of people who are too young or too old to work with the number of people in their “productive years.” The larger the ratio, the greater the economic stress on those nations.

2.2 Demographic Transition Model

Human geographers have determined that all nations go through a four-stage process called the demographic transition model (DTM). Developed in 1929 by American demographer Warren Thompson, the DTM demonstrates the natural sequence of population change over time, depending on development and modernization. This helps geographers and other scientists examine the causes and consequences of fertility, mortality, and natural increase rates. Though controversial, the DTM is used as the benchmark for forecasting human population growth regionally and globally.

Stage 1: Low Growth Rate

Humanity lived in the first stage of the demographic transition model for most of its existence. In this first stage, CBRs and CDRs fluctuated significantly over time because of living conditions, food output, environmental conditions, war, and disease. However, the natural increase of the world was fairly stable because CBRs and CDRs were about equal. Around 8,000 B.C.E., the world’s population began to grow dramatically due to the first agricultural revolution. During this time, humans learned to domesticate plants and animals for personal use and became less reliant on hunting and gathering for sustenance. While this transition allowed food production to stabilize and village populations to grow, war and disease prevented population growth from occurring on a global scale.

Stage 2: High Growth Rate

Around the mid-1700s, global populations began to grow ten times faster than in the past for two reasons: the Industrial Revolution and increased wealth. The Industrial Revolution brought with it a variety of technological improvements in agricultural production and food supply. Increased wealth in Europe, and later North America, meant that more money and resources could be devoted to medicine, medical technology, water sanitation, and personal hygiene. Sewer systems installed in cities led to public health improvements. All of this caused CDRs to drop dramatically around the world. At first, CBRs stayed high as CDRs dropped, causing populations to increase in Europe and North America. Over time, this would change.

Africa, Asia, and Latin America moved into Stage 2 of the demographic transition model 200 years later, and for different reasons than their European and North American counterparts. Medicine created in Europe and North America was brought into these emerging nations, creating what is now called the medical revolution. This diffusion of medicine caused death rates to drop quickly. While the medical revolution reduced death rates, it did not bring with it the wealth, improved living conditions, and development that the Industrial Revolution created. Global population growth is highest in the regions that are still in Stage 2.

Stage 3: Moderate Growth Rate

Today, Europe and North America have moved to Stage 3 of the demographic transition model. A nation moves from Stage 2 to Stage 3 when CBRs begin to drop while CDRs remain low or even continue to fall. The natural rate of increase in Stage 3 nations is moderate because CBRs are somewhat higher than CDRs. The United States, Canada, and the countries of Europe entered this stage in the early 20th century. Latin American nations entered this stage later in the century.

Advances in technology and medicine cause a decrease in IMR and overall CDR during Stage 2. Social and economic changes bring about a reduction in CBR during Stage 3. Nations that begin to acquire wealth tend to have fewer children as they move away from rural-based development structures toward urban-based structures, because more children survive and the need for large families for agricultural work decreases. Additionally, women gain more legal rights and choose to enter the workforce, own property, and have fewer children as nations move into Stage 3.

Stage 4: Low Growth Rate

A nation enters Stage 4 of the demographic transition model when its CBR equals or falls below its CDR. When CBRs are equal to CDRs, a nation experiences zero population growth (ZPG). Note that a nation can have a slightly higher CBR and still experience ZPG; this occurs in countries where, because of gender inequality, many girls do not live to reach their childbearing years.

When a country enters Stage 4, the population ages while fewer children are born. This creates an enormous strain on the social safety net programs of a country as it tries to support older citizens who are no longer working and contributing to the economy. Most of Europe has entered Stage 4. The United States would be approaching this stage if it were not for migration into the country.

A nation in the first two stages of the transition model will have a broad base of young people and a smaller proportion of older people. A country in Stage 4 will have a much smaller base of young people (fewer children) but a much larger population of elderly (decreased CDR). A nation with a large youth population is more likely to be rural, with high birth rates and possibly high death rates; this can tell geographers a lot about the health care system of that nation. Conversely, a country in Stage 4 with a large elderly population will have far fewer young people supporting the economy. Both situations are captured by the dependency ratio, mentioned earlier in this chapter: the number of people, young and old, who are dependent on the working force.

Human geographers like to focus on the following demographic groups: 0-14 years old, 15-64 years old, and 65 and older. Individuals who are 0-14 or over 65 are considered dependents (though this is changing in older generations). One-third of all young people live in emerging nations, and this places considerable strain on those nations’ infrastructure, such as schools, hospitals, and day care. Older individuals in more developed nations benefit from health care services but require more help and resources from the government and economy. The author of this textbook uses the term “emerging nations,” rather than “less developed,” “developing,” or “third-world” nations, as a more inclusive and equitable term.

Another ratio geographers look at is the number of males compared to females, called the sex ratio. Globally, more males are born than females, but males also have a higher death rate than females. Understanding a nation’s sex ratio along with its dependency ratio helps human geographers analyze fertility rates and natural increase. Both ratios can be computed directly from census counts, as sketched below.
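
The following minimal Python sketch computes both ratios from the age brackets named above. The population counts are invented for illustration; the dependency ratio is conventionally expressed per 100 working-age people, and the sex ratio per 100 females.

    # Dependency ratio and sex ratio from hypothetical census counts.

    def dependency_ratio(young, working_age, elderly):
        """Dependents (ages 0-14 and 65+) per 100 people aged 15-64."""
        return 100 * (young + elderly) / working_age

    def sex_ratio(males, females):
        """Males per 100 females."""
        return 100 * males / females

    print(dependency_ratio(young=30_000, working_age=55_000, elderly=15_000))  # ~81.8
    print(sex_ratio(males=50_500, females=49_500))                             # ~102.0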

As noted earlier, population growth has increased dramatically in the last century. No country is still in Stage 1, and very few have moved into Stage 4. The majority of the world is either in Stage 2 or 3, both having higher crude birth rates than crude death rates; therefore, the world’s population is over 7 billion today.

In summary, the demographic transition model is a model that helps human geographers understand and predict the demographics of individual nations. In Stage 1, CBR and CDR are very high and thus produce a low natural increase. In Stage 2, a nation’s CBR stays relatively high, but the CDR drops dramatically, producing the highest growth in population. In Stage 3, CDR stays low; however, changes in social customs and economic conditions result in a moderately low CBR. Finally, nations in Stage 4 have nearly equal CBR and CDR (sometimes higher CDR), creating a drop in natural increase.

2.3 Overpopulation

In 1798, Thomas Malthus published a short but revolutionary work called “An Essay on the Principle of Population.” In that essay, Malthus stated that future population growth would be determined by two facts and one opinion. The facts were that food is necessary for survival and that men and women would continue to produce offspring. His opinion was that, if not restrained by war, famine, and disease, population would grow exponentially, while agricultural production of food could only grow arithmetically. The overall conclusion is that population growth would quickly outpace food production, leading to food shortages and famines.
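
Malthus’s core claim is a contrast between two growth curves, which a few lines of code make concrete. In the minimal Python sketch below, the starting values and rates are invented for illustration: the population index doubles each generation (geometric growth) while the food supply index gains a fixed increment (arithmetic growth).

    # Malthus's contrast: geometric population growth vs. arithmetic
    # food growth. Index values and rates are hypothetical.
    population = 100   # population index at generation 0
    food = 100         # food supply index at generation 0

    for generation in range(6):
        print(f"gen {generation}: population={population}, food={food}")
        population *= 2   # geometric: doubles each generation
        food += 100       # arithmetic: fixed increment each generation

After only two generations the population index (400) overtakes the food index (300), and the gap widens each generation thereafter; this is the mechanism behind Malthus’s predicted shortages.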

Malthus’ prediction has not yet come to fruition, thanks to technological advances in agriculture (fertilizers, insect and drought resistance, and better farming techniques). Some discredit Malthus because his hypothesis assumes a fixed world supply of resources rather than an expanding one. Humans can expand the quantity of food and other resources by using new technologies to offset the scarcity of minerals and arable land; we can use resources more efficiently and substitute new resources for scarce ones. Even with a global human population of 7 billion, food production has grown faster than the worldwide natural increase rate (NIR). Better growing techniques, higher-yielding and genetically modified seeds, and the cultivation of more land have helped expand food supplies.

While new technologies have helped to increase food production, there are not enough emerging technologies to handle supply and demand. Adding to the problem is the fact that many insects have developed a resistance to pesticides. These problems have caused a slowdown and a leveling-off of food production in many regions of the world. Without breakthroughs in safe and sustainable food production, the food supply will not keep up with population growth.

Others believe that population growth is not a bad thing. A large population could stimulate economic growth, and therefore, production of food. Population growth could generate more customers and more ideas for improving technology. Additionally, some maintain that no cause-and-effect relationship exists between population growth and economic development. They argue that poverty, hunger, and other social welfare problems associated with a lack of economic development, famines, and war are a result of unjust social and economic institutions, not population growth.

Lately, there has been a rise in neo-Malthusian thought. One notable figure is Paul Ehrlich. In his book, The Population Bomb, Ehrlich argues that population growth cannot continue without controls because the planet will reach the carrying capacity of our species. In short, we must consider environmental factors as we discuss overpopulation concerns. For example, even though humans produce four times the amount of food that we consume, the environment pays an ecological price for our food production. The rapid population growth of the world has caused massive deforestation in the Boreal Forests and rainforests, increasing desertification that encroaches into arable land, over-fishing of the oceans, mass extinction of species, air and water pollution, and anthropogenic (human-induced) climate change. All of these things have economic and environmental costs that we must consider.

Population Policy

Governments and other entities can dramatically influence population change by promoting anti-natalist or pro-natalist policies that aim to decrease or increase population growth in their country. Some countries take dramatic steps to reduce their population. For example, China’s One-Child Policy dictated that each family (husband and wife) could legally have only one child. Families that followed this policy were often given more money by the government or better housing. If a family illegally had another child, they would be fined heavily.

Children born illegally could not attend school and had a difficult time finding jobs, getting government licenses, or even getting married. Some have reported that the government would force abortions on families with more than one child. One of the significant consequences of this policy was a dramatic increase in abortions and infanticides, especially of females. Female infanticide is linked directly to a global cultural trend that privileges males over females: baby boys are desired, especially if the family is allowed only one child. This specific targeting of girls and women is called gendercide. Half the Sky, written by Nicholas Kristof and Sheryl WuDunn, documents global gendercide and what is being done to combat this problem.

After the two great world wars, the United Nations Population Commission and the International Planned Parenthood Federation began to advocate for more global population control. Many groups who advocate for population control focus on:

  • Changing cultural attitudes that keep population rates high (or low)
  • Providing contraception to least developed countries (LDC)
  • Helping countries study population trends by improving census counts
  • Empowering women and emphasizing gender equality

It is believed that worldwide, over 60 percent of women between ages 15-49 use some form of contraception. This varies regionally. In the United States, contraception use is at nearly 75 percent, whereas in Africa, it is around 30 percent. The consensus today is that the focus on population planning should be on gender equality and improving the social status of women around the world. This is the focus of the International Conference on Population and Development.

Religious organizations are also concerned with population growth; however, they focus on contraception issues rather than population growth itself. Some religions and political entities find contraception use immoral, which has influenced some governments to make access to and use of contraceptives illegal.

2.4 Migration

Migration is the physical movement of people from one place to another; it may be over long distances, such as moving from one country to another, and can occur as individuals, family units, or large groups. In international movement, migration into a new country is called immigration, and movement out of a country is called emigration.

Some interesting patterns occur with migration. Most people who migrate travel only a short distance from their point of origin, usually staying within their country, often for economic reasons. This is called internal migration. Internal migration can be divided up even further into interregional migration (the permanent movement from one region of a country to another region) and intraregional migration (the permanent movement within a single region of a country).

The other type of migration is called international migration, which is the movement from one country to another. Some people migrate voluntarily, based on individual choice; at other times, an individual must leave against his or her will, which is forced migration. Ultimately, the distance people migrate depends on economic, gender, family status, and cultural factors. For example, long-distance migration tends to involve males looking for employment and traveling by themselves rather than risking the move with their families.

Migration is very dynamic around the world, with peaks in different regions at different times. As noted earlier, there are several reasons why people migrate, but where are people relocating to or from? Migration transition is the change in migration patterns within a society caused by industrialization, population growth, and other social and economic changes that also produce the demographic transition. A critical factor in all forms of migration is mobility, the ability to move either permanently or temporarily.

The reasons people migrate are known as push and pull factors, and they occur along economic, cultural, or environmental lines. Push factors are events and conditions that compel an individual to move from a location. Pull factors are conditions that influence migrants to move to a particular location. The number one reason people migrate is economic: people are either “pushed” away from where they live by a lack of employment opportunities or “pulled” to another place that offers more jobs or higher-paying jobs.

Cultural push factors usually involve slavery, political instability, ethnic cleansing, famine, and war. People who choose to flee or are forced to flee as a result of these problems are often refugees. The United States Committee for Refugees classifies a refugee as someone who has been forced from their home and cannot return because of their religion, race, nationality, or political opinion. In 2010, the United Nations High Commissioner for Refugees estimated that over 44 million people worldwide had been forcibly displaced, with another 27 million internally displaced persons (IDPs) on top of that figure. Cultural pull factors include the desire to live in democratic societies or to gain gender equality or educational and religious opportunities.

There has been a dramatic increase in immigration into the United States from Latin America, Africa, and the Middle East. Some from these regions migrate to the U.S. out of economic necessity. We hear quite a lot about guest workers in the United States: individuals who migrate temporarily to take up jobs in other countries, a phenomenon also known as transnational migration. Others migrate to escape conflicts such as the civil wars in Somalia, Sudan, and Ethiopia. Genocides in Rwanda (1994) and, more recently, Darfur, Sudan, have forced internal and international migration. The wars in Afghanistan and Iraq have also forced migration from these regions. The U.N. High Commissioner for Refugees estimates that over 2 million Iraqis (nearly 8 percent of the pre-war population) have been forced to migrate to the nearby nations of Jordan, Syria, and Lebanon.

A variety of environmental push and pull factors also influence migration patterns. Environmental pull factors can include people wanting to live in particular environments. For example, many older adults like to live in Hawaii because they prefer the recreational opportunities that are provided for retired individuals. Some people want to live where snow activities are available or near an ocean. Push factors often are related to the frequency of natural disasters such as earthquakes, tsunamis, hurricanes, or flash floods that a region could experience. Climatic push/pull factors, such as droughts, also influence migration patterns. A very recent example of this is the drought and famine in East Africa. As anthropogenic climate change becomes more pressing, and hundreds of millions of people become displaced, the world will see more climate migrants forced from their homes.

The United States Agency for International Development (USAID) and the Famine Early Warning Systems Network track potential famines globally so that relief organizations can anticipate events and respond more proactively. People who have been pushed out for environmental reasons are called environmentally displaced persons, or ecological refugees. The problem for these refugees is that they are not protected or given the same rights under the 1951 Refugee Convention. Under the convention, a refugee is a person with a “well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, who is outside the country of his nationality and, owing to such fear, is unwilling to avail himself of the protection of that country.” However, more and more people are becoming environmental refugees because of climate change, droughts, flooding from large storm systems, water shortages, and more.

Questions for the Future

The issue of global human populations is often controversial because there is no clear consensus on how to deal with it. What demographers do know is that there are over 7.3 billion people on the planet, but they are not evenly distributed around the world. One consistent global pattern is water; nearly 80 percent of the world’s population lives near a large body of water.

  • Why do you think populations converge on large bodies of water?
  • What happens to populations when there is a shortage of water?

There are a variety of ways that geographers and demographers study population dynamics and profiles, often representing the data in the form of diagrams, graphs, and, most importantly, maps. One way social scientists have tried to describe historical, current, and future population trends is with the demographic transition model. The model attempts to explain how more developed countries progressed demographically compared with less developed countries today. Some argue that though the model predicts demographic trends in North America and Europe, it does not accurately represent population trends in other regions of the world. Others say the model is too simplistic because it overlooks environmental and cultural factors.

Another area of debate is what the potential ramifications could be as the human population grows past 8 and toward 9 billion by 2050. This debate started long ago with Malthusian theory. Many ecologists believe humans have reached the Earth’s carrying capacity and cannot sustain such large populations. Others argue that technology has consistently kept ahead of food scarcity concerns and that high populations could be a benefit for less developed countries as a way to improve development.

Geographers also understand that humans are a migrating species and, with today’s technology, can move across great distances. The reasons for migration vary, but they all come down to push or pull factors related to economic, political, social, or environmental conditions. Many of these travelers live temporarily as guest workers until they need to move on. Today, many migrants are refugees, living in conditions that range from complex metropolitan areas to squatter towns and refugee camps. One thing we do know about human migration is that the majority of humans will die in the same town in which they were born.

Chapter 1: Introduction to Human Geography

Human geography emphasizes the importance of geography as a field of inquiry and introduces students to the concept of spatial organization. Knowing the location of places, people, and events is a gateway to understanding complex environmental relationships and interconnections among places and across landscapes.

Geographic concepts emphasize location, space, place, scale of analysis, pattern, regionalization, and globalization. These concepts are essential to understanding spatial interaction and spatial behavior, the dynamics of human population growth and migration, patterns of culture, political control of territory, areas of agricultural production, the changing location of industry and economic development strategies, and evolving human settlement patterns, particularly urbanization. Geographers use geospatial technology (e.g., satellite imagery, aerial photography, geographic information systems (GIS), global positioning systems (GPS), and drone technology), spatial data, mathematical formulas, and design models to understand the world from a spatial perspective better.

Human geography enables us to consider the regional organization of various phenomena and encourages geographic analysis to understand processes in a changing world. For example, geographic perspectives on the impact of human activities on the environment, from local to global scales, include effects on land, water, atmosphere, population, biodiversity, and climate. These human ecological examples are inherent throughout the discipline, especially in topics dealing with population growth, agricultural and industrial practices, and rapid urbanization. Geographers apply geographic methods and geospatial technologies to a variety of situations.

1.1 Geography: The Science of Where, How, and Why

Geography as a Body of Knowledge

Geography seeks to answer the “where,” the “why,” and the “how.” Simply knowing where a country is located is undoubtedly helpful, but geographers dig deeper:

  • Why is it located there?
  • Why does it have a particular shape, and how does this shape affect how it interacts with its neighbors and its access to resources?
  • Why do the people of the country have certain cultural features?
  • Why does the country have a specific style of government?
  • How do we analyze patterns in human-environment interactions?

The list goes on and on, and as you might notice, incorporates a variety of historical, cultural, political, and physical features. This synthesis of the physical world and human activity is at the heart of the regional geographic approach.

The term “geography” comes from the Greek geo-, meaning “the earth,” and -graphia, meaning “to write,” and many early geographers did exactly that: they wrote about the world. Ibn Battuta, for example, was a scholar from Morocco who traveled extensively across Africa and Asia in the 14th century CE. Eratosthenes is commonly considered to be the “Father of Geography,” and in fact, he quite literally wrote the book on the subject in the third century BCE. His three-volume text, Geographica, included maps of the entire known world, including different climate zones, the locations of hundreds of different cities, and a coordinate system. This was a revolutionary and highly regarded text, especially for the time period. Eratosthenes is also credited as the first person to calculate the circumference of the Earth. Many early geographers, like Eratosthenes, were primarily cartographers, people who scientifically study and create maps, and early maps, such as those used in Babylon, Polynesia, and the Arabian Peninsula, were often used for navigation. In the Middle Ages, as academic inquiry in Europe declined with the fall of the Roman Empire, the Muslim geographer Muhammad al-Idrisi created one of the most advanced maps of pre-modern times, inspiring future geographers from the region.

Geography today, though using more advanced tools and techniques, draws on the foundations laid by these predecessors. What unites all geographers, whether they are travelers writing about the world’s cultures or cartographers mapping new frontiers, is an attention to the spatial perspective. As geographer Harm deBlij once explained, there are three main ways to look at the world. One way is chronologically, as a historian might examine the sequence of world events. A second way is systematically, as a sociologist might explore the societal systems that help shape a given country’s structures of inequality. The third way is spatially, and this is the geographic perspective. Geographers, when confronted with a global problem, immediately ask the questions “Where?” and “Why?” Although geography is a broad discipline that includes quantitative techniques like statistics and qualitative methods like interviews, all geographers share this common way of looking at the world from a spatial perspective.

A Spatial Body of Knowledge

At the heart of the spatial perspective is the question of “where,” but there are a number of different ways to answer this question. Relative location refers to the location of a place relative to other places, and we commonly use relative location when giving directions to people. We might instruct them to turn “by the gas station on the corner,” or say that we live “in the dorm across from the fountain.” Another way to describe a place is by referring to its absolute location. Absolute location references an exact point on Earth and commonly uses specific coordinates like latitude and longitude. Lines of latitude and longitude are imaginary lines that circle the globe and form the geographic coordinate system. Lines of latitude run laterally, parallel to the equator, and measure distances north or south of the equator. Lines of longitude, on the other hand, converge at the poles and measure distances east and west of the prime meridian.

Every place on Earth has a precise location that can be measured with latitude and longitude. The White House in Washington, DC, for example, is located at latitude 38.8977°N and longitude 77.0365°W. Absolute location might also refer to details like elevation. The Dead Sea, located on the boundary between Jordan and Israel, is the lowest location on land, dipping down to 1,378 feet below sea level.
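
Because latitude and longitude are angles on a roughly spherical Earth, the shortest distance between two coordinate pairs can be computed with the haversine formula. Below is a minimal Python sketch under that spherical assumption; the Washington Monument coordinates used for comparison are approximate.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    r = 6371.0  # mean Earth radius in kilometers (spherical assumption)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# White House (38.8977 N, 77.0365 W) to the Washington Monument
# (approximately 38.8895 N, 77.0353 W); west longitudes are negative.
print(round(haversine_km(38.8977, -77.0365, 38.8895, -77.0353), 2))  # about 0.92 km
```

Geodetic software models the Earth as an ellipsoid rather than a sphere, so its answers will differ slightly from this sketch.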

Historically, most maps were hand-drawn, but with the advent of computer technology came more advanced maps created with the aid of satellite technology. Geographic information science (GIS), sometimes also referred to as geographic information systems, uses computers and satellite imagery to capture, store, manipulate, analyze, manage, and present spatial data. GIS essentially uses layers of information and is often used to make decisions in a wide variety of contexts. An urban planner might use GIS to determine the best location for a new fire station, while a biologist might use GIS to map the migratory paths of birds. You might use GIS to get navigation directions from one place to another, layering place names, buildings, and roads.

One difficulty with map-making, even when using advanced technology, is that the earth is roughly a sphere while maps are generally flat. When converting the spherical Earth to a flat map, some distortion always occurs. A map projection, or a representation of Earth’s surface on a flat plane, always distorts at least one of these four properties: area, shape, distance, and direction. Some maps preserve three of these properties while significantly distorting the fourth; others minimize overall distortion at the cost of distorting each property somewhat. So, which map projection is best? That depends on the purpose of the map. The Mercator projection, while significantly distorting the size of places near the poles, preserves angles and shapes, making it ideal for navigation.

The Winkel Tripel projection is so-named because its creator, Oswald Winkel, sought to minimize three kinds of distortion: area, direction, and distance. It has been used by the National Geographic Society since 1998 as the standard projection of world maps.

When representing the Earth on a manageable-sized map, the actual size of locations must be reduced. Scale is the ratio between a distance between two locations on a map and the corresponding distance on Earth’s surface. On a 1:1000 scale map, for example, 1 meter on the map represents 1,000 meters, or 1 kilometer, on Earth’s surface. Scale can sometimes be a confusing concept for students, so it’s important to remember that it refers to a ratio. It doesn’t refer to the size of the map itself, but rather to how zoomed in or out the map is. A 1:1 scale map of your room would be the exact same size as your room – plenty of room for significant detail, but hard to fit into your glove compartment.

As with map projections, the “best” scale for a map depends on what it’s used for. If you’re going on a walking tour of a historic town, a 1:5,000 scale map is commonly used. If you’re a geography student looking at a map of the entire world, a 1:50,000,000 scale map would be appropriate. “Large” scale and “small” scale refer to the ratio, not to the size of the landmass on the map. 1 divided by 5,000 is 0.0002, which is a larger number than 1 divided by 50,000,000 (which is 0.00000002). Thus, a 1:5,000 scale map is considered “large” scale while 1:50,000,000 is considered “small” scale.
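
Scale conversions like these are simple ratio arithmetic, as the following minimal Python sketch (with illustrative values) demonstrates:

```python
def ground_distance(map_distance, scale_denominator):
    """Convert a distance measured on a map into a ground distance
    in the same units, using the map's representative fraction."""
    return map_distance * scale_denominator

# On a 1:1000 scale map, 1 meter on the map represents 1,000 meters.
print(ground_distance(1, 1_000))  # 1000 (meters, i.e., 1 kilometer)

# "Large" versus "small" scale refers to the size of the ratio itself:
print(1 / 5_000)       # 0.0002 -> large scale (zoomed in)
print(1 / 50_000_000)  # 2e-08  -> small scale (zoomed out)
```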

All maps have a purpose, whether it’s to guide sailing ships, help students create a more accurate mental map of the world, or tell a story. The map projection, color scheme, scale, and labels are all decisions made by the mapmaker. Some argued that the widespread use of the Mercator projection, which made Africa look smaller relative to North America and Eurasia, led people to minimize the importance of Africa’s political and economic issues. Just as texts can be critiqued for their style, message, and purpose, so too can maps be critiqued for the information and message they present.

The spatial perspective, and answering the question of “where,” encompasses more than just static locations on a map. Often, answering the question of “where” relates to movement across space. Diffusion refers to the spreading of something from one place to another, and might relate to the physical movement of people or the spread of disease, or the diffusion of ideas, technology, or other intangible phenomena. Diffusion occurs for different reasons and at different rates. Just as static features of culture and the physical landscape can be mapped, geographers can also map the spread of various characteristics or ideas to study how they interact and change.

1.2 Scientific Inquiry

Science is a path to gaining knowledge about the natural world. The study of science also includes the body of knowledge that has been collected through scientific inquiry. Scientists conduct scientific investigations by asking testable questions that can be answered through systematic observation and careful collection of evidence. Then they use logical reasoning and some imagination to develop a testable explanation, called a hypothesis. Finally, scientists design and conduct experiments based on their hypotheses.

Science seeks to understand the fundamental laws and principles that cause natural patterns and govern natural processes. It is more than just a body of knowledge; science is a way of thinking that provides a means to evaluate and create new knowledge without bias. At its best, science uses objective evidence over subjective evidence to reach sound and logical conclusions.

Truth in science is a difficult concept, and this is because science is falsifiable, which means an initial explanation (hypothesis) is testable and able to be proven false. A scientific theory can never wholly be proven correct; it is only after exhaustive attempts to falsify competing ideas and variations that the theory is assumed to be true. While it may seem like a weakness, the strength behind this is that all scientific ideas have stood up to scrutiny, which is not necessarily true for non-scientific ideas and procedures. It is the ability to prove current ideas wrong that is a driving force in science and has driven many scientific careers.

Early Scientific Thought

Western science began in ancient Greece, specifically Athens, and early democracies like Athens encouraged individuals to think more independently than in the past, when kings ruled most civilizations. Foremost among these early philosophers/scientists was Aristotle, born in 384 BCE, who contributed to the foundations of knowledge and science. Aristotle was a student of Plato and a tutor to Alexander the Great, who would conquer the Persian Empire as far as India, spreading Greek culture in the process. Aristotle used deductive reasoning, applying what he thought he knew to establish a new idea (if A, then B).

Deductive reasoning starts with generalized principles or established or assumed knowledge and extends them to new ideas or conclusions. If a deductive conclusion is derived from sound principles, then the conclusion has a high degree of certainty. This contrasts with inductive reasoning, which begins from new observations and attempts to discern the underlying principles that explain the observations. Inductive reasoning relies on evidence to infer a conclusion and does not have the perceived certainty of deductive reasoning. Both are important in science. Scientists take existing principles and laws and see if these explain observations. Also, they make new observations and seek to determine the principles and laws that underlie them. Both emphasize the two most important aspects of science: observations and inferences.

The Romans absorbed Greek culture. The Romans controlled people and resources in their Empire by building an infrastructure of roads, bridges, and aqueducts. Their road network helped spread Greek culture and knowledge throughout the Empire. The fall of the Roman Empire ushered in the Medieval period in Europe, during which scientific progress in Europe largely stalled. During Europe’s Medieval period, science flourished in the Middle East between 800 and 1450 CE as the Islamic civilization developed. Empirical experimentation grew during this time and was a vital component of the scientific revolution that started in 17th-century Europe. Empiricism emphasizes the value of evidence gained from testing and observations of the senses. Because of the respect others held for Aristotle’s wisdom and knowledge, his logical approach was accepted for centuries and formed an essential basis for understanding nature. The Aristotelian approach came under criticism from 17th-century scholars of the Renaissance.

As science progressed, certain aspects of nature that could not be directly tested or sensed, such as atoms, molecules, and the deep time of geology, awaited the development of new technologies. The Renaissance, following the Medieval period between the fourteenth and seventeenth centuries, was a great awakening of artistic and scientific thought and expression in Europe.

The foundational example of the modern scientific approach is the understanding of the solar system. The Greek astronomer Claudius Ptolemy, in the second century, using an Aristotelian approach and mathematics, observed the Sun, Moon, and stars moving across the sky and deductively reasoned that Earth must be at the center of the universe with the celestial bodies circling Earth. Ptolemy even had mathematical, astronomical calculations that supported his argument. The view of the cosmos with Earth at its center is called the geocentric model.

In contrast, early Renaissance scholars used new instruments such as the telescope to enhance astronomical observations and developed new mathematics to explain those observations. These scholars proposed a radically new understanding of the cosmos, one in which Earth and the other planets orbited around the centrally located Sun. This is known as the heliocentric model, and astronomer Nicolaus Copernicus (1473-1543) was the first to offer a solid mathematical explanation for it around 1543.

The Scientific Method

Science and scientists are wary of situations that either discourage or avoid the process of falsifiability. If a statement or an explanation of a phenomenon cannot be tested or does not meet scientific standards, then it is not considered science, but instead is considered a pseudoscience. Falsifiability separates science from pseudoscience. Pseudoscience is a collection of ideas that may appear scientific but does not use the scientific method. An example of pseudoscience is astrology, which is a belief system that the movement of celestial bodies influences human behavior. This is not to be confused with astronomy, which is the scientific study of celestial bodies and the cosmos. There are many astronomical observations associated with astrology, but astrology does not use the scientific method. Conclusions in astrology are not based on evidence and experiments, and its statements are not falsifiable.

Science is also a social process. Scientists share their ideas with peers at conferences for guidance and feedback. A scientist’s research paper and data are rigorously reviewed by many qualified peers before publication. A reputable journal or publishing house will not publish research results until other scientists who are experts in the field have determined that the methods are scientifically sound and the conclusions are reasonable. Science aims to “weed out” misinformation, invalid research results, and wild speculation. Thus, the scientific process is slow, cautious, and conservative. Scientists do not jump to conclusions, but wait until an overwhelming amount of evidence from many independent researchers points to the same conclusion before accepting a scientific concept.

Science is the realm of facts and observations, not moral judgments. Scientists might enjoy studying tornadoes, but their opinion that tornadoes are exciting is not essential to learning about them. Scientists increase our technological knowledge, but science does not determine how or if we use that knowledge. Scientists discovered how to build an atomic bomb, but scientists did not decide whether or when to use it. Scientists have accumulated data on warming temperatures; their models have shown the likely causes of this warming. However, although scientists are primarily in agreement on the causes of global warming, they cannot force politicians or individuals to pass laws or change behaviors.

For science to work, scientists must make some assumptions. The rules of nature, whether simple or complex, are the same everywhere in the universe. Natural events, structures, and landforms have natural causes, and evidence from the natural world can be used to learn about those causes. The objects and events in nature can be better understood through careful, systematic study. Scientific ideas can change if we gather new data or learn more. An idea, even one that is accepted today, may need to be modified or be entirely replaced if new evidence contradicts previous scientific ideas. However, the body of scientific knowledge can grow and evolve because some theories become more accepted with repeated testing or old theories are modified or replaced with new knowledge.

Scientific research may be done to build knowledge or to solve problems and lead to scientific discoveries and technological advances. Pure research often aids in the development of applied research. Sometimes the results of pure research may be applied long after the pure research was completed. Sometimes something unexpected is discovered while scientists are conducting their research. Some ideas are not testable. For example, supernatural phenomena, such as stories of ghosts, werewolves, or vampires, cannot be tested. Scientists describe what they see, whether in nature or a laboratory.

The scientific method is a series of steps that help to investigate and answer such questions; scientists use data and evidence gathered from observations, experience, or experiments to answer their questions.

However, scientific inquiry rarely proceeds in the same sequence of steps outlined by the scientific method. For example, the order of the steps might change because more questions arise from the data that is collected. Still, to come to valid conclusions, logical, repeatable steps of the scientific method must be followed.

Scientific Research

A scientist will first try to find answers to their questions by researching what may already be known about the topic. This information will allow the scientist to create a good experimental design. If this question has already been answered, the research may be enough, or it may lead to new questions. For example, a farmer researches no-till farming on the Internet, at the library, at the local farming supply store, and elsewhere. She learns about various farming methods, what types of fertilizers are best to use, and what the best crop spacing would be. From her research, she also learns that no-till farming can be a way to reduce carbon dioxide emissions into the atmosphere, which helps in the fight against global warming.

Hypothesis

With the information collected from background research, the scientist creates a plausible explanation for their question, called a hypothesis. The hypothesis must directly answer the question at hand and must be testable. Having a hypothesis guides a scientist in designing experiments and interpreting data. Referring back to the farmer, she would hypothesize that no-till farming will decrease soil erosion on hills of similar steepness compared to the traditional farming technique, because there will be fewer disturbances to the soil.

Data Collection

To support or refute a hypothesis, the scientist must collect data. A great deal of logic and methodology goes into designing tests to collect data so the data can answer scientific questions. Data are usually gathered through experiment or observation, and sometimes improvements in technology allow new tests to address a hypothesis better.

Observation is used to collect data when it is not possible, for practical or ethical reasons, to perform experiments. Written descriptions of observations are qualitative data, and this data is used to answer critical questions. Scientists use many different types of instruments to make quantitative measurements, typically depending on the scientific discipline. Electron microscopes can be used to explore tiny objects, and telescopes to learn about the universe. Probes or drones make observations where it is too dangerous or too impractical for scientists to go.

An objective observation is free of personal bias and is observed the same way by all individuals. Humans, by their nature, have biases, so no observation is entirely free of bias; the goal is to be as free of bias as possible. A subjective observation is based on a person’s feelings and beliefs and is unique to that individual. Science favors objective observations over subjective ones whenever possible.

A quantitative observation can be measured and expressed with a number. Qualitative observations are not numeric but rather verbal descriptions. For example, saying a rock is red or heavy is qualitative. However, measuring the exact color of red, or measuring the density of the rock (which can be traced to the proportion of certain minerals in the rock) is quantitative. This is why quantitative measurements are much more useful to scientists. Calculations can be done on specific numbers, but cannot be done on qualitative values.

A good experiment must have one factor that can be manipulated or changed, called the independent variable. The rest of the factors must remain the same, called experimental controls. The outcome of the experiment, or what changes as a result of the experiment, is the dependent variable because the variable “depends” on the independent variable.

Return to the example of the farmer. She decides to experiment on two separate hills that have similar steepness and receive similar amounts of sunshine. On one hill, the farmer uses a traditional farming technique that includes plowing. On the other, she uses a no-till method, spacing plants farther apart and using specialized equipment for planting. The plants on both hillsides receive identical amounts of water and fertilizer, and she measures the amount of erosion on each hillside. In this experiment:

  • What is the independent variable?
  • What are the experimental controls?
  • What is the dependent variable?

The independent variable is the farming technique – either traditional or no-till – because that is what is being manipulated. For a fair comparison of the two farming techniques, the two hills must have the same slope and the same amount of fertilizer and water. These are the experimental controls. The amount of erosion is the dependent variable. It is what the farmer is measuring. During an experiment, scientists make many measurements. Data in the form of numbers is quantitative.

Data gathered from advanced equipment usually goes directly into a computer, or the scientist may put the data into a database. The data can then be statistically analyzed to determine specific relationships between different categories of data. Statistics can make sense of the variability in a data set.

In just about every human endeavor, errors are unavoidable. In a scientific experiment, this is called experimental error. Systematic errors may be inherent in the experimental setup so that the numbers are always skewed in one direction. For example, a scale may always measure one-half ounce high. The error will disappear if the scale is re-calibrated. Random errors occur because no measurement can be made with perfect precision. For example, a stopwatch may be stopped too soon or too late. Random errors can be reduced by taking several measurements and averaging them. If a result is inconsistent with the results from other samples and many tests have been conducted, it is likely that a mistake was made in that experiment, and the inconsistent data point can be thrown out.
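
The averaging strategy described above is easy to demonstrate. The Python sketch below uses hypothetical repeated readings and an illustrative two-standard-deviation cutoff (an assumption for this example, not a rule from the text) to average measurements and flag a suspect value:

```python
import statistics

# Hypothetical repeated readings of soil loss (in millimeters) from one plot.
readings = [12.1, 11.8, 12.3, 12.0, 18.9, 11.9]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag any reading more than two standard deviations from the mean.
suspect = [r for r in readings if abs(r - mean) > 2 * stdev]
clean = [r for r in readings if abs(r - mean) <= 2 * stdev]

print(f"mean of all readings: {mean:.2f}")                     # 13.17
print(f"suspect readings: {suspect}")                          # [18.9]
print(f"mean without suspects: {statistics.mean(clean):.2f}")  # 12.02
```

In practice, a flagged value should be investigated, and a mistake confirmed, before the data point is thrown out.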

Conclusion

Scientists study graphs, tables, diagrams, images, descriptions, and all other available data to draw conclusions from their experiments. Is there an answer to the question based on the results of the experiment? Was the hypothesis supported? Some experiments support a hypothesis entirely, and some do not. If a hypothesis is shown to be wrong, the experiment was not a failure, because all experimental results contribute to knowledge. Experiments that do or do not support a hypothesis may lead to even more questions and more experiments.

Let’s return to the farmer again. After a year, the farmer finds that erosion on the traditionally farmed hill is 2.2 times greater than erosion on the no-till hill. She also discovers that the plants on the no-till plots are taller and have higher amounts of moisture in the soil. From this, she decides to convert to no-till farming for future crops. The farmer continues researching to see what other factors may help reduce erosion.

Scientific Theory

As scientists conduct experiments and make observations to test a hypothesis, over time, they collect many data points. If a hypothesis explains all the data and none of the data contradicts the hypothesis, the hypothesis becomes a theory over time. A scientific theory is supported by many observations and has no significant inconsistencies. A theory must continually be tested and revised by the scientific community. Once a theory has been developed, it can be used to predict behavior. A theory provides a model of reality that is simpler than the phenomenon itself. Even a theory can be overthrown if conflicting data is discovered. However, a longstanding theory that has lots of evidence to back it up is less likely to be overturned than a newer theory.

Science does not prove anything beyond a shadow of a doubt. Scientists seek evidence that supports or refutes an idea. If there is no significant evidence to refute an idea and a lot of evidence to support it, the idea is accepted. The more lines of evidence that support an idea, the more likely it will stand the test of time. The value of a theory is when scientists can use it to offer reliable explanations and make accurate predictions.

Scientific Denial

Introductory science courses usually deal with accepted scientific theory, and credible ideas that oppose the standardly accepted theories are not included. This makes it easier for students to understand complex material. A student who further studies a discipline will encounter controversies later. However, at the introductory level, the established science is presented. This section on science denial discusses how some groups of people argue that some established scientific theories are wrong, not based on their scientific merit but rather on the ideology of the group.

When an organization or person denies or doubts the scientific consensus on an issue in a non-scientific way, it is referred to as science denial. The rationale is rarely based on objective scientific evidence but instead is based on subjective social, political, or economic reasons. Science denial is a rhetorical argument that has been applied selectively to issues that some organizations or people oppose. Three (past and current) issues that demonstrate this are: 1) the teaching of evolution in public schools, 2) early links between tobacco smoke and cancer, and 3) anthropogenic (human-caused) climate change. Of these, denial of climate change has a strong connection with geographic science. A climate denier denies explicitly or doubts the scientific conclusions of the community of scientists who specifically study climate.

Science denial generally uses three rhetorical but false arguments. The first argument tries to undermine science by claiming that the methods are flawed or that the science is unsettled. The idea that the science is unsettled creates doubt for a regular citizen. A sense of doubt delays action. Scientists typically avoid claiming universal truths and use language that conveys a sense of uncertainty because scientific ideas change as more evidence is uncovered. This avoidance of universal truths should not be confused with the uncertainty of scientific conclusions.

The second argument attacks the researchers whose findings they disagree with. Deniers claim that ideology and an economic agenda motivate scientific conclusions. They claim that the researchers want to “get more funding for their research” or “expand government regulation.” This is an ad hominem argument, in which a person’s character is attacked instead of the merit of their argument.

The third argument is to demand equal media coverage for a “balanced” view in an attempt to validate a false controversy. This includes equal time in the educational curriculum. For example, this rhetorical argument would demand that evolution be taught alongside religious explanations, or that anthropogenic climate change be taught alongside natural-cause alternatives, even when there is little scientific evidence supporting the alternatives. Conclusions based on the scientific method should not be confused with alternative outcomes based on ideologies. Two entirely different methods for drawing conclusions about nature are involved, and they do not belong together in the same course.

The formation of new conclusions based on the scientific method is the only way to change scientific findings. We would not teach Flat Earth geology alongside plate tectonics, because Flat Earthers do not follow the scientific method. That scientists avoid claiming universal truths and change their ideas as more evidence is uncovered is simply how the scientific process works; it should not be taken to mean that the science is unsettled. Because of widespread scientific illiteracy, these arguments are used by those who wish to suppress science and misinform the general public.

In a classic case of science denial, these rhetorical arguments were used in the 1950s, ’60s, and ’70s by the tobacco industry and its scientists to deny the links between tobacco and cancer. Once it became clear that the tobacco industry could not show that smoking did not cause cancer, its next strategy was to create a sense of “doubt” about the science. They suggested that the science was not yet fully understood, that the issue needed more study, and that legislative action should therefore be delayed. This false sense of “doubt” is the crucial component that misleads the public and prevents action. The same strategy is currently being employed by those who deny human involvement in climate change.

1.3 Geographic Perspective

Physical Perspective

When we describe places, we can discuss their absolute and relative location and their relationship and interaction with other places. As regional geographers, we can dig deeper and explore both the physical and human characteristics that make a particular place unique. Geographers explore a wide variety of spatial phenomena, but the discipline can roughly be divided into two branches: physical geography and human geography. Physical geography focuses on natural features and processes, such as landforms, climate, and water features. Human geography is concerned with human activity, such as culture, language, and religion. However, these branches are not exclusive. You might be a physical geographer who studies hurricanes, but your research includes the human impact from these events. You might be a human geographer who studies food, but your investigations include the ecological impact of agricultural systems. Regional geography takes this holistic approach, exploring both the physical and human characteristics of the world’s regions.

Much of Earth’s physical landscape, from mountains to volcanoes to earthquakes to valleys, has resulted from the movement of tectonic plates. As the theory of plate tectonics describes, these rigid plates are situated on top of a bed of molten, flowing material, much like a cork floating in a pot of boiling water. There are seven major tectonic plates and numerous minor plates.

Where two tectonic plates meet is known as a plate boundary, and plates can interact along boundaries in three different ways. Where two plates slide past one another is called a transform boundary. The San Andreas Fault in California is an example of a transform boundary. A divergent plate boundary is where two plates slide apart from one another. Africa’s Rift Valley was formed by this type of plate movement. Convergent plate boundaries occur where two plates slide towards one another. Where two converging plates have roughly the same density, upward movement can occur, creating mountains. The Himalaya Mountains, for example, were formed from the Indian plate converging with the Eurasian plate. In other cases, subduction occurs, and one plate slides below the other. Here, deep ocean trenches can form. The 2004 Indian Ocean earthquake and tsunami occurred because of a subducting plate boundary off the west coast of Sumatra, Indonesia.

The interaction between tectonic plates and historical patterns of erosion and deposition have generated a variety of landforms across Earth’s surface. Each of the world’s regions has identifiable physical features, such as plains, valleys, mountains, and major water bodies. Topography refers to the study of the shape and features of the surface of the Earth. Areas of high relief have significant changes in elevation on the landscape, such as steep mountains, while areas of low relief are relatively flat.

Another key feature of Earth’s physical landscape is climate. Weather refers to the short-term state of the atmosphere. We might refer to the weather as partly sunny or stormy, for example. Climate, on the other hand, refers to long-term weather patterns and is affected by a place’s latitude, terrain, altitude, and nearby water bodies. Geographers commonly use the Köppen climate classification system to refer to the major climate zones found in the world.

Each climate zone in the Köppen climate classification system is assigned a lettered code, referring to the temperature and precipitation patterns found in the particular region. Climate varies widely across Earth. Cherrapunji, India, located in the Cwb climate zone, receives over 11,000 mm (400 in) of rain each year. In contrast, the Atacama Desert (BWk), situated along the western coast of South America across Chile, Peru, Bolivia, and Argentina, typically receives only around 1 to 3 mm (0.04 to 0.12 in) of rain each year.
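
The lettered codes are compact enough to unpack programmatically. The Python sketch below decodes a code like Cwb; the letter meanings shown are the standard ones for the temperate (C) group, and other groups (especially B and E) use different second- and third-letter conventions:

```python
# Partial decoder for Köppen codes (C-group letter conventions only).
MAIN = {"A": "tropical", "B": "arid", "C": "temperate",
        "D": "continental", "E": "polar"}
PRECIP = {"f": "no dry season", "w": "dry winter", "s": "dry summer"}
TEMP = {"a": "hot summer", "b": "warm summer", "c": "cold summer"}

def describe_koppen(code):
    """Describe a Köppen code such as 'Cwb' in plain words."""
    parts = [MAIN.get(code[0], "unknown")]
    if len(code) > 1:
        parts.append(PRECIP.get(code[1], "unknown"))
    if len(code) > 2:
        parts.append(TEMP.get(code[2], "unknown"))
    return ", ".join(parts)

print(describe_koppen("Cwb"))  # temperate, dry winter, warm summer
```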

Earth’s climate has gone through significant changes historically, alternating between long periods of warming and cooling. Since the industrial revolution in the 1800s, however, the global climate has experienced a warming phase. Ninety-five percent of scientists agree that this global climate change has resulted primarily from human activities, particularly the emission of greenhouse gasses like carbon dioxide. Fifteen of the last sixteen warmest years ever recorded have occurred since 2000. Overall, this warming has contributed to rising sea levels as the polar ice caps melt, changing precipitation patterns, and the expansion of deserts. The responses to global climate change, and the impacts from it, vary by region.

Human Perspective

The physical setting of the world’s places has undoubtedly influenced the human setting, just as human activities have shaped the physical landscape. There are currently around 7.4 billion people in the world, but these billions of people are not uniformly distributed. People tend to cluster in areas that are warm and near water, and to avoid places that are cold and dry. There are three major population clusters in the world: East Asia, South Asia, and Europe.

Just as geographers can discuss “where” people are located, we can explore “why” population growth is occurring in particular areas. All of the 10 most populous cities in the world are located in countries traditionally categorized as “developing.” These countries typically have high rates of population growth. A population grows, quite simply, when more people are born than die. The birth rate refers to the total number of live births per 1,000 people in a given year. In 2012, the average global birth rate was 19.15 births per 1,000 people.

Subtracting the death rate from the birth rate results in a country’s rate of natural increase (RNI). For example, Madagascar has a birth rate of 37.89 per 1,000 and a death rate of 7.97 per 1,000. 37.89 minus 7.97 is 29.92 per 1,000. If you divide the result by 10, you’d get 2.992 per 100 or 2.992 percent. In essence, this means that Madagascar’s population is increasing at a rate of 2.992 percent per year. The natural increase rate does not include immigration. Some countries in Europe, in fact, have a negative natural increase rate, but their population continues to increase due to immigration.
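
The arithmetic is straightforward, as a minimal Python sketch using the Madagascar figures from the text shows:

```python
def rate_of_natural_increase(birth_rate, death_rate):
    """Return the RNI as a percentage per year, given birth and death
    rates expressed per 1,000 people."""
    return (birth_rate - death_rate) / 10

# Madagascar: 37.89 births and 7.97 deaths per 1,000 people per year.
print(round(rate_of_natural_increase(37.89, 7.97), 3))  # 2.992 (percent)
```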

The birth rate is directly affected by the total fertility rate (TFR), which is the average number of children born to a woman during her childbearing years. In developing countries, the total fertility rate is often 4 or more children, contributing to high population growth. In developed countries, on the other hand, the total fertility rate may be only 1 or 2 children, which can ultimately lead to population decline.

A number of factors influence the total fertility rate, but it is generally connected to a country’s overall level of development. As a country develops and industrializes, it generally becomes more urbanized. Children are no longer needed to assist with family farms, and urban areas might not have large enough homes for big families.

Women increasingly enter the workforce, which can delay childbearing and further restrict the number of children a family desires. Culturally, a shift occurs when industrialized societies no longer value large family sizes. As women’s education increases, women are able to take control of their reproductive rights. Contraceptive use becomes more widespread and socially acceptable.

This shift in population characteristics as a country industrializes can be represented by the demographic transition model (DTM). This model demonstrates the changes in birth rates, death rates, and population growth over time as a country develops. In stage one, during feudal Europe, for example, birth rates and death rates were very high. Populations were vulnerable to drought and disease, and thus population growth was minimal. No country remains in stage one today. In stage two, a decline in death rates leads to a rise in population. This decline in death rates occurred as a result of improved agricultural productivity and improvements in public health. Vaccines, for example, greatly reduced mortality from childhood diseases.

Stage two countries are primarily agricultural, and thus there is a cultural and historical preference for large families, so birth rates remain high. Most of Sub-Saharan Africa is in stage two. In stage three, urbanization and increasing access to contraceptives lead to a decline in the birth rate. As a country industrializes, women enter the workforce and seek higher education. Population growth begins to slow. Much of Middle and South America, as well as India, is in stage three.

In stage four, birth rates approach the death rates. Women have increased independence as well as educational and work opportunities, and families may choose to have a small number of children or none at all. Most of Europe, as well as China, are in stage four. Some have proposed a stage five of the demographic transition model. In some countries, the birth rate has fallen below the death rate as families choose to have only 1 child. In these cases, a population will decline unless there is significant immigration. Japan, for example, is in stage five and has a total fertility rate of 1.41. Although this is only a model, and each country passes through the stages of demographic transition at different rates, the generalized model of demographic transition holds true for most countries of the world.

As countries industrialize and become more developed, they shift from primarily rural settlements to urban ones. Urbanization refers to the increased proportion of people living in urban areas. As people migrate out of rural, agricultural areas, the proportion of people living in cities increases. As people living in cities have children, this further increases urbanization. For most of human history, we have been predominantly rural. By the middle of 2009, however, the number of people living in urban areas surpassed the number of people living in rural areas for the first time. In 2014, 54 percent of the world’s population lived in urban areas. This is expected to increase to 66 percent by 2050.

The number of megacities, cities with 10 million people or more, has also increased. In 1990, there were 10 megacities in the world. In 2014, there were 28 megacities. Tokyo-Yokohama is the largest metropolitan area in the world with over 38 million inhabitants.

Regional Thinking

The world can be divided into regions based on human and/or physical characteristics. Regions simply refer to spatial areas that share a common feature. There are three types of regions: formal, functional, and perceptual. Formal regions, sometimes called homogenous regions, have at least one characteristic in common. A map of religions in Europe, for example, groups countries based on the dominant religion, creating formal regions. This isn’t to say that everyone in Spain is Roman Catholic, but rather that most people in Spain are Roman Catholic. Other formal regions might include political affiliation, climate, agricultural zones, or ethnicity. Formal regions might also be established by governmental organizations, as is the case with state or provincial boundaries.

Functional regions, unlike formal regions, are not homogenous in the sense that they do not share a single cultural or physical characteristic. Rather, functional regions are united by a particular function, often economic. Functional regions are sometimes called nodal regions and have a nodal arrangement, with a core and surrounding nodes. A metropolitan area, for example, often includes a central city and its surrounding suburbs. We tend to think of the area as a “region” not because everyone is the same religion or ethnicity, or has the same political affiliation, but because it functions as a region. Los Angeles, for example, is the second-most populous city in the United States. However, the region of Los Angeles extends far beyond its official city limits. In fact, over 471,000 workers commute into Los Angeles County from the surrounding region every day. Los Angeles, as with all metropolitan areas, functions economically as a single region and is thus considered a functional region. Other examples of functional regions include church parishes, radio station listening areas, and newspaper subscription areas.

Perceptual regions are not as well-defined as formal or functional regions and are based on people’s perceptions. The southeastern region of the United States is often referred to as “the South,” but where the exact boundary of this region lies depends on individual perception. Some people might include all of the states that formed the Confederacy during the Civil War. Others might exclude Missouri or Oklahoma. Perceptual regions exist at a variety of scales. In your hometown, there might be a perceptual region called “the west side.” Internationally, regions like the Midlands in Britain or the Swiss Alps are considered perceptual. Similarly, “the Middle East” is a perceptual region. It is perceived to exist as a result of religious and ethnic characteristics, but people wouldn’t necessarily agree on which countries to include. Perceptual regions are real in the sense that our perceptions are real, but their boundaries are not uniformly agreed upon.

As geographers, we can divide the world into a number of different regions based on formal criteria and functional interaction. However, there is a matter of perception, as well. We might divide the world based on landmasses, since landmasses often share physical and cultural characteristics. Sometimes water connects people more than land, though. In the case of Europe, for example, the Mediterranean Sea has historically provided economic and cultural links among the surrounding countries, even though we consider them to span three separate continents. Creating regions can often be a question of “lumpers and splitters”: whom do you lump together, and whom do you split apart? Do you have fewer regions united by only a couple of characteristics, or more regions that share a great deal in common?

Most geographers take a balanced approach to “lumping and splitting,” identifying nine distinct world regions. These regions are largely perceptual, however. Where does “Middle” America end and “South” America begin? Why is Pakistan, a predominantly Muslim country, characterized as “South” Asia and not “Southwest” Asia? Why is Russia its own region? You might divide the world into entirely different regions.

While it might seem like there are clear boundaries between the world’s regions, in actuality, the places where two regions meet are zones of gradual transition. These transition zones are marked by gradual spatial change. Moscow, Russia, for example, is quite similar to other areas of Eastern Europe, though they are considered two different regions on the map. Likewise, were it not for the Rio Grande and a large boundary fence dividing the cities of El Paso, Texas, and Ciudad Juarez, Mexico, you might not realize that this metropolitan area stretches across two countries and world regions. Even within regions, country boundaries often mark spaces of gradual transition rather than a stark delineation between two completely different spaces. The boundary between Peru and Ecuador, for example, is quite relaxed as international boundaries go, and residents of the two countries can move freely across the boundary to the towns on either side.

Globalization

When we start to explore the spatial distribution of economic development, we find that there are stark differences between and within world regions. Some countries have a very high standard of living and high average incomes, while others have few resources and high levels of poverty. Politically, some countries have stable, open governments, while others have long-standing authoritarian regimes. Thus, world regional geography is, in many ways, a study of global inequality. But the geographic study of inequality is more than just asking where inequalities are present; it is also digging deeper and asking why those inequalities exist. How can we measure inequality? Generally, inequality refers to uneven distributions of wealth, which can actually be challenging to measure. By some accounts, the wealthiest one percent of people in the world have as much wealth as the bottom 99 percent. Wealth inequality is just one facet of global studies of inequality, however. There are also differences in income: around half of the world survives on less than $2 per day, and around one-fifth have less than $1 per day. There are also global differences in literacy, life expectancy, and health care. There are differences in the rights and economic opportunities for women compared to men. There are differences in the way resources are distributed and conserved.

Furthermore, these differences don’t exist in a bubble. The world is increasingly interconnected, a process known as globalization. This increased global integration is economic but also cultural. An economic downturn in one country can affect its trading partners half a world away. A Hollywood movie might be translated into dozens of different languages and distributed worldwide. Today, it is quite easy for a businesswoman in the United States to video chat with her factory manager in a less developed country. For many, the relative size of the world is shrinking as a result of advances in transportation and communications technology.

For others in the poorest, most debt-ridden countries, the world is not flat. As global poverty rates have decreased over the past few decades, the number of people living in poverty within Sub-Saharan Africa has increased. In addition, while global economic integration has increased, most monetary transactions still occur within rather than between countries. The core countries can take advantage of globalization, choosing from a variety of trading partners and suppliers of raw materials, but the same cannot always be said of those in the periphery.

Globalization has often led to cultural homogenization, as “Western” culture has increasingly become the global culture. American fast food chains can now be found in a majority of the world’s countries. British and American pop music plays on radio stations around the world. The Internet, in particular, has facilitated the rapid diffusion of cultural ideas and values. But how does globalization affect local cultures? Some worry that as global culture has become more homogenized, local differences are slowly being erased. Traditional music, clothing, and food preferences might be replaced by foreign cultural features, which can lead to conflict. There is thus a tension between globalization, the benefits of global connectivity, and local culture.

It is the uniqueness of the world’s regions, the particular combination of physical landscapes and human activities, that has captivated geographers from the earliest explorers to today’s researchers. And while it might simply be interesting to read about distant cultures and appreciate their uniqueness, geographers continue to dig deeper and ask why these differences exist. Geography matters. Even as we have become more culturally homogeneous and economically interconnected, there remain global differences in the geography of countries, and these differences can have profound effects. Geographic study helps us understand the relationship between the world’s communities, explain global differences and inequalities, and better address future challenges.

1.4 Map Interpretation

Geographic Coordinate Systems

The geographic coordinate system measures location from only two values, despite the fact that the locations are described for a three-dimensional surface. The two values used to define location are both measured relative to the polar axis of the Earth. The two measures used in the geographic coordinate system are called latitude and longitude.

Latitude is an angular measurement north or south of the equator relative to a point found at the center of the Earth. This central point is also located on the Earth’s rotational or polar axis. The equator is the starting point for the measurement of latitude and has a value of zero degrees. A line of latitude, or parallel, of 30° North has an angle that is 30° north of the plane represented by the equator. The maximum value that latitude can attain is 90° North or South. Lines of latitude run parallel to the equator and perpendicular to the Earth’s rotational axis.

Lines connecting points of the same latitude are called parallels because they run parallel to each other. The only parallel that is also a great circle is the equator. All other parallels are small circles. The following are the most important parallels:

  • Equator, 0 degrees
  • Tropic of Cancer, 23.5 degrees N
  • Tropic of Capricorn, 23.5 degrees S
  • Arctic Circle, 66.5 degrees N
  • Antarctic Circle, 66.5 degrees S
  • North Pole, 90 degrees N (infinitely small circle)
  • South Pole, 90 degrees S (infinitely small circle)

Longitude is the angular measurement east and west of the Prime Meridian. The position of the Prime Meridian was determined by international agreement to be in line with the location of the former astronomical observatory at Greenwich, England. Because longitude is measured around a circle, it is expressed in degrees, and the number of degrees in a circle is 360. The Prime Meridian has a value of zero degrees. A line of longitude, or meridian, of 45° West has an angle that is 45° west of the plane represented by the Prime Meridian. The maximum value that a meridian of longitude can have is 180°, the distance halfway around the circle from the Prime Meridian. Designations of west and east are used to distinguish where a location is found relative to the Prime Meridian. For example, all of the locations in North America have a longitude that is designated west. At 180 degrees from the Prime Meridian, in the Pacific Ocean, is the International Date Line, which determines where the new day begins in the world. The International Date Line is not a straight line; rather, it bends around national borders so that a country is not divided into two separate days.

Ultimately, when parallel and meridian lines are combined, the result is a geographic grid system that allows users to determine their exact location on the planet.
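
Coordinates are printed either as decimal degrees or as degrees, minutes, and seconds. Converting between the two forms is simple arithmetic, shown in the minimal Python sketch below (the White House latitude from earlier in the chapter is used as an illustrative value):

```python
def to_dms(decimal_degrees):
    """Convert decimal degrees to a (degrees, minutes, seconds) tuple.
    Note: this simple sketch carries the sign on the degrees value only."""
    sign = -1 if decimal_degrees < 0 else 1
    dd = abs(decimal_degrees)
    degrees = int(dd)
    minutes = int((dd - degrees) * 60)
    seconds = (dd - degrees - minutes / 60) * 3600
    return sign * degrees, minutes, round(seconds, 1)

def to_decimal(degrees, minutes, seconds):
    """Convert degrees, minutes, and seconds back to decimal degrees."""
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + minutes / 60 + seconds / 3600)

print(to_dms(38.8977))            # (38, 53, 51.7)
print(to_decimal(38, 53, 51.72))  # approximately 38.8977
```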

Great and Small Circles

Much of Earth’s grid system is based on the location of the North Pole, South Pole, and the Equator. The poles are the two points where Earth’s axis of rotation intersects the surface. The plane of the equator is an imaginary horizontal plane that cuts the earth into two equal halves. This brings up the topic of great and small circles. A great circle is any circle that divides the earth into two equal halves, and it is the largest circle that can be drawn on a sphere. The line connecting any two points along a great circle is also the shortest distance between those two points.

Examples of great circles include the equator, pairs of opposite meridians of longitude, the line that divides the earth into day and night (called the circle of illumination), and the plane of the ecliptic, which passes through the center of the earth and so also divides it into equal halves. Small circles are circles that cut the earth, but not into equal halves.

Time Zones

Before the late nineteenth century, timekeeping was primarily a local phenomenon. Each town would set their clocks according to the motions of the Sun. Noon was defined as the time when the Sun reached its maximum altitude above the horizon. Cities and towns would assign a clockmaker to calibrate a town clock to these solar motions. This town clock would then represent “official” time, and the citizens would set their watches and clocks accordingly.

The latter half of the nineteenth century was a time of increased movement of people. In the United States and Canada, large numbers of people were moving west, and settlements in these areas began expanding rapidly. To support these new settlements, railroads moved people and resources between the various cities and towns. However, because of the nature of how local time was kept, the railroads experienced significant problems in constructing timetables for the various stops. Timetables could only become more efficient if the towns and cities adopted some standard method of keeping time.

In 1878, Canadian Sir Sanford Fleming suggested a system of worldwide time zones that would simplify the keeping of time across the Earth. Fleming proposed that the globe should be divided into 24 time zones, every 15 degrees of longitude in width. Since the world rotates once every 24 hours on its axis and there are 360 degrees of longitude, each hour of Earth rotation represents 15 degrees of longitude.
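
Fleming’s scheme is simple division: 360 degrees of longitude divided by 24 hours gives 15 degrees per hour. The Python sketch below computes the idealized offset from longitude alone; the city coordinates are approximate, and real time zone boundaries bend around political borders:

```python
def nominal_utc_offset(longitude):
    """Idealized time zone offset in hours, at 15 degrees per hour.
    Real zones follow political boundaries, so this is only approximate."""
    return round(longitude / 15)

print(nominal_utc_offset(0))       # 0  (Greenwich)
print(nominal_utc_offset(-77.04))  # -5 (Washington, D.C.)
print(nominal_utc_offset(116.4))   # 8  (Beijing)
```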

Railroad companies in Canada and the United States began using Fleming’s time zones in 1883. In 1884, an International Prime Meridian Conference was held in Washington, D.C. to adopt a standardized method of timekeeping and to determine the location of the Prime Meridian. Conference members agreed that the longitude of Greenwich, England would become zero degrees longitude and established the 24 time zones relative to the Prime Meridian. It was also proposed that the measurement of time on the Earth would be made relative to the astronomical measurements at the Royal Observatory at Greenwich. This time standard was called Greenwich Mean Time (GMT).

Today, many nations operate on variations of the time zones suggested by Sir Fleming. In this system, time in the various zones is measured relative to the Coordinated Universal Time (UTC) standard at the Prime Meridian. Coordinated Universal Time became the standard legal reference of time all over the world in 1972. UTC is determined from atomic clocks that are coordinated by the International Bureau of Weights and Measures (BIPM) located in France. The numbers located at the bottom of the time zone map indicate how many hours each zone is earlier (negative sign) or later (positive sign) than the Coordinated Universal Time standard. Also, note that national boundaries and political matters influence the shape of the time zone boundaries. For example, China uses a single time zone (eight hours ahead of Coordinated Universal Time) instead of five different time zones.

Distance and Direction on Maps

Depicting the Earth’s three-dimensional surface on a two-dimensional map creates a variety of distortions that involve distance, area, and direction. It is possible to create maps that are somewhat equidistant. However, even these types of maps have some form of distance distortion. Equidistant maps can only control distortion along either lines of latitude or longitude. Distance is often correct on equidistant maps only in the direction of latitude.

On a map that has a large scale, 1:125,000 or larger, distance distortion is usually insignificant. An example of a large-scale map is a standard topographic map. On these maps, measuring straight-line distance is simple. Distance is first measured on the map using a ruler. This measurement is then converted into a real-world distance using the map’s scale. For example, if we measured a distance of 10 centimeters on a map that had a scale of 1:10,000, we would multiply 10 (distance) by 10,000 (scale). Thus, the actual distance in the real world would be 100,000 centimeters, or 1 kilometer.

Measuring distance along map features that are not straight is a little more difficult. One technique that can be employed for this task is to use several straight-line segments. The accuracy of this method is dependent on the number of straight-line segments used. Another method for measuring curvilinear map distances is to use a mechanical device called an opisometer. This device uses a small rotating wheel that records the distance traveled. The recorded distance is measured by this device either in centimeters or inches.

Like distance, direction is difficult to measure on maps because of the distortion produced by projection systems. However, this distortion is quite small on maps with scales larger than 1:125,000. Direction is usually measured relative to the location of the North or South Pole. Directions determined from these locations are said to be relative to True North or True South. The magnetic poles can also be used to measure direction. However, these points on the Earth are located in different spots from the geographic North and South Poles.

Mapping Our Changing World

Have you ever found driving directions and maps online, used a smartphone to ‘check in’ to your favorite restaurant, or entered a town name or ZIP code to retrieve the local weather forecast? Every time you and millions of other users perform these tasks, you are making use of Geographic Information Science (GIScience) and related spatial technologies. Many of these technologies, such as Global Positioning Systems (GPS) and in-vehicle navigation units, are very well known, and you can probably recall the last time you used them.

Other applications and services that are the products of GIScience are a little less obvious, but they are every bit as common. If you are connected to the Internet, you are making use of geospatial technologies right now. Every time your browser requests a web page from a Content Delivery Network (CDN), a geographic lookup occurs and the request is routed to the server closest to you. This happens so that the delay between your request to view the data and the data being sent to you is as short as possible.

GIScience and the related technologies are everywhere, and we use them every day. When it comes to information, “spatial is special.” Reliance on spatial attributes is what separates geographic information from other types of information. There are several distinguishing properties of geographic information. Understanding them, and their implications for the practice of geographic information science, is key to utilizing geographic data.

  • Geographic data represent spatial locations and non-spatial attributes measured at certain times.
  • Geographic space is continuous.
  • Geographic space is nearly spherical.
  • Geographic data tend to be spatially dependent.

Spatial attributes tell us where things are, or where things were at the time the data were collected. By merely including spatial attributes, geographic data allow us to ask a plethora of geographic questions. Another essential characteristic of geographic space is that it is “continuous.” Although the Earth has valleys, canyons, caves, oceans, and more, there are no places on Earth without a location, and connections exist from one place to another. Outside of science fiction, there are no tears in the fabric of space-time.

Modern technology can measure location very precisely, making it possible to generate incredibly detailed depictions of geographic feature location (e.g., the coastline of the eastern U.S.). It is often possible to measure so precisely that we collect more location data than we can store, and much more than is useful for practical applications. How much information is useful to store or display in a map depends on the map scale (how much of the world we represent within a fixed display, such as the size of your computer screen) as well as on the map’s purpose.

In addition to being continuous, geographic data also tend to be spatially dependent. More simply, “everything is related to everything else, but near things are more related than distant things” (Waldo Tobler’s first law of geography), which leads to an expectation that things that are near one another tend to be more alike than things that are far apart. How alike things are in relation to their proximity to other things can be measured by a statistical calculation known as spatial autocorrelation. Without this fundamental property, geographic information science as we know it today would not be possible.
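
The text does not name a specific statistic, but a common index of spatial autocorrelation is Moran’s I. The following Python sketch computes it for a toy dataset; the values and the neighbor-weights matrix are made up purely for illustration.

    import numpy as np

    def morans_i(values, weights):
        # Moran's I: near +1 = similar values cluster, near 0 = random,
        # near -1 = dissimilar values are neighbors (dispersed).
        x = np.asarray(values, dtype=float)
        w = np.asarray(weights, dtype=float)
        z = x - x.mean()                     # deviations from the mean
        num = (w * np.outer(z, z)).sum()     # weighted cross-products of neighbors
        return (x.size / w.sum()) * num / (z ** 2).sum()

    # Four locations in a row; adjacent locations get weight 1.
    w = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])
    print(morans_i([1.0, 2.0, 5.0, 6.0], w))   # positive: similar values cluster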

Geographic data come in many types, from many different sources, and are captured using many techniques; they are collected, sold, and distributed by a wide array of public and private entities. In general, we can divide the collection of geographic data into two main types: directly collected data and remotely sensed data. Directly collected data are generated at the source of the phenomena being measured. Examples of directly collected data include measurements such as temperature readings at specific weather stations, elevations recorded by visiting the location of interest, or the position of a grizzly bear equipped with a GPS-enabled collar. Also included here are data derived from surveys (e.g., the census) or observation (e.g., the Audubon Christmas Bird Count).

Remotely sensed data are measured from remote distances without any direct contact with the phenomena or need to visit the locations of interest. Satellite images, sonar readings, and radar are all forms of remotely sensed data.

Maps are both the raw material and the product of geographic information systems (GIS). All maps represent features and characteristics of locations, and that representation depends upon data relevant at a particular time. All maps are also selective; they do not show us everything about the place depicted; they show only the particular features and characteristics that their maker decided to include. Maps are often categorized into reference or thematic maps based upon the producer’s decision about what to include and the expectations about how the map will be used. The prototypical reference map depicts the location of “things” that are usually visible in the world; examples include road maps and topographic maps depicting terrain.

Thematic maps, in contrast, typically depict “themes.” They generally are more abstract, involving more processing and interpretation of data, and often depict concepts that are not directly visible; examples include maps of income, health, climate, or ecological diversity. There is no clear-cut line between reference and thematic maps, but the categories are useful to recognize because they relate directly to how the maps are intended to be used and to decisions that their cartographers have made in the process of shrinking and abstracting aspects of the world to generate the map. Different types of thematic maps include:

Choropleth – a thematic map that uses tones or colors to represent spatial data as average values per unit area (a brief classing sketch follows this list)

Proportional symbol – uses symbols of different sizes to represent data associated with different areas or locations within the map

Isopleth – also known as contour maps, isopleth maps depict smooth, continuous phenomena such as precipitation or elevation

Dot – uses a dot symbol to show the presence of a feature or phenomenon – dot maps rely on a visual scatter to show a spatial pattern

Dasymetric – an alternative to a choropleth map but instead of mapping the data so that the region appears uniform, ancillary information is used to model the internal distribution of the data
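
As noted in the choropleth entry above, the core of that map type is grouping per-area values into a small number of tone classes. Here is a minimal Python sketch of that classing step; the data values and the choice of quantile breaks are invented for illustration.

    import numpy as np

    # Hypothetical per-county values to be shaded on a choropleth map.
    values = np.array([12.0, 45.0, 7.5, 30.0, 22.0, 60.0])

    # Split the counties into three equal-count (quantile) color classes.
    breaks = np.quantile(values, [1/3, 2/3])
    classes = np.digitize(values, breaks)   # 0 = lightest tone, 2 = darkest
    print(classes)                          # [0 2 0 1 1 2]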

1.5 Geospatial Technology

Suppose that you have launched a new business that manufactures solar-powered lawnmowers. You are planning a mail campaign to bring this revolutionary new product to the attention of prospective buyers. However, since it is a small business, you cannot afford to sponsor coast-to-coast television commercials or to send brochures by mail to more than 100 million U.S. households. Instead, you plan to target the most likely customers – those who are environmentally conscious, have higher than average family incomes, and live in areas where there is enough water and sunshine to support lawns and solar power.

Fortunately, lots of data are available to help you define your mailing list. Household incomes are routinely reported to banks and other financial institutions when families apply for mortgages, loans, and credit cards. Personal tastes related to issues like the environment are reflected in behaviors such as magazine subscriptions and credit card purchases. Market research companies collect such data and transform it into information by creating “lifestyle segments” – categories of households that have similar incomes and tastes. Your solar lawnmower company can purchase lifestyle segment information by 5-digit ZIP code, or even by ZIP+4 codes, which designate individual households.
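
To make the idea concrete, here is a small Python sketch of selecting target ZIP codes from purchased lifestyle-segment records. The records, field names, and thresholds are all invented for illustration; real market-research data would be far richer.

    # Hypothetical lifestyle-segment records keyed by ZIP code (made-up values).
    segments = [
        {"zip": "84101", "median_income": 72000, "env_conscious": True,  "sun_index": 0.8},
        {"zip": "99501", "median_income": 81000, "env_conscious": True,  "sun_index": 0.3},
        {"zip": "85001", "median_income": 55000, "env_conscious": False, "sun_index": 0.9},
    ]

    # Keep only ZIPs that match the solar-mower customer profile.
    targets = [s["zip"] for s in segments
               if s["env_conscious"] and s["median_income"] > 60000 and s["sun_index"] > 0.5]
    print(targets)   # ['84101']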

Geographic Information Systems

It is astonishing how valuable the information gleaned from the millions upon millions of transactions recorded every day can be. The fact that lifestyle information products are often delivered by geographic areas, such as ZIP codes, speaks to the appeal of geographic information systems (GIS). The scale of these data and their potential applications are increasing continually with the advent of new mechanisms for sharing information and making purchases that are linked to our GPS-enabled smartphones. A geographic information system (GIS) is a computer-based tool used to help people transform geographic data into geographic information.

GIS arose out of the need to perform spatial queries on geographic data (questions addressed to a database such as wanting to know a distance or the location where two objects intersect). A spatial query requires knowledge of locations as well as attributes about that location. For example, an environmental analyst might want to know which public drinking water sources are located within one mile of a known toxic chemical spill. Alternatively, a planner might be called upon to identify property parcels located in areas that are subject to flooding.
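
In code, the analyst’s question above reduces to a distance filter over locations and attributes. The Python sketch below uses the standard haversine great-circle formula; the spill and well coordinates are made up, and a real GIS would use indexed geometry rather than this brute-force loop.

    from math import radians, sin, cos, asin, sqrt

    def haversine_miles(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in miles.
        rlat1, rlon1, rlat2, rlon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((rlat2 - rlat1) / 2) ** 2
             + cos(rlat1) * cos(rlat2) * sin((rlon2 - rlon1) / 2) ** 2)
        return 2 * 3958.8 * asin(sqrt(a))   # Earth's mean radius ~3,958.8 miles

    spill = (40.7128, -74.0060)             # hypothetical spill location
    wells = {"Well A": (40.7150, -74.0080), # hypothetical drinking water sources
             "Well B": (40.8000, -74.1000)}

    # Spatial query: which wells lie within one mile of the spill?
    nearby = {name: loc for name, loc in wells.items()
              if haversine_miles(*spill, *loc) <= 1.0}
    print(nearby)                           # only Well A qualifies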

Numerous tools exist to help users perform database management operations. Microsoft Excel and Access allow users to retrieve specific records, manipulate them, and create new content. ESRI’s ArcGIS allows users to organize and manipulate files, but it can also map the geographic database files in order to reveal interesting spatial patterns and processes in graphic form.

Global Positioning Systems

The use of location-based technologies has reached unprecedented levels. Location-enabled devices, giving us access to a wide variety of location-based services (LBSs), permeate our households and can be found in almost every mall, office, and vehicle. From digital cameras and mobile phones to in-vehicle navigation units and microchips in our pets, millions of people and countless devices have access to the Global Positioning System (GPS). Most of us have some basic idea of what GPS is, but just what is it, exactly, that we are all connected to?

The Global Positioning System (GPS) is a satellite-based navigation system made up of a network of 24 satellites placed into orbit by the U.S. Department of Defense. GPS was originally intended for military applications, but in the 1980s, the government made the system available for civilian use. GPS works in any weather conditions, anywhere in the world, 24 hours a day.

In a nutshell, GPS works like this: satellites circle the Earth twice a day in a very precise orbit and transmit a signal to Earth. GPS receivers (including those in smartphones and watches) take this information and use trilateration to calculate the user’s exact location. With distance measurements from at least four satellites, the receiver can determine the user’s position and display it on the unit’s electronic map.
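
Real GPS solves the trilateration problem in three dimensions using signal travel times, but the geometry is easiest to see in two dimensions. The following Python sketch is our own toy example, not receiver firmware: it finds the one point consistent with measured distances to three known anchor points.

    import numpy as np

    def trilaterate_2d(anchors, distances):
        # anchors: three known (x, y) points; distances: measured ranges to each.
        (x1, y1), (x2, y2), (x3, y3) = anchors
        d1, d2, d3 = distances
        # Subtracting the circle equations pairwise yields a linear system in (x, y).
        A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                      [2 * (x3 - x1), 2 * (y3 - y1)]])
        b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                      d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
        return np.linalg.solve(A, b)

    # Distances 5, sqrt(65), sqrt(45) from these anchors pin down the point (3, 4).
    print(trilaterate_2d([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5]))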

Using GPS to determine your location is not very useful if you do not know about the landscape around you. For instance, your GPS could tell you that you are in the mall, but without a map, you may not know how to get to the door. There are many stories of people whose maps were out of date, and they followed their GPS into a river or a lake. Remote sensing allows mapmakers to collect physical data from a distance without visiting or interacting directly with the location.

Remote Sensing

Remote sensing, broadly speaking, is the collection of information about an object or area from a distance, without direct contact. The distance between the object and observer can be considerable, for example, imaging from the Hubble telescope, or rather small, as is the case in the use of microscopes for examining bacterial growth. In geography, the term remote sensing takes on a specific connotation, dealing with space-borne and aerial imaging systems used to remotely sense electromagnetic radiation reflected and emitted from Earth’s surface.

Remote sensing systems work in much the same way as a desktop scanner connected to a personal computer. A desktop scanner creates a digital image of a document by recording, pixel by pixel, the intensity of light reflected from the document. Color scanners may have three light sources and three sets of sensors, one each for the blue, green, and red wavelengths of visible light. Remotely sensed data, like the images produced by a desktop scanner, consist of reflectance values arrayed in rows and columns that make up raster grids.

Remote sensing is used to solve a host of problems across a wide variety of disciplines. For example, Landsat imagery is used to monitor plant health and foliar changes. In contrast, imagery such as that produced by IKONOS is used for geospatial intelligence applications (yes, that means spying) and monitoring urban infrastructure. Other satellites, such as AVHRR (Advanced Very High Resolution Radiometer), are used to monitor the effects of global warming on vegetation patterns on a global scale. The MODIS (Moderate Resolution Imaging Spectroradiometer) Terra and Aqua sensors are designed to monitor atmospheric and oceanic composition in addition to the typical terrestrial applications.
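
The plant-health monitoring mentioned above is often done with a band ratio such as the Normalized Difference Vegetation Index (NDVI), computed pixel by pixel from the raster grids described earlier. The tiny arrays in this Python sketch are invented reflectance values; real Landsat scenes contain millions of pixels.

    import numpy as np

    # Two tiny reflectance rasters (made-up values): red and near-infrared bands.
    red = np.array([[0.10, 0.12], [0.30, 0.25]])
    nir = np.array([[0.60, 0.55], [0.35, 0.30]])

    # NDVI = (NIR - Red) / (NIR + Red); values near 1 suggest healthy vegetation.
    ndvi = (nir - red) / (nir + red)
    print(ndvi)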