4.1: The Role of Socialization
4.1.1: The Role of Socialization
Socialization prepares people for social life by teaching them a group’s shared norms, values, beliefs, and behaviors.
Learning Objective
Describe the three goals of socialization and why each is important
Key Points
- Socialization prepares people to participate in a social group by teaching them its norms and expectations.
- Socialization has three primary goals: teaching impulse control and developing a conscience, preparing people to perform certain social roles, and cultivating shared sources of meaning and value.
- Socialization is culturally specific, but this does not mean certain cultures are better or worse than others.
Key Terms
- Jeffrey J. Arnett
-
In his 1995 paper, “Broad and Narrow Socialization: The Family in the Context of a Cultural Theory,” sociologist Jeffrey J. Arnett outlined his interpretation of the three primary goals of socialization.
- socialization
-
The process of learning one’s culture and how to live within it.
- norm
-
A rule that is enforced by members of a community.
Example
- The belief that killing is immoral is an American norm, learned through socialization. As children grow up, they are exposed to social cues that foster this norm, and they begin to form a conscience composed of this and other norms.
The role of socialization is to acquaint individuals with the norms of a given social group or society. It prepares individuals to participate in a group by illustrating the expectations of that group.
Socialization is very important for children, who begin the process at home with family, and continue it at school. They are taught what will be expected of them as they mature and become full members of society. Socialization is also important for adults who join new social groups. Broadly defined, it is the process of transferring norms, values, beliefs, and behaviors to future group members.
Socialization in School
Schools, such as this kindergarten in Afghanistan, serve as primary sites of socialization.
Three Goals of Socialization
In his 1995 paper, “Broad and Narrow Socialization: The Family in the Context of a Cultural Theory,” sociologist Jeffrey J. Arnett outlined his interpretation of the three primary goals of socialization. First, socialization teaches impulse control and helps individuals develop a conscience. This first goal is accomplished naturally: as people grow up within a particular society, they pick up on the expectations of those around them and internalize these expectations to moderate their impulses and develop a conscience. Second, socialization teaches individuals how to prepare for and perform certain social roles—occupational roles, gender roles, and the roles of institutions such as marriage and parenthood. Third, socialization cultivates shared sources of meaning and value. Through socialization, people learn to identify what is important and valued within a particular culture.
The term “socialization” refers to a general process, but socialization always takes place in specific contexts. Socialization is culturally specific: people in different cultures are socialized differently, to hold different beliefs and values, and to behave in different ways. Sociologists try to understand socialization, but they do not rank different schemes of socialization as good or bad; they study practices of socialization to determine why people behave the way that they do.
4.1.2: Nature vs. Nurture: A False Debate
Is nature (an individual’s innate qualities) or nurture (personal experience) more important in determining physical and behavioral traits?
Learning Objective
Discuss both sides of the nature versus nurture debate, understanding the implications of each
Key Points
- Nature refers to innate qualities like human nature or genetics.
- Nurture refers to care given to children by parents or, more broadly, to environmental influences such as media and marketing.
- The nature versus nurture debate raises philosophical questions about determinism and free will.
Key Terms
- determinism
-
The doctrine that all actions are determined by the current state and immutable laws of the universe, with no possibility of choice.
- nature
-
The innate characteristics of a thing; what something will, by its own constitution, tend to be or do, as distinct from what might be expected or intended.
- nurture
-
The environmental influences that contribute to the development of an individual; see also nature.
Example
- Recently, the nature versus nurture debate has entered the realm of law and criminal defense. In some cases, lawyers for violent offenders have argued that criminal activity is caused by nature, that is, by genes rather than by rational decision-making processes. Such arguments may suggest that the accused are less culpable for their crimes. A “genetic predisposition to violence” could be a mitigating factor in crime if the science behind genetic determinants is found conclusive. “Nurture”-based explanations, such as a disadvantaged background, have in some cases already been accepted as mitigating factors.
The nature versus nurture debate rages over whether an individual’s innate qualities or personal experiences are more important in determining physical and behavioral traits.
In the social and political sciences, the nature versus nurture debate may be compared with the structure versus agency debate, a similar discussion over whether social structure or individual agency (choice or free will) is more important for determining individual and social outcomes.
Historically, the “nurture” in the nature versus nurture debate has referred to the care parents give to children. Today, however, the concept of nurture has expanded to refer to any environmental factor, which may arise from prenatal, parental, extended family, or peer experiences, or even from media, marketing, and socioeconomic status. Environmental factors can begin to shape development even before birth: a substantial amount of individual variation may be traced to environmental influences on prenatal development.
The “nature” in the nature versus nurture debate generally refers to innate qualities. In historical terms, nature might refer to human nature or the soul. In modern scientific terms, it may refer to genetic makeup and biological traits. For example, researchers have long studied twins to determine the influence of biology on personality traits. These studies have revealed that twins raised separately still share many common personality traits, lending credibility to the nature side of the debate. However, sample sizes are usually small, so the results must be generalized with caution.
Identical Twins
Because of their identical genetic makeup, twins are used in many studies to assess the nature versus nurture debate.
The nature versus nurture debate conjures deep philosophical questions about free will and determinism. The “nature” side may be criticized for implying that we behave in ways in which we are naturally inclined, rather than in ways we choose. Similarly, the “nurture” side may be criticized for implying that we behave in ways determined by our environment, not ourselves.
Of course, sociologists point out that our environment is, at least in part, a social creation.
4.1.3: Sociobiology
Sociobiology examines and explains social behavior based on biological evolution.
Learning Objective
Discuss the concept of sociobiology in relation to natural selection and Charles Darwin, as well as genetics and instinctive behaviors
Key Points
- Sociobiologists believe that human behavior, like nonhuman animal behavior, can be partly explained as the outcome of natural selection.
- Sociobiologists are interested in instinctive, or intuitive, behavior and in explaining the similarities, rather than the differences, between cultures.
- Many critics draw an intellectual link between sociobiology and biological determinism, the belief that most human differences can be traced to specific genes rather than differences in culture or social environments.
Key Terms
- natural selection
-
A process by which heritable traits conferring survival and reproductive advantage to individuals, or related individuals, tend to be passed on to succeeding generations and become more frequent in a population, whereas other less favorable traits tend to become eliminated.
- biological determinism
-
The hypothesis that biological factors such as an organism’s genes (as opposed to social or environmental factors) determine psychological and behavioral traits.
- sociobiology
-
The science that applies the principles of evolutionary biology to the study of social behavior in both humans and animals.
Example
- A sociobiological explanation of humans running might argue that human beings are good at running because our bodies have evolved to run bipedally. In the course of human evolution, we had to run away from predators. Those who had the genetic makeup to be effective runners survived and passed along their genes, while those lacking the genetic predisposition for running were killed by predators.
Sociobiology is a field of scientific study which is based on the assumption that social behavior has resulted from evolution. It attempts to explain and examine social behavior within that context. Often considered a branch of biology and sociology, it also draws from ethology, anthropology, evolution, zoology, archaeology, population genetics, and other disciplines. Within the study of human societies, sociobiology is very closely allied to the fields of Darwinian anthropology, human behavioral ecology, and evolutionary psychology. While the term “sociobiology” can be traced to the 1940s, the concept didn’t gain major recognition until 1975, with the publication of Edward O. Wilson’s book, Sociobiology: The New Synthesis.
Edward O. Wilson
E. O. Wilson is a central figure in the history of sociobiology.
Sociobiologists believe that human behavior, like nonhuman animal behavior, can be partly explained as the outcome of natural selection. They contend that in order to fully understand behavior, it must be analyzed in terms of evolutionary considerations. Natural selection is fundamental to evolutionary theory. Variants of hereditary traits, which increase an organism’s ability to survive and reproduce, are more likely to be passed on to subsequent generations. Thus, inherited behavioral mechanisms that allowed an organism a greater chance of surviving and reproducing in the past are more likely to survive in present organisms.
Following this evolutionary logic, sociobiologists are interested in how behavior can be explained as a result of selective pressures in the history of a species. Thus, they are often interested in instinctive, or intuitive, behavior and in explaining the similarities, rather than the differences, between cultures. Sociobiologists reason that common behaviors likely evolved over time because they made individuals who exhibited those behaviors more likely to survive and reproduce.
Many critics draw an intellectual link between sociobiology and biological determinism, the belief that most human differences can be traced to specific genes rather than differences in culture or social environments. Critics also see parallels between sociobiology and biological determinism as a philosophy underlying the social Darwinian and eugenics movements of the early 20th century as well as controversies in the history of intelligence testing.
4.1.4: Deprivation and Development
Social deprivation, or prevention from culturally normal interaction with society, affects mental health and impairs child development.
Learning Objective
Explain why social deprivation is problematic for a person (especially children) and the issues it can lead to
Key Points
- As they develop, humans go through several critical periods, or windows of time during which they need to experience particular environmental stimuli in order to develop properly.
- Feral children provide an example of the effects of severe social deprivation during critical developmental periods.
- Attachment theory argues that infants must develop stable, on-going relationships with at least one adult caregiver in order to form a basis for successful development.
- The term “maternal deprivation” is a catchphrase summarizing the early work of psychiatrist and psychoanalyst John Bowlby on the effects of separating infants and young children from their mothers.
- In United States law, the “tender years” doctrine was long applied in custody cases, leading courts to preferentially award custody of infants and toddlers to mothers.
Key Terms
- Attachment Theory
-
Attachment theory describes the dynamics of long-term relationships between humans. Its most important tenet is that an infant needs to develop a relationship with at least one primary caregiver for social and emotional development to occur normally.
- feral children
-
A feral child is a human child who has lived isolated from human contact from a very young age, and has no experience of human care, loving or social behavior, and, crucially, of human language.
- Social deprivation
-
The prevention of, or reduction in, culturally normal interaction between an individual and the rest of society. In instances of social deprivation, particularly for children, social experiences tend to be less varied and development may be delayed or hindered.
Example
- Social deprivation theory has had implications for family law. “Tender years” laws in the United States are based on social attachment theories and social deprivation theories, especially the theory of maternal deprivation, developed by psychiatrist and psychoanalyst John Bowlby. Maternal deprivation theory explains the effects of separating infants and young children from their mothers. The idea that separation from the female caregiver has profound effects is one with considerable resonance outside the conventional study of child development. In United States law, the “tender years” doctrine was long applied in child custody cases, and led courts to preferentially award custody of infants and toddlers to mothers.
Humans are social beings, and social interaction is essential to normal human development. Social deprivation occurs when an individual is deprived of culturally normal interaction with the rest of society. Certain groups of people are more likely to experience social deprivation. For example, social deprivation often occurs along with a broad network of correlated factors that all contribute to social exclusion; these factors include mental illness, poverty, poor education, and low socioeconomic status.
By observing and interviewing victims of social deprivation, researchers have come to understand how social deprivation is linked to human development and mental illness. As they develop, humans pass through critical periods, or windows of time during which they need to experience particular environmental stimuli in order to develop properly. When individuals experience social deprivation, they miss those critical periods. Thus, social deprivation may delay or hinder development, especially for children.
Feral children provide an example of the effects of severe social deprivation during critical developmental periods. Feral children are children who grow up without social interaction. In some cases, they may have been abandoned early in childhood and grown up in the wilderness. In other cases, they may have been abused by parents who kept them isolated from other people. In several recorded cases, feral children failed to develop language skills, had only limited social understanding, and could not be rehabilitated.
Attachment theory may explain why social deprivation has such dire effects for children. According to attachment theory, an infant needs to develop a relationship with at least one primary caregiver for social and emotional development to occur normally.
Maternal Deprivation
The idea that separation from the female caregiver has profound effects is one with considerable resonance outside the conventional study of child development.
4.1.5: Isolation and Development
Social isolation refers to a complete or near-complete lack of contact with society, which can affect all aspects of a person’s life.
Learning Objective
Interpret why social isolation can be problematic for a person in society and the importance of social connections
Key Points
- True social isolation is not the same as loneliness. It is often a chronic condition that persists for years and affects all aspects of a person’s existence.
- Emotional isolation is a term used to describe a state of isolation where the individual is emotionally isolated, but may have a well functioning social network.
- Social networks promote good health by providing direct support, encouraging healthy behaviors, and linking people with diffuse social networks that facilitate access to a wide range of resources supportive of health.
- Sociologists debate whether new technologies, such as the Internet and mobile phones, exacerbate social isolation or help overcome it.
- A widely-held hypothesis is that social ties link people with diffuse social networks that facilitate access to a wide range of resources supportive of health.
Key Terms
- social isolation
-
Social isolation refers to a complete or near-complete lack of contact with society. It is usually involuntary, making it distinct from isolating tendencies or actions taken by an individual who is seeking to distance himself from society.
- emotional isolation
-
Emotional isolation is a term used to describe a state of isolation where the individual is emotionally isolated, but may have a well functioning social network.
- social network
-
The web of a person’s social, family, and business contacts, who provide material and social resources and opportunities.
Example
- Socially isolated individuals are more likely to experience negative health outcomes, such as failing to seek treatment for conditions before they become life-threatening. Individuals with vibrant social networks are more likely to encounter friends, family, or acquaintances who encourage them to visit the doctor to get a persistent cough checked out, whereas an isolated individual may allow the cough to progress until they experience serious difficulty breathing. Isolated individuals are also less likely to seek or get treatment for drug addiction or mental health problems, which they might be unable to recognize on their own. Older adults are especially prone to social isolation as their families and friends pass away.
Social isolation occurs when members of a social species (like humans) have complete or near-complete lack of contact with society. It is usually involuntary rather than chosen, which distinguishes it both from loneliness rooted in a temporary lack of contact with other humans and from isolating actions an individual might consciously undertake. A related phenomenon, emotional isolation, may occur when individuals are emotionally isolated even though they have well-functioning social networks.
While loneliness is often fleeting, true social isolation tends to be a chronic condition that lasts for years or decades, affects all aspects of a person’s existence, and can have serious consequences for health and well-being. Socially isolated people have no one to turn to in personal emergencies, no one to confide in during a crisis, and no one against whom to measure their own behavior or from whom to learn etiquette or socially acceptable behavior. Social isolation can be problematic at any age, although it has different effects for different age groups (social isolation for children, for example, may have different effects than social isolation for adults, although both age groups may experience it).
Social isolation can be dangerous because the vitality of individuals’ social relationships affects their health. Social contacts influence individuals’ behavior by encouraging health-promoting behaviors, such as adequate sleep, a healthy diet, exercise, and compliance with medical regimens, and by discouraging health-damaging behaviors, such as smoking, excessive eating, alcohol consumption, or drug abuse. Socially isolated individuals lack these beneficial influences, as well as a social support network that can provide help and comfort in times of stress and distress. Social relationships can also connect people with diffuse social networks that facilitate access to a wide range of resources supportive of health, such as medical referral networks, contact with others dealing with similar problems, or opportunities to acquire needed resources via jobs, shopping, or financial institutions. These effects differ from receiving direct support from a friend; instead, they arise from the links that close relationships provide to more distant connections.
Sociologists debate whether new technologies, such as the Internet and mobile phones, exacerbate social isolation or could help overcome it. With the advent of online social networking communities, people have increasing options for engaging in social activities that do not require real-world physical interaction. Chat rooms, message boards, and other types of communities now meet social needs for those who would rather stay home alone yet still develop communities of online friends.
Social Isolation
Older adults are particularly susceptible to social isolation.
4.1.6: Feral Children
A feral child is a human child who has lived isolated from human contact from a very young age.
Learning Objective
Analyze the differences between the fictional and real-life depictions of feral children
Key Points
- Legendary and fictional feral children are often depicted as growing up with relatively normal human intelligence and skills and an innate sense of culture or civilization.
- In reality, feral children lack the basic social skills that are normally learned in the process of enculturation. They almost always have impaired language ability and mental function. These impairments highlight the role of socialization in human development.
- The impaired ability to learn language after having been isolated for so many years is often attributed to the existence of a critical period for language learning, and is taken as evidence in favor of the critical period hypothesis.
Key Terms
- critical period
-
A critical period refers to the window of time during which a human needs to experience a particular environmental stimulus in order for proper development to occur.
- enculturation
-
The process by which an individual adopts the behaviour patterns of the culture in which he or she is immersed.
- feral child
-
A child who is raised without human contact as a result of being abandoned, allegedly often raised by wild animals.
Example
- Peter Pan is a well-known example of a fictional feral child who is raised without adult supervision or assistance. Whereas Peter Pan’s upbringing is glorified, all real cases involve some form of serious child abuse.
A feral child is a human child who has lived isolated from human contact from a very young age, and has no (or little) experience of human care, loving or social behavior, and, crucially, of human language. Some feral children have been confined in isolation by other people, usually their own parents. In some cases, this child abandonment was due to the parents rejecting a child’s severe intellectual or physical impairment. Feral children may have experienced severe child abuse or trauma before being abandoned or running away.
Depictions of Feral Children
Myths, legends, and fictional stories have depicted feral children reared by wild animals such as wolves and bears. Legendary and fictional feral children are often depicted as growing up with relatively normal human intelligence and skills and an innate sense of culture or civilization, coupled with a healthy dose of survival instincts. Their integration into human society is also made to seem relatively easy. These mythical children are often depicted as having superior strength, intelligence, and morals compared to “normal” humans. The implication is that because of their upbringing they represent humanity in a pure and uncorrupted state, similar to the noble savage.
Feral Children in Reality
In reality, feral children lack the basic social skills that are normally learned in the process of enculturation. For example, they may be unable to learn to use a toilet, have trouble learning to walk upright, and display a complete lack of interest in the human activity around them. They often seem mentally impaired and have almost insurmountable trouble learning human language. The impaired ability to learn language after having been isolated for so many years is often attributed to the existence of a critical period for language learning at an early age, and is taken as evidence in favor of the critical period hypothesis. It is theorized that if language is not developed, at least to a degree, during this critical period, a child can never reach his or her full language potential. The fact that feral children lack these abilities pinpoints the role of socialization in human development.
Examples of Feral Children
Famous examples of feral children include Ibn Tufail’s Hayy, Ibn al-Nafis’ Kamil, Rudyard Kipling’s Mowgli, Edgar Rice Burroughs’ Tarzan, J. M. Barrie’s Peter Pan, and the legends of Atalanta, Enkidu and Romulus and Remus. Tragically, feral children are not just fictional. Several cases have been discovered in which caretakers brutally isolated their children and in doing so prevented normal development.
A real-life example of a feral child is Danielle Crockett, known as “The Girl in the Window”. The officer who found Danielle reported that it was “the worst case of child neglect he had seen in 27 years”. Doctors and therapists diagnosed Danielle with environmental autism, yet she was still adopted by Bernie and Diane Lierow. At the time, Danielle could not speak, respond to others, or eat solid food. Today, Danielle lives in Tennessee with her parents and has made remarkable progress. She communicates through the PECS system and loves to swim and ride horses.
Peter Pan
Peter Pan is an example of a fictional feral child.
4.1.7: Institutionalized Children
Institutionalized children may develop institutional syndrome, which refers to deficits or disabilities in social and life skills.
Learning Objective
Discuss both the processes of institutionalization and deinstitutionalization, as they relate to issues juveniles may have
Key Points
- The term “institutionalization” can be used both in regard to the process of committing an individual to a mental hospital or prison, and to institutional syndrome.
- Juvenile wards are sections of psychiatric hospitals or psychiatric wards set aside for children and adolescents with mental illness.
- Deinstitutionalization is the process of replacing long-stay psychiatric hospitals with less isolated community mental health services for those diagnosed with a mental disorder.
Key Terms
- Institutional syndrome
-
In clinical and abnormal psychology, institutional syndrome refers to deficits or disabilities in social and life skills, which develop after a person has spent a long period living in mental hospitals, prisons, or other remote institutions.
- mental illness
-
Mental illness is a broad generic label for a category of illnesses that may include affective or emotional instability, behavioral dysregulation, and/or cognitive dysfunction or impairment.
- deinstitutionalization
-
The process of replacing long-stay psychiatric hospitals with less isolated community mental health services for those diagnosed with a mental disorder or developmental disability.
In clinical and abnormal psychology, institutional syndrome refers to deficits or disabilities in social and life skills, which develop after a person has spent a long period living in mental hospitals, prisons, or other remote institutions. In other words, individuals in institutions may be deprived of independence and of responsibility, to the point that once they return to “outside life” they are often unable to manage many of its demands. It has also been argued that institutionalized individuals become psychologically more prone to mental health problems.
The term institutionalization can be used both in regard to the process of committing an individual to a mental hospital or prison and in regard to institutional syndrome; thus, a person being “institutionalized” may mean either that he or she has been placed in an institution, or that he or she is suffering the psychological effects of having been in an institution for an extended period of time.
Juvenile wards are sections of psychiatric hospitals or psychiatric wards set aside for children and/or adolescents with mental illness. However, there are a number of institutions specializing only in the treatment of juveniles, particularly when dealing with drug abuse, self-harm, eating disorders, anxiety, depression or other mental illness.
Psychiatric Wards
Many state hospitals have mental health branches, such as the Northern Michigan Asylum.
Deinstitutionalization is the process of replacing long-stay psychiatric hospitals with less isolated community mental health services for those diagnosed with a mental disorder or developmental disability. Deinstitutionalization can have multiple definitions; the first focuses on reducing the population size of mental institutions. This can be accomplished by releasing individuals from institutions, shortening the length of stays, and reducing both admissions and readmissions. The second definition refers to reforming mental hospitals’ institutional processes so as to reduce or eliminate reinforcement of dependency, hopelessness, learned helplessness, and other maladaptive behaviors.
4.2: The Self and Socialization
4.2.1: Dimensions of Human Development
The dimensions of human development are divided into separate, consecutive stages of life, from prenatal development through old age.
Learning Objective
Analyze the differences between the various stages of human life – prenatal, toddler, early and late childhood, adolescence, early and middle adulthood and old age
Key Points
- The stages of human development are: prenatal development, toddler, early childhood, late childhood, adolescence, early adulthood, middle adulthood, and old age.
- Prenatal development is the process in which a human embryo gestates during pregnancy, from fertilization until birth. From birth until the first year, the child is referred to as an infant. Babies between the ages of 1 and 2 are called “toddlers.”
- In the phase of early childhood, children attend preschool, broaden their social horizons and become more engaged with those around them.
- In late childhood, intelligence is demonstrated through logical and systematic manipulation of symbols related to concrete objects.
- Adolescence is the period of life between the onset of puberty and the full commitment to an adult social role.
- In early adulthood, the person must learn how to form intimate relationships, both in friendship and love.
- Middle adulthood generally refers to the period between ages 40 and 60. During this period, middle-aged adults experience a conflict between generativity and stagnation.
- The final stage is old age, which generally begins around age 60.
Key Terms
- diurnal
-
Happening or occurring during daylight, or primarily active during that time.
- Prenatal development
-
Prenatal development is the process in which a human embryo gestates during pregnancy, from fertilization until birth.
The dimensions of human development are divided into separate but consecutive stages of life: prenatal development, toddlerhood, early childhood, late childhood, adolescence, early adulthood, middle adulthood, and old age.
Prenatal development is the process during which a human embryo gestates during pregnancy, from fertilization until birth. The terms prenatal development, fetal development, and embryology are often used interchangeably. The embryonic period in humans begins at fertilization and is followed by the fetal period, which lasts until birth. From birth until the first year, the child is referred to as an infant. The majority of a newborn infant’s time is spent in sleep. At first, this sleep is evenly spread throughout the day and night, but after a couple of months, infants generally become diurnal.
Human Embryogenesis
The first few weeks of embryogenesis in humans begin with the fertilizing of the egg and end with the closing of the neural tube.
Babies between the ages of 1 and 2 are called “toddlers.” In this stage, intelligence is demonstrated through the use of symbols, language use matures, and memory and imagination develop. In the phase of early childhood, children attend preschool, broaden their social horizons, and become more engaged with those around them. In late childhood, intelligence is demonstrated through logical and systematic manipulation of symbols related to concrete objects. Children go through the transition from the world at home to that of school and peers. If children can discover pleasure in intellectual stimulation, in being productive, and in seeking success, they will develop a sense of competence.
Adolescence is the period of life between the onset of puberty and the full commitment to an adult social role. In early adulthood, the person must learn how to form intimate relationships, both in friendship and love. The development of this skill relies on the resolution of other stages. It may be hard to establish intimacy if one has not developed trust or a sense of identity. If this skill is not learned, the alternative is alienation, isolation, a fear of commitment, and an inability to depend on others.
Middle adulthood generally refers to the period between ages 40 and 60. During this period, middle-aged adults experience a conflict between generativity and stagnation. They may either feel a sense of contributing to the next generation and their community or a sense of purposelessness. The final stage is old age, which generally begins around age 60. During old age, people frequently experience a conflict between integrity and despair.
4.2.2: Sociological Theories of the Self
Sociological theories of the self attempt to explain how social processes such as socialization influence the development of the self.
Learning Objective
Interpret Mead’s theory of self in terms of the differences between the “I” and the “me”
Key Points
- One of the most important sociological approaches to the self was developed by American sociologist George Herbert Mead. Mead conceptualizes the mind as the individual importation of the social process.
- This process is characterized by Mead as the “I” and the “me.” The “me” is the social self and the “I” is the response to the “me.” The “I” is the individual’s impulses. The “I” is self as subject; the “me” is self as object.
- For Mead, existence in a community comes before individual consciousness. First one must participate in the different social positions within society and only subsequently can one use that experience to take the perspective of others and thus become self-conscious.
- Primary Socialization occurs when a child learns the attitudes, values, and actions appropriate to individuals as members of a particular culture.
- Secondary socialization refers to the process of learning the appropriate behavior as a member of a smaller group within the larger society.
- Group socialization is the theory that an individual’s peer groups, rather than parental figures, influence his or her personality and behavior in adulthood.
- Organizational socialization is the process whereby an employee learns the knowledge and skills necessary to assume his or her organizational role.
- In the social sciences, institutions are the structures and mechanisms of social order and cooperation governing the behavior of a set of individuals within a given human collectivity. Institutions include the family, religion, peer group, economic systems, legal systems, penal systems, language and the media.
Key Terms
- community
-
A group sharing a common understanding and often the same language, manners, tradition and law.
- socialization
-
The process of learning one’s culture and how to live within it.
- generalized other
-
the general notion that a person has regarding the common expectations of others within his or her social group
- The self
-
The self is the individual person, from his or her own perspective. Self-awareness is the capacity for introspection and the ability to reconcile oneself as an individual separate from the environment and other individuals.
Example
- The processes of socialization are most easily seen in children. As they learn more about the world around them, they begin to reflect the social norms to which they are exposed. This is the quintessential example of socialization, though the same process applies to any newcomer to a given society.
Sociological theories of the self attempt to explain how social processes such as socialization influence the development of the self. One of the most important sociological approaches to the self was developed by American sociologist George Herbert Mead. Mead conceptualizes the mind as the individual importation of the social process. Mead presented the self and the mind in terms of a social process. As gestures are taken in by the individual organism, the individual organism also takes in the collective attitudes of others, in the form of gestures, and reacts accordingly with other organized attitudes.
George Herbert Mead
George Herbert Mead (1863–1931) was an American philosopher, sociologist, and psychologist, primarily affiliated with the University of Chicago, where he was one of several distinguished pragmatists. He is regarded as one of the founders of social psychology and the American sociological tradition in general.
This process is characterized by Mead as the “I” and the “me.” The “me” is the social self and the “I” is the response to the “me.” In other words, the “I” is the response of an individual to the attitudes of others, while the “me” is the organized set of attitudes of others which an individual assumes. The “me” is the accumulated understanding of the “generalized other,” i.e., how one thinks one’s group perceives oneself. The “I” is the individual’s impulses. The “I” is self as subject; the “me” is self as object. The “I” is the knower, the “me” is the known. The mind, or stream of thought, is the self-reflective movement of the interaction between the “I” and the “me.” These dynamics go beyond selfhood in a narrow sense and form the basis of a theory of human cognition. For Mead, the thinking process is the internalized dialogue between the “I” and the “me.”
Understood as a combination of the “I” and the “me,” Mead’s self proves to be noticeably entwined within a sociological existence. For Mead, existence in a community comes before individual consciousness. First one must participate in the different social positions within society and only subsequently can one use that experience to take the perspective of others and become self-conscious.
4.2.3: Psychological Approaches to the Self
The psychology of self is the study of either the cognitive or affective representation of one’s identity.
Learning Objective
Discuss the development of a person’s identity in relation to both the Kohut and Jungian self
Key Points
- The earliest formulation of the self in modern psychology derived from the distinction between the self as I, the subjective knower, and the self as Me, the object that is known.
- Heinz Kohut, an American psychologist, theorized that the self was bipolar, composed of two systems of narcissistic perfection, one of which contained ambitions and the other of which contained ideals.
- In Jungian theory, derived from the psychologist C.G. Jung, the Self is one of several archetypes. It signifies the coherent whole, unifying both the conscious and unconscious mind of a person.
- Social constructivists claim that timely and sensitive intervention by adults when a child is on the edge of learning a new task could help children learn new tasks.
- Attachment theory focuses on open, intimate, emotionally meaningful relationships.
- The nativism versus empiricism debate focuses on the relationship between innateness and environmental influence in regard to any particular aspect of development.
- A nativist account of development would argue that the processes in question are innate, that is, they are specified by the organism’s genes. An empiricist perspective would argue that those processes are acquired in interaction with the environment.
Key Terms
- archetype
-
according to the Swiss psychologist Carl Jung, a universal pattern of thought, present in an individual’s unconscious, inherited from the past collective experience of humanity
- affective
-
relating to, resulting from, or influenced by the emotions
- cognitive
-
the part of mental functions that deals with logic, as opposed to affective functions which deal with emotions
Example
- One can see how developmental psychology has become de rigueur in changes in parenting tactics. Since the publication of these studies, parents and caregivers pay even more attention to affection and to encouraging hands-on play. Encouraging a child to complete an art project and hugging her upon its completion is an example of how these theories are implemented in everyday life.
Psychology of the Self
The psychology of the self is the study of the cognitive or affective representation of one’s identity. In modern psychology, the earliest formulation of the self derived from the distinction between the self as “I,” the subjective knower, and the self as “me,” the object that is known. Put differently, suppose an individual wanted to think about their “self” as an analytic object. They might ask themselves, “What kind of person am I?” That person is still, in that moment, thinking from some perspective, which is also considered the “self.” Thus, in this case, the “self” is both what is doing the thinking and, at the same time, the object being thought about. It is from this dualism that the concept of the self initially emerged in modern psychology. Current psychological thought suggests that the self plays an integral part in human motivation, cognition, affect, and social identity.
The Kohut Self
Heinz Kohut, an American psychologist, theorized a bipolar self composed of two systems of narcissistic perfection, one of which contained ambitions and the other of which contained ideals. Kohut called the pole of ambitions the narcissistic self (later called the grandiose self). He called the pole of ideals the idealized parental imago. According to Kohut, the two poles of the self represented natural progressions in the psychic life of infants and toddlers.
The Jungian Self
In Jungian theory, derived from the psychologist C.G. Jung, the Self is one of several archetypes. It signifies the coherent whole, unifying both the conscious and unconscious mind of a person. The Self, according to Jung, is the end product of individuation, which is defined as the process of integrating one’s personality. For Jung, the Self could be symbolized by the circle (especially when divided into four quadrants), the square, or the mandala. He also believed that the Self could be symbolically personified in the archetypes of the Wise Old Woman and the Wise Old Man.
Carl Gustav Jung
According to Jung, the Self is one of several archetypes.
In contrast to earlier theorists, Jung believed that an individual’s personality had a center. While he considered the ego to be the center of an individual’s conscious identity, he considered the Self to be the center of an individual’s total personality. This total personality included within it the ego, consciousness, and the unconscious mind. To Jung, the Self is both the whole and the center. While Jung perceived the ego to be a self-contained, off-centered, smaller circle contained within the whole, he believed that the Self was the greater circle. Jung also believed that, in addition to being the center of the psyche, the Self was autonomous, meaning that it exists outside of time and space. He also believed that the Self was the source of dreams, and that it would appear in dreams as an authority figure able either to perceive the future or to guide an individual’s present.
4.3: Theories of Socialization
4.3.1: Theories of Socialization
Socialization is the means by which human infants begin to acquire the skills necessary to perform as functioning members of their society.
Learning Objective
Discuss the different types and theories of socialization
Key Points
- Group socialization is the theory that an individual’s peer groups, rather than parental figures, influence his or her personality and behavior in adulthood.
- Gender socialization refers to the learning of behavior and attitudes considered appropriate for a given sex.
- Cultural socialization refers to parenting practices that teach children about their racial history or heritage and, sometimes, is referred to as pride development.
- Sigmund Freud proposed that the human psyche could be divided into three parts: Id, ego, and super-ego.
- Piaget’s theory of cognitive development is a comprehensive theory about the nature and development of human intelligence.
- Positive Adult Development is one of the four major forms of adult developmental study that can be identified. The other three forms are directionless change, stasis, and decline.
Key Term
- socialization
-
The process of learning one’s culture and how to live within it.
Example
- Primary socialization occurs when a child learns the attitudes, values, and actions appropriate to individuals as members of a particular culture. For example, if a child saw his or her mother expressing a discriminatory opinion about a minority group, that child may think this behavior is acceptable and could continue to hold this opinion about minority groups.
“Socialization” is a term used by sociologists, social psychologists, anthropologists, political scientists, and educationalists to refer to the lifelong process of inheriting and disseminating norms, customs, and ideologies, providing an individual with the skills and habits necessary for participating within his or her own society. Socialization is thus “the means by which social and cultural continuity are attained.”
Socialization is the means by which human infants begin to acquire the skills necessary to perform as a functioning member of their society and is the most influential learning process one can experience. Unlike other living species, whose behavior is biologically set, humans need social experiences to learn their culture and to survive. Although cultural variability manifests in the actions, customs, and behaviors of whole social groups, the most fundamental expression of culture is found at the individual level. This expression can only occur after an individual has been socialized by his or her parents, family, extended family, and extended social networks.
The looking-glass self is a social psychological concept, created by Charles Horton Cooley in 1902, stating that a person’s self grows out of society’s interpersonal interactions and the perceptions of others. In other words, people shape their self-concepts according to how they believe others perceive them, and in doing so tend to reinforce and confirm those perceptions.
George Herbert Mead developed a theory of social behaviorism to explain how social experience develops an individual’s personality. Mead’s central concept is the self: the part of an individual’s personality composed of self-awareness and self-image. Mead claimed that the self is not there at birth, rather, it is developed with social experience.
Sigmund Freud was an Austrian neurologist who founded the discipline of psychoanalysis, a clinical method for treating psychopathology through dialogue between a patient and a psychoanalyst. In his later work, Freud proposed that the human psyche could be divided into three parts: Id, ego, and super-ego. The id is the completely unconscious, impulsive, child-like portion of the psyche that operates on the “pleasure principle” and is the source of basic impulses and drives; it seeks immediate pleasure and gratification. The ego acts according to the reality principle (i.e., it seeks to satisfy the id’s drives in realistic ways that will benefit in the long term rather than bring grief). Finally, the super-ego aims for perfection. It comprises the organized part of the personality structure, mainly but not entirely unconscious, that includes the individual’s ego ideals, spiritual goals, and the psychic agency that criticizes and prohibits his or her drives, fantasies, feelings, and actions.
Different Forms of Socialization
Group socialization is the theory that an individual’s peer groups, rather than parental figures, influence his or her personality and behavior in adulthood. Adolescents spend more time with peers than with parents. Therefore, peer groups have stronger correlations with personality development than parental figures do. For example, twin brothers whose genetic makeup is identical will differ in personality because they have different groups of friends, not necessarily because their parents raised them differently.
Gender socialization: Henslin (1999) contends that “an important part of socialization is the learning of culturally defined gender roles” (p. 76). Gender socialization refers to the learning of behavior and attitudes considered appropriate for a given sex. Boys learn to be boys, and girls learn to be girls. This “learning” happens by way of many different agents of socialization. The family is certainly important in reinforcing gender roles, but so are one’s friends, school, work, and the mass media. Gender roles are reinforced through “countless subtle and not so subtle ways,” said Henslin (1999, p. 76).
Cultural socialization refers to parenting practices that teach children about their racial history or heritage and is sometimes referred to as “pride development.” Preparation for bias refers to parenting practices focused on preparing children to be aware of, and cope with, discrimination. Promotion of mistrust refers to the parenting practices of socializing children to be wary of people from other races. Egalitarianism refers to socializing children with the belief that all people are equal and should be treated with a common humanity.
4.3.2: Cooley
In 1902, Charles Horton Cooley created the concept of the looking-glass self, which explored how identity is formed.
Learning Objective
Discuss Cooley’s idea of the “looking-glass self” and how people use socialization to create a personal identity and develop empathy for others
Key Points
- The looking-glass self is a social psychological concept stating that a person’s self grows out of society’s interpersonal interactions and the perceptions of others.
- There are three components of the looking-glass self: We imagine how we appear to others, we imagine the judgment of that appearance, and we develop our self (identity) through the judgments of others.
- George Herbert Mead described the self as “taking the role of the other,” the premise upon which the self is actualized. Through interaction with others, we begin to develop an identity about who we are, as well as empathy for others.
Key Terms
- George Herbert Mead
-
(1863–1931) An American philosopher, sociologist, and psychologist, primarily affiliated with the University of Chicago, where he was one of several distinguished pragmatists.
- Looking-Glass self
-
The looking-glass self is a social psychological concept, created by Charles Horton Cooley in 1902, stating that a person’s self grows out of society’s interpersonal interactions and the perceptions of others.
- Charles Horton Cooley
-
Charles Horton Cooley (August 17, 1864-May 8, 1929) was an American sociologist and the son of Thomas M. Cooley. He studied and went on to teach economics and sociology at the University of Michigan, and he was a founding member and the eighth president of the American Sociological Association.
Example
- An example of the looking-glass self concept is computer technology. Using computer technology, people can create an avatar, a customized symbol that represents the computer user. For example, in the virtual world Second Life, the computer user can create a humanlike avatar that reflects the user in regard to race, age, physical makeup, status, and the like. By selecting certain physical characteristics or symbols, the avatar reflects how the creator seeks to be perceived in the virtual world and how the symbols used in the creation of the avatar influence others’ actions toward the computer user.
The looking-glass self is a social psychological concept created by Charles Horton Cooley in 1902. It states that a person’s self grows out of society’s interpersonal interactions and the perceptions of others. In other words, people shape their identities according to how they believe others perceive them, and in doing so tend to reinforce and confirm those perceptions.
There are three main components of the looking-glass self:
- First, we imagine how we must appear to others.
- Second, we imagine the judgment of that appearance.
- Finally, we develop our self through the judgments of others.
In hypothesizing the framework for the looking-glass self, Cooley said, “the mind is mental” because “the human mind is social.” In other words, the mind’s mental ability is a direct result of human social interaction. As children, humans begin to define themselves within the context of their socialization. The child learns that the symbol of his or her crying will elicit a response from his or her parents, not only when they are in need of necessities, such as food, but also as a symbol to receive their attention. George Herbert Mead described the self as “taking the role of the other,” the premise upon which the self is actualized. Through interaction with others, we begin to develop an identity about who we are, as well as empathy for others.
An example of the looking-glass self concept is computer technology. Using computer technology, people can create an avatar, a customized symbol that represents the computer user. For example, in the virtual world Second Life, the computer user can create a human-like avatar that reflects the user in regard to race, age, physical makeup, status, and the like. By selecting certain physical characteristics or symbols, the avatar reflects how the creator seeks to be perceived in the virtual world and how the symbols used in the creation of the avatar influence others’ actions toward the computer user.
4.3.3: Mead
For Mead, the self arises out of the social act of communication, which is the basis for socialization.
Learning Objective
Discuss Mead’s theory of social psychology in terms of two concepts – pragmatism and social behaviorism
Key Points
- George Herbert Mead was an American philosopher, sociologist, and psychologist and one of several distinguished pragmatists.
- The two most important roots of Mead’s work are the philosophy of pragmatism and social behaviorism.
- Pragmatism is a wide-ranging philosophical position that states that people define the social and physical “objects” they encounter in the world according to their use for them.
- One of his most influential ideas was the emergence of mind and self from the communication process between organisms, discussed in the book, Mind, Self and Society, also known as social behaviorism.
Key Terms
- symbolic interactionism
-
Symbolic interactionism is the study of the patterns of communication, interpretation, and adjustment between individuals.
- social behaviorism
-
Discussed in the book, Mind, Self and Society, social behaviorism refers to the emergence of mind and self from the communication process between organisms.
- pragmatism
-
The theory that problems should be met with practical solutions rather than ideological ones; a concentration on facts rather than emotions or ideals.
Example
- In Pragmatism, nothing practical or useful is held to be necessarily true, nor is anything which helps to survive merely in the short term. For example, to believe my cheating spouse is faithful may help me feel better now, but it is certainly not useful from a more long-term perspective because it doesn’t accord with the facts (and is therefore not true).
George Herbert Mead was an American philosopher, sociologist, and psychologist, primarily affiliated with the University of Chicago, where he was one of several distinguished pragmatists. He is regarded as one of the founders of social psychology and the American sociological tradition in general.
The two most important roots of Mead’s work, and of symbolic interactionism in general, are the philosophy of pragmatism and social behaviorism. Pragmatism is a wide-ranging philosophical position from which several aspects of Mead’s influences can be identified. There are four main tenets of pragmatism: First, to pragmatists, true reality does not exist “out there” in the real world; it “is actively created as we act in and toward the world.” Second, people remember and base their knowledge of the world on what has been useful to them and are likely to alter what no longer “works.” Third, people define the social and physical “objects” they encounter in the world according to their use for them. Lastly, if we want to understand actors, we must base that understanding on what people actually do. In pragmatism, nothing practical or useful is held to be necessarily true, nor is anything that merely helps one survive in the short term. For example, believing that my cheating spouse is faithful may help me feel better now, but it is certainly not useful from a long-term perspective because it does not accord with the facts (and is therefore not true).
Mead was a very important figure in twentieth-century social philosophy. One of his most influential ideas was the emergence of mind and self from the communication process between organisms, discussed in the book Mind, Self and Society, an approach also known as social behaviorism. For Mead, mind arises out of the social act of communication. Mead’s concept of the social act is relevant not only to his theory of mind, but also to all facets of his social philosophy. His theory of “mind, self, and society” is, in effect, a philosophy of the act from the standpoint of a social process involving the interaction of many individuals, just as his theory of knowledge and value is a philosophy of the act from the standpoint of the experiencing individual in interaction with an environment.
Mead is a major American philosopher by virtue of being, along with John Dewey, Charles Peirce, and William James, one of the founders of pragmatism. He also made significant contributions to the philosophies of nature, science, and history, to philosophical anthropology, and to process philosophy. Dewey and Alfred North Whitehead considered Mead a thinker of the first rank. He is a classic example of a social theorist whose work does not fit easily within conventional disciplinary boundaries.
George Herbert Mead
George Herbert Mead (1863–1931) was an American philosopher, sociologist, and psychologist, primarily affiliated with the University of Chicago, where he was one of several distinguished pragmatists. He is regarded as one of the founders of social psychology and the American sociological tradition in general.
4.3.4: Freud
According to Freud, human behavior, experience, and cognition are largely determined by unconscious drives and events in early childhood.
Learning Objective
Discuss Freud’s “id”, “ego” and “super-ego” and his six basic principles of psychoanalysis and how psychoanalysis is used today as a treatment for a variety of psychological disorders
Key Points
- Psychoanalysis is a clinical method for treating psychopathology through dialogue between a patient and a psychoanalyst.
- The specifics of the analyst’s interventions typically include confronting and clarifying the patient’s pathological defenses, wishes, and guilt.
- Freud named his new theory the Oedipus complex after the famous Greek tragedy Oedipus Rex by Sophocles. The Oedipus conflict was described as a state of psychosexual development and awareness.
- The id is the completely unconscious, impulsive, child-like portion of the psyche that operates on the “pleasure principle” and is the source of basic impulses and drives.
- The ego acts according to the reality principle (i.e., it seeks to please the id’s drive in realistic ways that will benefit in the long term rather than bringing grief).
- The super-ego aims for perfection. It comprises the organized part of the personality structure that is mainly, but not entirely, unconscious.
Key Terms
- Oedipus complex
-
In Freudian theory, the complex of emotions aroused in a child by an unconscious sexual desire for the parent of the opposite sex.
- the unconscious
-
For Freud, the unconscious refers to the mental processes of which individuals make themselves unaware.
Example
- The most common problems treatable with psychoanalysis include: phobias, conversions, compulsions, obsessions, anxiety attacks, depressions, sexual dysfunctions, a wide variety of relationship problems (such as dating and marital strife), and a wide variety of character problems (painful shyness, meanness, obnoxiousness, workaholism, hyperseductiveness, hyperemotionality, hyperfastidiousness).
Sigmund Freud was an Austrian neurologist who founded the discipline of psychoanalysis. Interested in philosophy as a student, Freud later decided to become a neurological researcher, studying cerebral palsy, aphasia, and microscopic neuroanatomy. Freud went on to develop theories about the unconscious mind and the mechanism of repression, and established the field of verbal psychotherapy by creating psychoanalysis, a clinical method for treating psychopathology through dialogue between a patient and a psychoanalyst. The most common problems treatable with psychoanalysis include phobias, conversions, compulsions, obsessions, anxiety attacks, depressions, sexual dysfunctions, a wide variety of relationship problems (such as dating and marital strife), and a wide variety of character problems (painful shyness, meanness, obnoxiousness, workaholism, hyperseductiveness, hyperemotionality, and hyperfastidiousness).
The Basic Tenets of Psychoanalysis
The basic tenets of psychoanalysis include the following:
- First, human behavior, experience, and cognition are largely determined by irrational drives.
- Those drives are largely unconscious.
- Attempts to bring those drives into awareness meet psychological resistance in the form of defense mechanisms.
- Besides the inherited constitution of personality, one’s development is determined by events in early childhood.
- Conflicts between one’s conscious view of reality and unconscious (repressed) material can result in mental disturbances, such as neurosis, neurotic traits, anxiety, and depression.
- Liberation from the effects of unconscious material is achieved by bringing this material into consciousness.
Psychoanalysis as Treatment
Freudian psychoanalysis refers to a specific type of treatment in which the “analysand” (the analytic patient) verbalizes thoughts, including free associations, fantasies, and dreams, from which the analyst infers the unconscious conflicts causing the patient’s symptoms and character problems, and interprets them for the patient to create insight for the resolution of those problems. The specifics of the analyst’s interventions typically include confronting and clarifying the patient’s pathological defenses, wishes, and guilt. Through the analysis of conflicts, including those contributing to resistance and those involving transference onto the analyst of distorted reactions, psychoanalytic treatment can hypothesize how patients unconsciously act as their own worst enemies: how unconscious, symbolic reactions that have been stimulated by experience cause symptoms.
The Id, the Ego, and the Super-Ego
Freud hoped to prove that his model was universally valid and thus turned to ancient mythology and contemporary ethnography for comparative material. Freud named his new theory the Oedipus complex after the famous Greek tragedy Oedipus Rex by Sophocles. The Oedipus conflict was described as a state of psychosexual development and awareness. In his later work, Freud proposed that the human psyche could be divided into three parts: id, ego, and super-ego. The id is the completely unconscious, impulsive, child-like portion of the psyche that operates on the “pleasure principle” and is the source of basic impulses and drives; it seeks immediate pleasure and gratification. The ego acts according to the reality principle (i.e., it seeks to satisfy the id’s drives in realistic ways that will benefit in the long term rather than bring grief). Finally, the super-ego aims for perfection. It comprises the organized part of the personality structure, mainly but not entirely unconscious, that includes the individual’s ego ideals, spiritual goals, and the psychic agency that criticizes and prohibits his or her drives, fantasies, feelings, and actions.
4.3.5: Piaget
Piaget’s theory of cognitive development is a comprehensive theory about the nature and development of human intelligence.
Learning Objective
Analyze the differences between accommodation and assimilation, in relation to Piaget’s stages
Key Points
- Jean Piaget was a French-speaking Swiss developmental psychologist and philosopher known for his epistemological studies with children. His theory of cognitive development and epistemological view are together called “genetic epistemology,” the study of the origins of knowledge.
- Piaget argued that all people undergo a series of stages and transformations. Transformations refer to all manners of changes that a thing or person can experience, while states refer to the conditions or the appearances in which things or persons can be found between transformations.
- Piaget identified four stages of cognitive development: sensorimotor, pre-operational, concrete operational, and formal operational. Through these stages, children progress in their thinking and logical processes.
- Piaget’s theory of cognitive development is a comprehensive theory about the nature and development of human intelligence that explains how individuals perceive and adapt to new information through the processes of assimilation and accommodation.
- Assimilation is the process of taking one’s environment and new information and fitting it into pre-existing cognitive schemas. Accommodation is the process of taking one’s environment and new information, and altering one’s pre-existing schemas in order to fit in the new information.
- Object permanence is the understanding that objects continue to exist even when they cannot be seen, heard, or touched.
- The concrete operational stage is the third of four stages of cognitive development in Piaget’s theory.
- The final stage is known as the formal operational stage (adolescence and into adulthood): intelligence is demonstrated through the logical use of symbols related to abstract concepts.
Key Terms
- accommodation
-
Accommodation, unlike assimilation, is the process of taking one’s environment and new information, and altering one’s pre-existing schemas in order to fit in the new information.
- object permanence
-
The understanding (typically developed during early infancy) that an object still exists even when it disappears from sight, or other senses.
- genetic epistemology
-
Genetic epistemology is a study of the origins of knowledge. The discipline was established by Jean Piaget.
Jean Piaget was a French-speaking Swiss developmental psychologist and philosopher known for his epistemological studies with children. His theory of cognitive development and epistemological view are together called “genetic epistemology.” He believed that the epistemological questions of his time could be better addressed by looking at their genetic components. This led to his experiments with children and adolescents, in which he explored the thinking and logical processes used by children of different ages.
Piaget’s theory of cognitive development is a comprehensive theory about the nature and development of human intelligence. Piaget believed that reality is a dynamic system of continuous change and, as such, is defined in reference to the two conditions that define dynamic systems. Specifically, he argued that reality involves transformations and states. Transformations refer to all manners of changes that a thing or person can undergo. States refer to the conditions or the appearances in which things or persons can be found between transformations.
Piaget explains the growth of characteristics and types of thinking as the result of four stages of development. The stages are as follows:
- The sensorimotor stage is the first of the four stages of cognitive development, which “extends from birth to the acquisition of language.” In this stage, infants construct an understanding of the world by coordinating experiences with physical actions; in other words, infants gain knowledge of the world from the physical actions they perform. The development of object permanence is one of the most important accomplishments of this stage.
- The pre-operational stage is the second stage of cognitive development. It begins around the end of the second year. During this stage, the child learns to use and to represent objects by images, words, and drawings. The child is able to form stable concepts, engage in mental reasoning, and hold magical beliefs.
- The third stage is called the “concrete operational stage” and occurs approximately between the ages of 7 and 11 years. In this stage, children develop the appropriate use of logic: they are able to make rational judgments about concrete phenomena and to systematically manipulate symbols related to concrete objects, though they do not yet reason about abstract concepts.
- The final stage is known as the “formal operational stage” (adolescence and into adulthood). Intelligence is demonstrated through the logical use of symbols related to abstract concepts. At this point, the person is capable of hypothetical and deductive reasoning.
When studying the field of education, Piaget identified two processes: assimilation and accommodation. Assimilation describes how humans perceive and adapt to new information: it is the process of taking one’s environment and new information and fitting it into pre-existing cognitive schemas. Accommodation, unlike assimilation, is the process of taking one’s environment and new information and altering one’s pre-existing schemas in order to fit in the new information.
Jean Piaget
Jean Piaget was a French-speaking Swiss developmental psychologist and philosopher known for his epistemological studies with children.
4.3.6: Levinson
Daniel J. Levinson was one of the founders of the field of positive adult development.
Learning Objective
Summarize Daniel Levinson’s theory of positive adult development and how it influenced changes in the perception of development during adulthood
Key Points
- As a theory, positive adult development asserts that development continues after adolescence, long into adulthood.
- In positive adult development research, scientists question not only whether development ceases after adolescence, but also a notion, popularized by many gerontologists, that a decline occurs after late adolescence.
- Positive adult developmental processes are divided into at least six areas of study: hierarchical complexity, knowledge, experience, expertise, wisdom, and spirituality.
Key Terms
- positive adult development
-
Positive adult development is one of the four major forms of adult developmental study that can be identified.
- stasis
-
inactivity; a freezing, or state of motionlessness
- decline
-
downward movement, fall
Daniel Levinson
Daniel J. Levinson, an American psychologist, was one of the founders of the field of positive adult development. He was born in New York City on May 28, 1920, and completed his dissertation at the University of California, Berkeley, in 1947. In this dissertation, he attempted to develop a way of measuring ethnocentrism. In 1950, he moved to Harvard University. From 1966 to 1990, he was a professor of psychology at the Yale University School of Medicine.
Levinson’s two most important books were The Seasons of a Man’s Life and The Seasons of a Woman’s Life, which continue to be highly influential works. His multidisciplinary approach is reflected in his work on the life structure theory of adult development.
Positive Adult Development
Positive adult development is one of the four major forms of adult developmental study. The other three are directionless change, stasis, and decline. Positive adult developmental processes are divided into the following six areas of study:
- hierarchical complexity
- knowledge
- experience
- expertise
- wisdom
- spirituality
Research in this field questions not only whether development ceases after adolescence, but also the notion, popularized by many gerontologists, that a decline occurs after late adolescence. Research shows that positive development does still occur during adulthood. Recent studies indicate that such development is useful in predicting things such as an individual’s health, life satisfaction, and ability to contribute to society.
Now that there is scientific evidence that individuals continue to develop as adults, researchers have begun investigating how to foster such development. Rather than simply describing, as a phenomenon, the fact that adults continue to develop, researchers are interested in aiding and guiding that development. For educators of adults in formal settings, this has been a priority in many ways already. More recently, researchers have begun to experiment with hypotheses about fostering positive adult development. These methods are used in organizational and educational settings. Some use developmentally designed, structured public discourse to address complex public issues.
Positive Adult Development
Research in Positive Adult Development questions not only whether development ceases after adolescence, but also the notion, popularized by many gerontologists, that a decline occurs after late adolescence.
4.4: Learning Personality, Morality, and Emotions
4.4.1: Sociology of Emotion
The sociology of emotions applies sociological theorems and techniques to the study of human emotions.
Learning Objective
Examine the origins of the sociology of emotions through the work of Marx, Weber, and Simmel, and its development by T. David Kemper, Arlie Hochschild, Randall Collins, and David R. Heise
Key Points
- Emotions impact society on both the micro level (everyday social interactions) and the macro level (social institutions, discourses, and ideologies).
- Ethnomethodology revealed emotional commitments to everyday norms through purposeful breaching of the norms.
- We try to regulate our emotions to fit in with the norms of the situation, based on many, and sometimes conflicting, demands upon us.
Key Terms
- ethnomethodology
-
An academic discipline that attempts to understand the social orders people use to make sense of the world through analyzing their accounts and descriptions of their day-to-day experiences.
- The sociology of emotions
-
The sociology of emotion applies sociological theorems and techniques to the study of human emotions.
Examples
- An example of individuals’ emotions impacting social interactions and institutions is how a board of directors will fail to be productive if the members are angry with one another.
- An example of the operations of a guilt society is how Americans expect a husband to feel guilty if he forgets his wife’s birthday. This expectation is the product of a guilt society.
- According to doctrine, many members of the Catholic Church believe that people should feel shame for masturbating. This expectation is the product of a shame society.
The sociology of emotions applies sociological theorems and techniques to the study of human emotions. As sociology emerged, primarily as a reaction to the negative effects of modernity, many normative theories dealt in some sense with “emotion” without forming a part of any specific subdiscipline: Marx described capitalism as detrimental to personal “species-being,” Simmel wrote of the deindividualizing tendencies of “the metropolis,” and Weber’s work dealt with the rationalizing effect of modernity in general.
Emotions operate on both micro and macro levels. On the micro level, social roles, norms, and feeling rules structure everyday social interactions. On the macro level, these same emotional processes structure social institutions, discourses, and ideologies. We try to regulate our emotions to fit in with the norms of the situation, based on many, and sometimes conflicting, demands upon us. Systematic observations of group interaction have found that a substantial portion of group activity is devoted to the socio-emotional issues of expressing affect and dealing with tension. At the same time, field studies of social attraction in groups revealed that individuals’ feelings about each other collate into social networks, a discovery that is still being explored in the field of social network analysis.
Ethnomethodology revealed emotional commitments to everyday norms through purposeful breaching of those norms. In one study, a sociologist sent his students home and instructed them to act as guests rather than family members. Students reported others’ astonishment, bewilderment, shock, anxiety, embarrassment, and anger, and family members accused the students of being mean, inconsiderate, selfish, nasty, or impolite.
Important theories and theoreticians relating to the sociology of emotion include:
- T. David Kemper: He proposed that people in social interaction have positions on two relational dimensions: status and power. Emotions emerge as interpersonal events change or maintain individuals’ status and power.
- Arlie Hochschild: She proposed that individuals manage their feelings to produce acceptable displays according to ideological and cultural standards.
- Peggy Thoits: She divided emotion management techniques into implementation of new events and reinterpretation of past events. Thoits noted that emotions also can be managed with drugs, by performing faux gestures and facial expressions, or by cognitive reclassifications of one’s feelings.
- Thomas J. Scheff: He established that many cases of social conflict are based on a destructive, often escalating, but stoppable and reversible shame-rage cycle: when someone is shamed, or feels shamed, by another, their social bond comes under stress.
- Randall Collins: He stated that emotional energy is the main motivating force in social life, for love and hatred, investing, working or consuming, and worshipping or waging war.
- David R. Heise: He developed affect control theory, which proposes that social actions are designed by their agents to create impressions that befit the sentiments reigning in a situation (a brief sketch of the theory’s central measure follows this list).
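As a rough illustration of how affect control theory is made quantitative (the notation here is a common rendering of the theory and is not drawn from this text), each element of an event, namely the actor, the behavior, and the object, carries both fundamental sentiments and event-produced transient impressions measured on evaluation, potency, and activity dimensions. If f_i denotes the fundamental sentiment on dimension i and t_i the transient impression after the event, the event’s deflection is D = Σ_i (f_i − t_i)², and actors are assumed to favor actions that keep this deflection small.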
Social Significance of Emotion
The sociology of emotion suggests that individual emotional reactions, such as this girl’s happiness and excitement, impact social interactions and institutions.
4.4.2: Informal Social Control
Social control refers to societal processes that regulate individual and group behavior in an attempt to gain conformity.
Learning Objective
Give examples of the difference between informal and formal means of social control
Key Points
- Informal control typically involves an individual internalizing certain norms and values. This process is known as socialization.
- Formal means of social control typically involve the state. External sanctions are enforced by the government to prevent chaos, violence, or anomie in society. Some theorists, such as Émile Durkheim, refer to this form of control as regulation.
- The social values present in individuals are products of informal social control, exercised implicitly by a society through particular customs, norms, and mores. Individuals internalize the values of their society, whether conscious or not of this indoctrination.
- Contemporary Western society uses shame as one modality of control, but its primary dependence rests on guilt, and, when that does not work, the criminal justice system.
Key Terms
- sanction
-
a penalty, or some coercive measure, intended to ensure compliance; especially one adopted by several nations, or by an international body
- conformity
-
the ideology of adhering to one standard or social uniformity
- compliance
-
the tendency of conforming with or agreeing to the wishes of others
Example
- An example of affect control theory in practice is how people behave at funerals. Even if someone didn’t know the person who passed away particularly well, the social situation encourages one to comport himself as if he were grieving.
Social Control
Social control refers to societal and political mechanisms that regulate individual and group behavior in an attempt to gain conformity and compliance to the rules of a given society, state, or social group. Sociologists identify two basic forms of social control: informal control and formal control.
Formal Control
Formal social control typically involves the state. External sanctions are enforced by the government to prevent chaos, violence, or anomie in society. An example of this would be a law preventing individuals from committing theft. Some theorists, like Émile Durkheim, refer to this type of control as regulation.
Informal Control
Informal control typically involves an individual internalizing certain norms and values. This process is called socialization. The social values present in individuals are products of informal social control, exercised implicitly by a society through particular customs, norms, and mores. Individuals internalize the values of their society, whether conscious or not of this indoctrination.
Informal sanctions may include shame, ridicule, sarcasm, criticism, and disapproval, which can cause an individual to conform to the social norms of the society. In extreme cases, sanctions may include social discrimination, exclusion, and violence. Informal social control has the potential to have a greater impact on an individual than formal control. When social values become internalized, they become an aspect of an individual’s personality.
Informal sanctions check “deviant” behavior. An example of a negative sanction is depicted in a scene from “The Wall,” a film based on the Pink Floyd album. In this scene, the young protagonist is ridiculed and verbally abused by a high school teacher for writing poetry in a mathematics class. Another example occurs in the movie “About a Boy.” In this film, a young boy hesitates to jump from a high springboard and is ridiculed for his fear. Though he eventually jumps, his behavior is controlled by shame, not by his internal desire to jump.
Informal means of control
At funerals, people tend to comport themselves to look as if they are grieving, even if they did not know the person who passed away. This is an example of a social situation controlling an individual’s emotions.
4.5: Agents of Socialization
4.5.1: Family
A family serves to reproduce society biologically, through procreation, and socially, through the socialization of children.
Learning Objective
Analyze the pivotal role a family plays in the socialization of children and the continuation of society through procreation
Key Points
- Although a family can fulfill a variety of other functions, not all of these are universal or obligatory.
- The incest taboo, which prohibits sexual relations between family members, is a form of exogamy and may help promote social solidarity.
- The family of orientation refers to the role of the family in providing children with a position in society and socializing them.
- From the parents’ perspective, the family of procreation refers to the family’s role in producing and socializing children.
- Exogamy is a social arrangement according to which marriages can only occur with members outside of one’s social group.
Key Terms
- exogamy
-
Marriage to a person belonging to a tribe or group other than your own as required by custom or law.
- bridewealth
-
Bridewealth is the amount of money, wealth, or property paid by the family of the groom to the bride’s parents upon the marriage of the couple. The amount paid generally indicates the perceived value of the bride.
- family of procreation
-
the idea that the goal of a family is to produce, enculturate, and socialize children
- family of orientation
-
This refers to the family in which an individual grows up.
The primary function of the family is to reproduce society, both biologically through procreation and socially through socialization. Given these functions, the individual’s experience of his or her family shifts over time. From the perspective of children, the family is a family of orientation: the family functions to locate children socially, and plays a major role in their socialization. From the point of view of the parent(s), the family is a family of procreation: the family functions to produce and socialize children. In some cultures, marriage imposes upon women the obligation to bear children. In northern Ghana, for example, payment of bridewealth, which is an amount of money, wealth, or property paid to the bride’s parents by the groom’s family, signifies a woman’s requirement to bear children, and women using birth control face substantial threats of physical abuse and reprisals.
Producing offspring is not the only function of the family. Marriage sometimes establishes the legal father of a woman’s child; establishes the legal mother of a man’s child; gives the husband or his family control over the wife’s sexual services, labor, and/or property; gives the wife or her family control over the husband’s sexual services, labor, and/or property; establishes a joint fund of property for the benefit of children; establishes a relationship between the families of the husband and wife. None of these functions are universal, nor are all of them inherent to any one society. In societies with a sexual division of labor, marriage, and the resulting relationship between a husband and wife, is necessary for the formation of an economically productive household. In modern societies, marriage entails particular rights and privileges which encourage the formation of new families even when there is no intention of having children.
In most societies, marriage between brothers and sisters is forbidden. In many societies, marriage between some first cousins is preferred, while at the other extreme, the medieval Catholic Church prohibited marriage even between distant cousins. The present-day Catholic Church still maintains a standard of required distance for marriage.
These sorts of restrictions can be classified as an incest taboo, a cultural norm or rule that forbids sexual relations between family members and relatives. The incest taboo may serve to promote social solidarity and is a form of exogamy. Exogamy can be broadly defined as a social arrangement according to which marriages can only occur with members outside of one’s social group. One exception to this pattern is ancient Egypt, where marriage between brothers and sisters was permitted in the royal family, as was also the case in Hawaii and among the Inca. This privilege was denied to commoners and may have served to concentrate wealth and power in one family.
Family
Families have strong ties and, therefore, are powerful agents of socialization.
4.5.2: Neighborhood
A neighborhood is a geographically localized community within a larger city, town, or suburb.
Learning Objective
Justify the importance of neighborhoods and communities as units of socialization, especially when specialized, such as by ethnicity or religion
Key Points
- Ethnic neighborhoods were important in many historical cities, and they remain common in modern cities.
- Rural-to-urban migration contributed to neighborhood distinctiveness and social cohesion in historical cities.
- A community is a group of interacting people, living in some proximity. Community usually refers to a social unit—larger than a household—that shares common values and has social cohesion.
- Social capital refers to a sense of connectedness due to the formation of social networks in a given community.
Key Terms
- community
-
A group sharing a common understanding and often the same language, manners, tradition, and law.
- ethnic enclave
-
An ethnic enclave is an ethnic community that retains some cultural distinction from a larger, surrounding area; it may be a neighborhood, an area, or an administrative division based on ethnic groups.
- social capital
-
The good will, sympathy, and connections created by social interaction within and between social networks.
Example
- Chinatown is an example of an ethnic neighborhood.
A neighborhood is a geographically localized community within a larger city, town, or suburb. Neighborhoods are often social communities with considerable face-to-face interaction among members. Neighborhoods are typically generated by social interaction among people living near one another. In this sense, they are local social units larger than households, but not directly under the control of city or state officials. In some preindustrial urban traditions, basic municipal functions such as protection, social regulation of births and marriages, cleaning, and upkeep are handled informally by neighborhoods and not by urban governments; this pattern is well documented for historical Islamic cities. In addition to social neighborhoods, most ancient and historical cities also had administrative districts used by officials for taxation, record-keeping, and social control.
Specialization and Differentiation
Neighborhoods in preindustrial cities often had some degree of social specialization or differentiation. Ethnic enclaves were important in many past cities and remain common in cities today. Economic specialists, including craft producers, merchants, and others, could be concentrated in particular neighborhoods, while other neighborhoods were united by religious persuasion. One factor contributing to neighborhood distinctiveness and social cohesion was rural-to-urban migration, a continual process in preindustrial cities in which migrants tended to move in with relatives and acquaintances from their rural past.
On another level, a community is a group of interacting people, living in some proximity. Community usually refers to a social unit—larger than a household—that shares common values and has social cohesion. The sense of community and formation of social networks comprise what has become known as social capital.
Chelsea
This image is of the Chelsea neighborhood of Manhattan in New York City.
4.5.3: School
Education is the process by which society transmits its accumulated knowledge, skills, customs and values from one generation to another.
Learning Objective
Explain the role of both formal and informal education in the socialization process, such as learning norms and expectations, as well as gaining social equality
Key Points
- The sociology of education is the study of how public institutions and individual experiences affect education and its outcomes.
- A systematic sociology of education began with Émile Durkheim’s work on moral education as a basis for organic solidarity.
- Socialization is the process by which the new generation learns the knowledge, attitudes and values that they will need as productive citizens.
- The hidden curriculum is a subtler, but nonetheless powerful, indoctrination of the norms and values of the wider society.
Key Terms
- hidden curriculum
-
A curriculum that goes beyond the explicit demands of the formal curriculum. The goals and requirements of the hidden curriculum are unstated, but inflexible. They concern not what students learn but how and when they learn.
- the sociology of education
-
The sociology of education is the study of how public institutions and individual experiences affect education and its outcomes.
- socialization
-
The process of learning one’s culture and how to live within it.
Example
- When teaching kindergarten, a teacher may assign students to practice addition with one another. This lesson educates children both in basic mathematics and in the social values of teamwork and reciprocity. In this example, teamwork and reciprocity are examples of the “hidden curriculum.”
Education is the means through which the aims and habits of a group of people are transmitted from one generation to the next. Generally, it occurs through any experience that has a formative effect on the way one thinks, feels, or acts. In its narrow, technical sense, education is the formal process by which society deliberately transmits its accumulated knowledge, skills, customs, and values from one generation to another. The sociology of education is the study of how public institutions and individual experiences affect education and its outcomes. It is most concerned with the public schooling systems of modern industrial societies, including the expansion of higher, adult, and continuing education.
Education has often been seen as a fundamentally optimistic human endeavor characterized by aspirations for progress and betterment. It is understood by many to be a means of overcoming limitations, achieving greater equality and acquiring wealth and social status. Education is perceived as an endeavor that enables children to develop according to their unique needs and potential. It is also perceived as one of the best means of achieving greater social equality. Some take a particularly negative view, arguing that the education system is intentionally designed to perpetuate the social reproduction of inequality.
A systematic sociology of education began with Émile Durkheim’s work on moral education as a basis for organic solidarity. It was after World War II, however, that the subject received renewed interest around the world: from technological functionalism in the US, egalitarian reform of opportunity in Europe, and human-capital theory in economics. These all implied that, with industrialization, the need for a technologically-skilled labor force undermines class distinctions and other ascriptive systems of stratification, and that education promotes social mobility.
Structural functionalists believe that society leans towards social equilibrium and social order. Socialization is the process by which the new generation learns the knowledge, attitudes and values that they will need as productive citizens. Although this aim is stated in the formal curriculum, it is mainly achieved through “the hidden curriculum”, a subtler, but nonetheless powerful, indoctrination of the norms and values of the wider society. Students learn these values because their behavior at school is regulated until they gradually internalize and accept them. For example, most high school graduates are socialized to either enter college or the workforce after graduation. This is an expectation set forth at the beginning of a student’s education.
Education also performs another crucial function. As various jobs become vacant, they must be filled with the appropriate people. Therefore, the other purpose of education is to sort and rank individuals for placement in the labor market. Those with high achievement will be trained for the most skilled and intellectually demanding jobs and, in reward, be given the highest incomes. Those who achieve the least, on the other hand, will be given the least demanding jobs and, hence, the lowest incomes.
School
School serves as a primary site of education, including the inculcation of “hidden curricula” of social values and norms.
4.5.4: Day Care
Day care, in which children are cared for by a person other than their legal guardians, contributes to their socialization.
Learning Objective
Discuss how the use of day care (ranging from relative care to preschools) impacts the socialization of children in both a positive and negative way
Key Points
- Studies have shown that while bad day care can result in physical and emotional problems, good day care is not harmful to non-infants and may even lead to better outcomes.
- The day care industry is a continuum from personal parental care to large, regulated institutions.
- Early childhood education is the formal education and care of young children by people other than their family in settings outside of their homes and before the age of normal schooling.
Key Term
- early childhood education
-
The formal teaching and care of young children by people other than their family in settings outside of the home and before the age of normal schooling.
Example
- Examples of day care range from the next door neighbor watching one’s children to hiring a babysitter to large day care facilities that resemble preschools.
Day care is the care of a child during the day by a person other than the child’s legal guardians, typically performed by someone outside the child’s immediate family. Day care is typically a service provided during specific periods, such as when parents are at work. Child care is provided in nurseries or crèches, or by a nanny or family child care provider caring for children in their own homes. It can also take on a more formal structure, with education, child development, discipline, and even preschool education falling within its range of services.
Day Care
A mother who works in construction drops her child off at daycare prior to work.
The day care industry is a continuum from personal parental care to large, regulated institutions. The vast majority of child care is still performed by the parents, by an in-house nanny, or through informal arrangements with relatives, neighbors, or friends. One factor favoring large corporate day cares is the existence of childcare facilities in the workplace. Large corporations typically do not handle this employee benefit directly themselves and instead seek out large corporate providers to manage their corporate day cares. Most smaller, for-profit day cares operate out of a single location.
Independent studies suggest that good day care for non-infants is not harmful, though some argue that day care is inherently inferior to parental care. In some cases, good day care can provide different experiences than parental care does, especially when children reach age two and are ready to interact with other children. Bad day care puts the child at physical, emotional, and attachment risk. Higher-quality care is associated with better outcomes: children in higher-quality child care had somewhat better language and cognitive development during the first 4½ years of life than those in lower-quality care. They were also somewhat more cooperative than those who experienced lower-quality care during the first three years of life.
As a matter of social policy, consistent, good daycare may ensure adequate early childhood education for children of less skilled parents. From a parental perspective, good daycare can complement good parenting. Early childhood education is the formal teaching and care of young children by people other than their family in settings outside of the home. “Early childhood” is usually defined as before the age of normal schooling – five years in most nations, though the U.S. National Association for the Education of Young Children (NAEYC) instead defines “early childhood” as before the age of eight.
4.5.5: Peer Groups
A peer group, whose members have interests, social positions, and age in common, influences the socialization of group members.
Learning Objective
Analyze the importance of the peer group in terms of childhood and adolescent socialization
Key Points
- The peer group is where children can escape supervision and learn to form relationships on their own.
- The influence of the peer group typically peaks during adolescence.
- However, peer groups generally affect only short-term interests, unlike the family, which has long-term influence.
- Peer groups can also serve as a venue for teaching members gender roles.
- Adolescent peer groups provide support for children and teens as they assimilate into adult society, decreasing dependence on parents, increasing feelings of self-sufficiency, and connecting them with a much larger social network.
- The term “peer pressure” is often used to describe instances where an individual feels indirectly pressured into changing their behavior to match that of their peers.
Key Terms
- Peer group
-
A peer group is a social group whose members have interests, social positions, and age in common.
- gender roles
-
Sets of social and behavioral norms that are generally considered appropriate for either a man or a woman in a social or interpersonal relationship.
- peer pressure
-
Peer pressure is the influence exerted by a peer group, encouraging individuals to change their attitudes, values, or behaviors in order to conform to group norms.
Example
- Teenagers encouraging their friends to smoke, drink, or engage in other risky behavior is an example of peer pressure. Peer pressure can also work in positive ways by encouraging teenagers to practice, study, or engage in other positive behaviors.
A peer group is a social group whose members have interests, social positions, and age in common. The peer group is where children can escape supervision and learn to form relationships on their own. The influence of the peer group typically peaks during adolescence. However, peer groups generally affect only short-term interests, unlike the family, which has long-term influence.
Unlike the family and the school, the peer group lets children escape the direct supervision of adults. Among peers, children learn to form relationships on their own. Peer groups also offer the chance to discuss interests that adults may not share with their children (such as clothing and popular music) or permit (such as drugs and sex).
Peer groups have a significant influence on psychological and social adjustment for group members. They provide perspective beyond the individual’s own viewpoint. Members of peer groups also learn to develop relationships with others in the social system. Peers, particularly fellow group members, become important social referents for teaching members customs, social norms, and different ideologies.
Peer groups can also serve as a venue for teaching members gender roles. Through gender-role socialization, group members learn about sex differences and about social and cultural expectations. While boys and girls differ in many ways, there is not a one-to-one link between sex and gender roles, with males always being masculine and females always being feminine. Both genders can contain different levels of masculinity and femininity.
Adolescent peer groups provide support for children and teens as they assimilate into adult society, decreasing dependence on parents, increasing feelings of self-sufficiency, and connecting them with a much larger social network. Peer group cohesion is determined and maintained by such factors as group communication, group consensus, and group conformity concerning attitude and behavior. As members of peer groups interconnect and agree, a normative code arises. This normative code can become very rigid, dictating group behavior and dress. Peer group individuality is reinforced by normative codes and by intergroup conflict. Member deviation from the strict normative code can lead to rejection from the group. The term “peer pressure” is often used to describe instances where an individual feels indirectly pressured into changing their behavior to match that of their peers. Taking up smoking and underage drinking are two of the best-known examples. In spite of the often negative connotations of the term, peer pressure can be used positively.
4.5.6: Mass Media and Technology
Since mass media has enormous effects on our attitudes and behavior, it contributes to the socialization process.
Learning Objective
Analyze the connection between media, technology and society
Key Points
- Mass media is the means for delivering impersonal communications directed to a vast audience.
- The term media comes from the Latin word for “middle,” suggesting that the media’s function is to connect people.
- Media bias refers to the bias of journalists and news producers within mass media. Bias exists in the selection of events and stories that are reported and how they are covered.
- A technique used to avoid bias is the “round table,” an adversarial format in which representatives of opposing views comment on an issue.
Key Terms
- mass media
-
Collectively, the communications media, especially television, radio, and newspapers, that reach the mass of the people.
- media bias
-
A political bias in journalistic reporting, in programming selection, or otherwise in mass communications media.
- round table
-
A conference at which participants of similar status discuss and exchange views
Example
- Television programs, movies, magazines, and advertisements are all examples of different forms of mass media.
Mass media is the means for delivering impersonal communications directed to a vast audience. The term media comes from the Latin word for “middle,” suggesting that the media’s function is to connect people. Since mass media has enormous effects on our attitudes and behavior, notably in regard to aggression, it contributes to the socialization process.
Media Bias
Media bias refers to the bias of journalists and news producers within the mass media. Bias exists in the selection of events and stories that are reported and in how they are covered. The term “media bias” implies a pervasive or widespread bias contravening the standards of journalism, rather than the perspective of an individual journalist or article. The direction and degree of media bias in various countries is widely disputed.
Media Bias
A panel in the Newseum in Washington, DC shows the September 12 headlines in America and around the world. Note the different treatment of 9/11 by different sources.
A technique employed to avoid bias is the “round table,” an adversarial format in which representatives from opposing views comment on an issue. This approach theoretically allows diverse views to appear in the media. However, the person organizing the report still has the responsibility to choose people who really represent the breadth of opinion, to ask them non-prejudicial questions, and to edit their comments fairly. When done carelessly, a point/counterpoint can be as unfair as a simple biased report, by suggesting that the “losing” side lost on its merits.
The apparent bias of media is not always specifically political in nature. The news media tend to appeal to a specific audience. This means stories that affect a large number of people on a global scale often receive less coverage in some markets than local stories, such as a public school shooting, a celebrity wedding, a plane crash, or similarly glamorous or shocking stories. Millions of deaths in an ethnic conflict in Africa might be afforded scant mention in American media, while the shooting of five people in a high school is analyzed in-depth. The reason for these types of bias is a function of what the public wants to watch and/or what producers and publishers believe the public wants to watch.
Video Game Violence
Debates have gone on for years about the problem and effect of violent video games. Many people believe that violent video games, when played regularly, lead to real-life violence. In fact, video game violence can lead to an increase in a person’s aggressive thoughts and behaviors. There have been incidents of children acting out the violence they see in a game, often with dire consequences. The key is involvement in other activities: when teenagers who played violent video games also participated in sports or clubs, there was less indication that they would become violent in any potential situation.
4.5.7: Workplace
The workplace carries out its socialization through onboarding, the process by which new employees acquire the skills and knowledge they need to adjust to their new roles.
Learning Objective
Analyze the process of onboarding as it relates to workplace socialization
Key Points
- Tactics used in the onboarding process include formal meetings, lectures, videos, printed materials and computer-based orientations.
- Employees with certain personality traits and experiences adjust to an organization more quickly. These include employees with a proactive personality, certain “Big Five” personality traits, curiosity, and greater experience levels.
- Information seeking occurs when new employees ask questions of their co-workers to learn about the company’s norms, expectations, procedures and policies.
- Also called networking, relationship building involves an employee’s efforts to develop camaraderie with co-workers and even supervisors.
- Employee experience levels also affect the onboarding process such that more experienced members of the workforce tend to adapt to a new organization differently from, for example, a new college graduate starting his or her first job.
Key Terms
- networking
-
the act of meeting new people in a business or social context.
- curiosity
-
Inquisitiveness; the tendency to learn about things by asking questions, investigating or exploring.
- onboarding
-
The process of bringing a new employee on board, incorporating training and orientation.
The workplace performs its socialization function through onboarding. This is the mechanism through which new employees acquire the necessary knowledge, skills, and behaviors to become effective organizational members. Tactics used in this process include formal meetings, lectures, videos, printed materials, and computer-based orientations. Research has demonstrated that these socialization techniques lead to positive outcomes for new employees, including higher job satisfaction, better job performance, greater organizational commitment, and reduced stress. These outcomes are particularly important to an organization looking to retain a competitive advantage in an increasingly mobile and globalized workforce.
Employees with certain personality traits and experiences adjust to an organization more quickly. These traits are a proactive personality, the “Big Five” traits, curiosity and greater experience levels. “Proactive personality” refers to the tendency to take charge of situations and achieve control over one’s environment. This type of personality predisposes some workers to engage in behaviors like information seeking that accelerate the socialization process. The Big Five personality traits—openness, conscientiousness, extraversion, agreeableness, and neuroticism—have been linked to onboarding success. Specifically, new employees who are extraverted or particularly open to experience are more likely to seek out information, feedback, acceptance and relationships with co-workers.
Curiosity also plays a substantial role in the newcomer adaptation process. It is defined as the “desire to acquire knowledge” that energizes individual exploration of an organization’s culture and norms. Individuals with a curious disposition eagerly seek out information to help them make sense of their new organizational surroundings, which leads to a smoother onboarding experience. Employee experience levels also affect the onboarding process. For example, more experienced members of the workforce tend to adapt to a new organization differently from a college graduate starting his or her first job. This is because seasoned employees can draw from past experiences to help them adjust to their new work settings. They may be less affected by specific socialization efforts because they have (a) a better understanding of their own needs and requirements at work and (b) greater familiarity with what is acceptable in the work context.
Employees who build relationships and seek information can help facilitate the onboarding process. Newcomers can also speed up their adjustment by demonstrating behaviors that assist them in clarifying expectations, learning organizational values and norms, and gaining social acceptance. Information seeking occurs when new employees ask questions in an effort to learn about the company’s norms, expectations, procedures, and policies. Also called networking, relationship building involves an employee’s efforts to develop camaraderie with co-workers and supervisors. This can be achieved informally through talking to new peers during a coffee break, or through more formal means such as pre-arranged company events. Research has shown relationship building to be a key part of the onboarding process, leading to outcomes like greater job satisfaction, better job performance, and decreased stress.
Organization Socialization Model
A model of onboarding (adapted from Bauer & Erdogan, 2011).
4.5.8: Religion
Religion is a collection of cultural systems, belief systems, and worldviews that relate humanity to spirituality and moral values.
Learning Objective
Explain how people come to be socialized in terms of religion and how parental influence is a key factor in religiosity
Key Points
- Sociology of religion is the study of the beliefs, practices, and organizational forms of religion using the tools and methods of the discipline of sociology.
- Agents of socialization differ in their effects across religious traditions. Some scholars view religion as akin to an ethnic or cultural category, making individuals less likely to break from their religious affiliations and more likely to be socialized within that setting.
- Belief in God is attributable to a combination of factors, but it is also informed by socialization. The biggest predictor of adult religiosity is parental religiosity; if a person’s parents were religious when he was a child, he is likely to be religious when he grows up.
- In their research, Altemeyer and Hunsberger found some interesting cases where secular people converted to religion, and religious people became secular.
Key Terms
- religion
-
an organized collection of belief systems, cultural systems, and world views that relate humanity to spirituality and, sometimes, to moral values
- sociology of religion
-
Sociology of religion is the study of the beliefs, practices, and organizational forms of religion using the tools and methods of the discipline of sociology.
- agents of socialization
-
Agents of socialization, or institutions that can impress social norms upon an individual, include the family, religion, peer groups, economic systems, legal systems, penal systems, language, and the media.
- parental religiosity
-
The biggest predictor of adult religiosity is parental religiosity; if a person’s parents were religious when he was a child, he is likely to be religious when he grows up.
Example
- In Catholic confession, a churchgoer confesses his sins to a member of the clergy. This ritual, in which the clergy responds to the churchgoer’s conduct, socializes the churchgoer to the faith’s perspective on his actions. This is an example of socialization through religion.
Religion is a collection of cultural systems, belief systems, and worldviews that relate humanity to spirituality and, sometimes, to moral values. Many religions have narratives, symbols, traditions, and sacred histories that are intended to give meaning to life or to explain the origin of life or the universe. They tend to derive morality, ethics, religious laws, or a preferred lifestyle from their ideas about the cosmos and human nature.
Sociology of religion is the study of the beliefs, practices, and organizational forms of religion, using the tools and methods of the discipline of sociology. This objective investigation may include the use of both quantitative methods (surveys, polls, demographic, and census analysis) and qualitative approaches, such as participant observation, interviewing, and analysis of archival, historical, and documentary materials.
Agents of socialization differ in their effects across religious traditions. Some scholars view religion as akin to an ethnic or cultural category, making individuals less likely to break from their religious affiliations and more likely to be socialized within that setting. Parental religious participation is the most influential part of religious socialization, more so than religious peers or religious beliefs. For example, children raised in religious homes are more likely to have some degree of religiosity in their lives. They are also likely to raise their own children with religion and to participate in religious ceremonies, such as baptisms and weddings.
Belief in God is attributable to a combination of factors, but it is also informed by socialization. The biggest predictor of adult religiosity is parental religiosity; if a person’s parents were religious when he was a child, he is likely to be religious when he grows up. Children are socialized into religion by their parents and their peers and, as a result, they tend to stay in religions. Alternatively, children raised in secular homes tend not to convert to religion. Altemeyer and Hunsberger took this pattern as their starting point, but they also found some interesting cases where just the opposite happened: secular people converted to religion, and religious people became secular. Despite these rare exceptions, the process of socialization is certainly a significant factor in the continued existence of religion.
Socialization through Religious Ceremonies
Religious ceremonies, such as Catholic mass, socialize members of the faith to the practices and beliefs of the religion.
4.5.9: The Division of Labor
Division of labor is the specialization of cooperative labor in specific, circumscribed tasks and similar roles.
Learning Objective
Interpret Durkheim’s division of labor theory in terms of mechanical and organic solidarity, as well as progression from primitive to advanced societies
Key Points
- An increasingly complex division of labor is historically closely associated with the growth of total output and trade, the rise of capitalism, and of the complexity of industrialization processes.
- Durkheim classified societies as primitive or advanced based on their division of labor.
- According to Durkheim, in primitive societies where there is little or no division of labor, people act and think alike with a collective conscience. In advanced societies with a high division of labor, order is maintained through organic solidarity: interdependence among people performing specialized roles.
- Labor hierarchy is a very common feature of the modern workplace structure.
- It is often agreed that the most equitable principle in allocating people within hierarchies is that of true competency or ability. This important Western concept of meritocracy could be interpreted as an explanation or as a justification of why a division of labor is the way it is.
Key Terms
- labor hierarchy
-
Labor hierarchy is a very common feature of the modern workplace structure, though the way these hierarchies are structured can be influenced by a variety of factors.
- meritocracy
-
Rule by merit and talent. By extension, now often used to describe a type of society where wealth, income, and social status are assigned through competition.
- industrialization
-
A process of social and economic change whereby a human society is transformed from a pre-industrial to an industrial state
Example
- An assembly line is an example of the division of labor.
Division of labor is the specialization of cooperative labor in specific, circumscribed tasks and roles. Historically, an increasingly complex division of labor is closely associated with the growth of total output and trade, the rise of capitalism, and of the complexity of industrialization processes. Division of labor was also a method used by the Sumerians to categorize different jobs and divide them between skilled members of a society.
Émile Durkheim was a driving force in developing the theory of the division of labor. In his dissertation, Durkheim described how societies maintain social order based on two very different forms of solidarity (mechanical and organic), and he analyzed the transition from more “primitive” societies to advanced industrial societies.
Durkheim suggested that in a “primitive” society, mechanical solidarity, with people acting and thinking alike and sharing a collective or common conscience, allows social order to be maintained. In such a society, Durkheim viewed crime as an act that “offends strong and defined states of the collective conscience”. Because social ties were relatively homogeneous and weak throughout society, the law had to be repressive and penal, to respond to offenses of the common conscience.
In an advanced, industrial, capitalist society, the complex division of labor means that people are allocated in society according to merit and rewarded accordingly; social inequality reflects natural inequality. Durkheim argued that in this type of society moral regulation was needed to maintain order (or organic solidarity). He thought that the transition of a society from “primitive” to advanced may bring about major disorder, crisis, and anomie. However, once a society reaches the “advanced” stage, it becomes much stronger and the transition is complete.
In the modern world, those specialists most preoccupied with theorizing about the division of labor are those involved in management and organization. In view of the global extremes of the division of labor, the question is often raised about what manner of division of labor would be ideal, most efficient, and most just. It is widely accepted that the division of labor is to a great extent inevitable, simply because no one can perform all tasks at once. Labor hierarchy is a very common feature of the modern workplace structure, but the structure of these hierarchies can be influenced by a variety of factors.
Division of Labor
An assembly line is a good example of a system that incorporates the division of labor; each worker is completing a discrete task to increase efficiency of overall production.
4.5.10: The Incest Taboo, Marriage, and the Family
An incest taboo is any cultural rule or norm that prohibits sexual relations between relatives.
Learning Objective
Analyze the different constructs of the incest taboo, ranging from biological (the Westermarck effect) to cultural (endogamy and exogamy)
Key Points
- Incest taboo is a cultural norm or rule that forbids sexual relations between relatives.
- Inbreeding is reproduction resulting from the mating of two genetically-related individuals.
- The Westermarck effect is essentially a psychological phenomenon that serves to discourage inbreeding. Through this effect, people who have grown up together are less likely to feel sexually attracted to one another later in life.
- Exogamy is a social arrangement in which marriage is permitted only with members from outside the social group.
- Endogamy is a social arrangement in which marriage can occur only within the same social group.
Key Terms
- inbreeding
-
Breeding between members of a relatively small population, especially one in which most members are related.
- exogamy
-
Marriage to a person belonging to a tribe or group other than your own as required by custom or law.
- endogamy
-
The practice of marrying or being required to marry within one’s own ethnic, religious, or social group.
Example
- The vast majority of people tend to feel disgust when considering incest. This emotional reaction is an example of how powerfully the incest taboo socializes individuals against interfamilial relations.
An incest taboo is any cultural rule or norm that prohibits sexual relations between relatives. All human cultures have norms regarding who is considered suitable and unsuitable as sexual or marriage partners. Usually certain close relatives are excluded from being possible partners. Little agreement exists among cultures about which types of blood relations are permissible partners and which are not. In many cultures, certain types of cousin relations are preferred as sexual and marital partners, whereas others are taboo.
One potential explanation for the incest taboo sees it as a cultural implementation of a biologically evolved preference for sexual partners without shared genes, as inbreeding may have detrimental outcomes. The most widely held hypothesis proposes that the so-called Westermarck effect discourages adults from engaging in sexual relations with individuals with whom they grew up. The Westermarck effect, first proposed by Edvard Westermarck in 1891, is the theory that children reared together, regardless of biological relationship, form a sentimental attachment that is by its nature non-erotic. The existence of the Westermarck effect has achieved some empirical support.
Inbreeding
An intensive form of inbreeding in which an individual S is mated to his daughter D1, granddaughter D2, and so on, in order to maximize the percentage of S’s genes in the offspring. D3 would have 87.5% of his genes, while D4 would have 93.75%.
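The percentages in this caption follow from a simple recurrence: each daughter inherits half of her genome directly from S and half from her mother, who already carries some fraction of S’s genes. The short sketch below is a minimal illustration of that arithmetic (Python written for this text; the function name is ours, not from any genetics library).

```python
# Fraction of S's genes carried by successive daughters when each is mated back to S.
# Each daughter gets 1/2 of her genome from S and 1/2 from her mother,
# who already carries `fraction` of S's genes.
def s_gene_fraction(generations: int) -> list[float]:
    fractions = []
    fraction = 0.5  # D1: half of her genes come directly from S
    for _ in range(generations):
        fractions.append(fraction)
        fraction = 0.5 + 0.5 * fraction  # next daughter: 1/2 from S plus half of mother's share
    return fractions

print(s_gene_fraction(4))  # [0.5, 0.75, 0.875, 0.9375] -> D3 = 87.5%, D4 = 93.75%
```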
Another school argues that the incest prohibition is a cultural construct that arises as a side effect of a general human preference for group exogamy. Intermarriage between groups creates valuable alliances that improve the ability of both groups to thrive. According to this view, the incest taboo is not necessarily universal, but it is likely to arise and become stricter under cultural circumstances that favor exogamy over endogamy, and to become more lax under circumstances that favor endogamy. This hypothesis has also achieved some empirical support.
Societies that are stratified often prescribe different degrees of endogamy. Endogamy is the opposite of exogamy; it refers to the practice of marriage between members of the same social group. A classic example is seen in India’s caste system, in which unequal castes are endogamous. Inequality between ethnic groups and races also correlates with endogamy. Class, caste, ethnic and racial endogamy typically coexists with family exogamy and prohibitions against incest.
4.5.11: Ideology
Ideology is a coherent system of ideas that constitutes one’s goals, expectations, and actions.
Learning Objective
Explain the purpose of an ideology and how it is used in various contexts (i.e. religion or politics) to create change or conformity in society
Key Points
- Ideology can be used either to initiate change in society or to encourage continued adherence to a set of ideals in a situation where conformity already exists.
- According to Karl Marx, ideology is an instrument for social reproduction, as those who control the means of production (the ruling class) are able to establish the dominant ideology within a society.
- Louis Althusser proposed a materialistic conception of ideology using the concept of Ideological State Apparatus.
- Ideological State Apparatuses are institutions, such as the family, media, religious organizations, education system, etc., that together comprise ideological practice, the sphere which has the defining property of constituting individuals as subjects.
- Many political parties base their political action and program on an ideology. Political ideology consists of two dimensions: goals and methods.
Key Terms
- superstructure
-
The ideas, philosophies, and culture that are built upon the means of production.
- ideology
-
the doctrine, philosophy, body of beliefs or principles belonging to an individual or group
An ideology is a set of ideas that constitutes one’s goals, expectations, and actions. An ideology can be thought of as a comprehensive vision, a way of looking at things (as in several philosophical tendencies), or a set of ideas proposed by the dominant class of a society to all of its members. The main purpose behind an ideology is to offer either change in society or adherence to a set of ideals where conformity already exists, through a normative thought process. Ideologies are systems of abstract thought applied to public matters, which makes the concept central to politics.
In the Marxist account, ideology serves as an instrument of social reproduction. In the Marxist base-and-superstructure model of society, the base denotes the relations of production, and the superstructure denotes the dominant ideology (religious, legal, and political systems). The economic base of production determines the political superstructure of a society. Ruling-class interests determine the superstructure and the nature of the justifying ideology, which is possible because the ruling class controls the means of production. Similarly, Louis Althusser proposed a materialistic conception of ideology using the concept of the ideological state apparatus. For Althusser, beliefs and ideas are the products of social practices, not the reverse. What is ultimately important for Althusser are not the subjective beliefs held in the “minds” of human individuals, but rather the material institutions, rituals, and discourses that produce these beliefs.
Many political parties base their political action and program on an ideology. A political ideology is a certain ethical set of ideals, principles, doctrines, myths, or symbols of a social movement, institution, class, or large group that explains how society should work and offers some political and cultural blueprint for a certain social order. A political ideology largely concerns itself with how to allocate power and to what ends it should be used. Some parties follow a certain ideology very closely, while others may take broad inspiration from a group of related ideologies without specifically embracing any one of them.
4.5.12: Resocialization and Total Institutions
A total institution is a place where a group of people is cut off from the wider community and their needs are under bureaucratic control.
Learning Objective
Review Goffman’s five types of social institutions and their functions, including their processes of resocialization
Key Points
- The term total institution was coined by the American sociologist Erving Goffman.
- Resocialization is defined as radically changing an inmate’s personality by carefully controlling his or her environment.
- Resocialization is a two-part process. First, the staff of the institution tries to erode the residents’ identities and independence. Second, the resocialization process involves the systematic attempt to build a different personality or self.
Key Terms
- total institution
-
An institution that controls almost all aspects of its members’ lives. Boarding schools, orphanages, military branches, juvenile detention centers, and prisons are examples of total institutions.
- Erving Goffman
-
Erving Goffman (June 11, 1922 – November 19, 1982) was a Canadian-born sociologist and writer. The 73rd president of the American Sociological Association, Goffman made his greatest contribution to social theory with his study of symbolic interaction in the form of dramaturgical analysis, which began with his 1959 book, The Presentation of Self in Everyday Life.
- Resocialization
-
Resocialization is defined as radically changing an inmate’s personality by carefully controlling the environment.
A total institution is a place of work and residence where a great number of similarly situated people, cut off from the wider community for a considerable time, lead an enclosed, formally administered life together. The term was coined by the sociologist Erving Goffman. Within a total institution, the basic needs of an entire block of people are under bureaucratic control, and those needs are handled in an impersonal manner.
Goffman divided total institutions into five different types:
- Institutions established to care for harmless or incapable people, including orphanages, poor houses and nursing homes
- Institutions established to care for people who are incapable of looking after themselves and are also a threat to the community, including leprosariums, mental hospitals, and tuberculosis sanitariums
- Institutions organized to protect the community against perceived intentional dangers, with the welfare of the sequestered people not the immediate issue, including concentration camps, prisoner of war camps, penitentiaries and jails
- Institutions purportedly established to pursue some task, including colonial compounds, work camps, boarding schools, and ships
- Institutions designed as retreats from the world while also often serving as training stations for the religious, including convents, abbeys, and monasteries
The goal of total institutions is resocialization, the radical alteration of residents’ personalities by deliberately manipulating their environment. Key examples include the process of resocializing new recruits into the military so that they can operate as soldiers. Resocialization is a two-part process. First, the staff of the institution tries to erode the residents’ identities and independence. Second, resocialization involves the systematic attempt to build a different personality or self. This is generally done through a system of reward and punishment. The privilege of reading a book, watching television, or making a phone call can be a powerful motivator to conform. Conformity occurs when individuals change their behavior to fit in with the expectations of an authority figure or the expectations of a larger group.
Total Institutions
Prisons are examples of total institutions.
4.6: Gender Socialization
4.6.1: Gender Socialization
Gender socialization is the process of teaching people how to behave as men or women.
Learning Objective
Analyze how the process of gender socialization has an impact on the lifespan development of a person, specifically related to stereotypes between men and women
Key Points
- Gender socialization begins even before a baby is born.
- Gender is socialized through media messages, school instruction, family expectations, and experiences in the workplace.
- The process of gender socialization continues as adolescents enter the workforce. Research has found that adolescents encounter stereotypes of gendered performance in their first jobs.
Key Term
- gender
-
The socio-cultural phenomenon of the division of people into various categories such as male and female, with each having associated roles, expectations, stereotypes, etc.
Example
- Disney movies present clear narratives of how men and women are supposed to behave, socializing young children.
Sociologists and other social scientists generally attribute many of the behavioral differences between men and women to socialization. Socialization is the process of transferring norms, values, beliefs, and behaviors to future group members. In regards to gender socialization, the most common groups people join are the gender categories male and female. Even the categorical options of gender an individual may choose are socialized; social norms act against selecting a gender that is neither male nor female. Thus, gender socialization is the process of educating and instructing potential men and women in how to behave as members of that particular group.
Socialization Before Birth
Preparations for gender socialization begin even before the birth of the child. One of the first questions people ask of expectant parents is whether the baby will be a boy or girl. This is the beginning of a social categorization process that continues throughout life. Preparations for the birth of the child often take the expected sex into consideration, such as painting the infant’s room pink or blue.
Early Life Socialization
One illustration of early life gender socialization can be seen in preschool classrooms. Children in preschool classrooms where teachers were told to emphasize gender differences saw an increase in stereotyped views of what activities are appropriate for boys or girls, while children with teachers who did not emphasize gender showed no increase in stereotyped views. This clearly demonstrates the influence of socialization on the development of gender roles; subtle cues that surround us in our everyday lives strongly influence gender socialization.
Adolescent Socialization
The process of gender socialization continues as adolescents enter the workforce. Research has found that adolescents encounter stereotypes of gendered performance in their first jobs. First jobs are significantly segregated by sex: girls work fewer hours and earn less per hour than boys, hourly wages are higher in job types dominated by boys, and girls are more frequently assigned housework and childcare duties. These first experiences in the professional world shape adolescents’ perspectives on how men and women behave differently in the workforce.
Gender Socialization in Infants
Preparations for the birth of the child often take the expected sex into consideration, such as painting the infant’s room pink or blue.
4.6.2: Learning the Gender Gap
The gender pay gap, or the difference between male and female earnings, is primarily due to discriminatory social processes.
Learning Objective
Discuss the impact the gender pay/wage gap can have on both men and, in particular, women in the economic world
Key Points
- There is a debate as to what extent the gender pay gap is the result of gender differences, implicit discrimination due to lifestyle choices, or explicit discrimination.
- The unadjusted wage gap refers to a measure of the wage gap that does not take into account differences in personal and workplace characteristics between men and women.
- We can assume that the portion of the wage gap explained by measured variables reflects implicit discrimination. In other words, the social forces that cause women to stay home with children more frequently than men, or to be less aggressive in pursuing promotions, are responsible for this part of the wage gap.
- The adjusted wage gap, the portion that remains after accounting for these variables, reflects explicit discrimination: on average, a woman will make less than an otherwise identical man in the exact same occupation.
- Studies have shown that the majority of the gender wage gap is due to implicit, not explicit, discrimination.
Key Terms
- The gender pay gap
-
The gender pay gap is the difference between male and female earnings expressed as a percentage of male earnings, according to the OECD.
- glass ceiling
-
An unwritten, uncodified barrier to further promotion or progression for a member of a specific demographic group.
Example
- Men are paid more per hour and are promoted more frequently than women, both examples of the gender pay gap.
The gender pay gap is the difference between male and female earnings expressed as a percentage of male earnings, according to the Organisation for Economic Co-operation and Development (OECD). The European Commission defines it as the average difference between men’s and women’s hourly earnings. There is a debate as to what extent this gap is the result of gender differences, implicit discrimination due to lifestyle choices, or explicit discrimination. If it is a result of gender differences, then the pay gap is not a problem; men would simply be better equipped to perform more valuable work than women. If it is a result of implicit discrimination due to lifestyle choices, then women’s lower earnings result from the fact that women typically take more time off when having children or choose to work fewer hours. If it is explicit discrimination, then the pay gap is a result of stereotypical beliefs, conscious or unconscious, held by those who hire and set salaries.
Most who study the gender wage gap assume that it is not due to differences in ability between genders – while in general men may be better at physical labor, the pay gap persists in other employment sectors as well. This implies that the gender gap stems from social, rather than biological, origins.
In order to determine whether the gender gap is a result of implicit or explicit discrimination, we can look at the adjusted and unadjusted wage gap. The unadjusted wage gap refers to a measure of the wage gap that does not take into account differences in personal (e.g., age, education, the number of children, job tenure, occupation, and occupational crowding) and workplace (e.g., the economic sector and place of employment) characteristics between men and women. Parts of the raw pay gap can be attributed to the fact that women, for instance, tend to engage more often in part-time work and tend to work in lower paid industries. The remaining part of the raw wage gap that cannot be explained by variables that are thought to influence pay is then referred to as the adjusted gender pay gap and may be explicitly discriminatory.
The total wage gap in the United States is 20.4 percent. A study commissioned by the United States Department of Labor, prepared by Consad Research Corp., asserts that there are “observable differences in the attributes of men and women that account for most of the wage gap. Statistical analysis that includes those variables has produced results that collectively account for between 65.1 and 76.4 percent of a raw gender wage gap of 20.4 percent, and thereby leave an adjusted gender wage gap that is between 4.8 and 7.1 percent.” Thus, only a relatively small part of the wage gap is due to explicit discrimination.
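As a rough check on these figures, the relationship between the raw and adjusted gaps is simple arithmetic: the adjusted gap is the raw gap multiplied by the share of the gap that the measured variables do not explain. The sketch below is a minimal illustration using the numbers quoted above (Python written for this text; the function name is ours).

```python
# Adjusted gap = raw gap x (1 - share of the gap explained by measured characteristics).
def adjusted_gap(raw_gap_pct: float, explained_share: float) -> float:
    return raw_gap_pct * (1.0 - explained_share)

raw = 20.4  # raw U.S. gender wage gap, in percent
for share in (0.651, 0.764):
    print(f"explained {share:.1%} -> adjusted gap {adjusted_gap(raw, share):.1f}%")
# explained 65.1% -> adjusted gap 7.1%
# explained 76.4% -> adjusted gap 4.8%
```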
Gender Pay Gap in the United States, 1980-2009
This graph depicts the female-to-male earnings ratio, median yearly earnings among full-time, year-round workers from 1980 to 2009.
We can assume that the rest of the raw gap, the portion attributed to the measured variables, is the result of implicit discrimination, that is, social forces that pressure women into working part time, staying home with their children, or being less aggressive in pursuing promotions or raises. A 2010 report by the European Foundation for the Improvement of Living and Working Conditions, for example, pointed out that “the major reasons for this gap are very often related to both horizontal and vertical segregation – or the fact that women tend to choose lower-paid professions, reach a ‘glass ceiling’ in their careers, or have their jobs valued less favourably. The origins of these factors could be judged as being discriminatory in themselves, that is, when they are rooted in gender stereotypes of male and female occupations.”
4.6.3: Gender Messages in the Family
Gender role theory posits that boys and girls learn the appropriate behavior and attitudes from the family with which they grow up.
Learning Objective
Justify how the family acts as the most important agent of gender socialization for children and adolescents
Key Points
- Primary socialization – the socialization that occurs during childhood and depends mostly on a child’s family members – is typically the most long-lasting and influential phase of socialization. Therefore, the gender roles learned from family will endure.
- The family is the most important agent of socialization because it serves as the center of the child’s life.
- The division of labor between men and women contributes to the creation of gender roles, which in turn, lead to gender-specific social behavior.
- In the adult years the demands of work and family overwhelm most peer group relations and the influence of peers seriously declines as an agent of socialization, only to return during the elderly years.
- The division of labor creates gender roles, which in turn, lead to gendered social behavior.
Key Terms
- primary socialization
-
The socialization that takes place early in life, as a child and adolescent.
- Division of labor
-
A division of labour is the dividing and specializing of cooperative labour into specifically circumscribed tasks and roles.
- gender role theory
-
The idea that boys and girls learn the behaviors and attitudes associated with performing their biologically assigned gender.
Example
- Families divide responsibilities between parents. In many American families, the father serves as the breadwinner, while the mother maintains the household. This teaches their children that the expected gender roles for men and women require the father to work and the mother to remain in the domestic sphere.
Gender role theory posits that boys and girls learn the appropriate behavior and attitudes from the family and overall culture in which they grow up, and that non-physical gender differences are a product of socialization. Social role theory proposes that social structure is the underlying force behind gender differences, and that the division of labor between two sexes within a society motivates the differences in their respective behavior. Division of labor creates gender roles, which in turn, lead to gender-specific social behavior.
Family is the most important agent of socialization because it serves as the center of a child’s life. Socialization theory tells us that primary socialization – the process that occurs when a child learns the attitudes, values and actions expected of individuals within a particular culture – is the most important phase of social development, and lays the groundwork for all future socialization. Therefore, the family plays a pivotal role in the child’s development, influencing both the attitudes the child will adopt and the values the child will hold. Socialization can be intentional or unintentional; the family may not be conscious of the messages it transmits, but these messages nonetheless contribute to the child’s socialization. Children learn continuously from the environment that adults create, including gender norms.
For example, a child who grows up in a two-parent household with a mother who acts as a homemaker and a father who acts as the breadwinner may internalize these gender roles, regardless of whether or not the family is directly teaching them. Likewise, if parents buy dolls for their daughters and toy trucks for their sons, the children will learn to value different things.
4.6.4: Gender Messages from Peers
Peer groups can serve as a venue for teaching gender roles, especially if conventional gender social norms are strongly held.
Learning Objective
Discuss how peer groups can have a major impact on the gender socialization of a person, particularly children and adolescents
Key Points
- Gender roles refer to the set of social and behavioral norms that are considered socially appropriate for individuals of a specific sex in the context of a specific culture.
- Through gender-role socialization, group members learn about sex differences, and social and cultural expectations.
- Early on, children begin to restrict themselves almost entirely to same-gender groups. Boys tend to participate in more active and forceful activities in larger groups, away from adults, while girls are more likely to play in small groups, near adults.
- Stereotypes are less prominent in mixed-gender groups, because the gender difference is less salient.
- A girl who wishes to take karate class instead of dance lessons may be called a “tomboy,” facing difficulty gaining acceptance from both male and female peer groups.
Key Terms
- gender roles
-
Sets of social and behavioral norms that are generally considered appropriate for either a man or a woman in a social or interpersonal relationship.
- stereotype
-
A conventional, formulaic, and oversimplified conception, opinion, or image of a group of people or things.
- Peer groups
-
A group of people who are roughly equal in age, status, or interests and who influence one another’s norms and behavior; peer groups can serve as a venue for teaching members gender roles.
Example
- If a boy acts “too feminine,” he may be called a sissy and have difficulty gaining acceptance from other boys. If a girl acts “too masculine,” she may be called a tomboy and have difficulty gaining acceptance from other girls.
Gender role theory posits that boys and girls learn the appropriate behavior and attitudes from the family and overall culture in which they grow up, and so non-physical gender differences are a product of socialization. Social role theory proposes that social structure is the underlying force behind gender differences, and that sex-differentiated behavior is motivated by the division of labor between the two sexes within a society. Division of labor creates gender roles, which in turn lead to gendered social behavior.
Peer groups can serve as a venue for teaching members gender roles. Gender roles refer to the set of social and behavioral norms that are considered socially appropriate for individuals of a specific sex in the context of a specific culture, and which differ widely across cultures and historical periods.
Through gender-role socialization, group members learn about sex differences, and social and cultural expectations. Biological males are not always masculine and biological females are not always feminine. Both genders can contain different levels of masculinity and femininity. Peer groups can consist of all males, all females, or both males and females.
Peer groups can have great influence on one another’s gender role behavior, depending on the amount of pressure applied. If a peer group strongly holds to a conventional gender social norm, members will behave in ways predicted by their gender roles; if there is no unanimous peer agreement, gender roles do not correlate with behavior. Much research has examined how gender affects learning within student peer groups: how gender shapes cooperative groups, how it affects the relationships students form within the school setting, and how it can, in turn, affect attainment and learning. Student behavior itself is one influence on peer groups.
Because children restrict themselves almost entirely to same-gender groups early on, it is worth examining how interactions within those groups take place. Boys tend to participate in more active and forceful activities in larger groups, away from adults, while girls are more likely to play in small groups, near adults. These gender differences reflect many stereotypical gender roles within same-gender groups. The stereotypes are less prominent when the groups are mixed-gender.
When children do not conform to the appropriate gender role, they may face negative sanctions such as being criticized or marginalized by their peers. Though many of these sanctions are informal, they can be quite severe. For example, a girl who wishes to take karate class instead of dance lessons may be called a “tomboy,” facing difficulty gaining acceptance from both male and female peer groups. Boys, especially, are subject to intense ridicule for gender nonconformity.
Female Peer Groups
Teenage cliques exert influence upon their members to conform to group standards, including group mores about gender.
4.6.5: Gender Messages in Mass Media
In mass media, women tend to have less significant roles than men, and are often portrayed in stereotypical roles, such as wives or mothers.
Learning Objective
Discuss the types of gender socialization people get from viewing various types of media
Key Points
- Gender socialization occurs through four major agents: family, education, peer groups, and mass media.
- Television commercials and other forms of advertising reinforce inequality and gender-based stereotypes.
- Particularly concerning are instances when women are depicted in dehumanizing, violent, and oppressive ways, especially in music videos.
- The mass media is able to deliver impersonal communications to a vast audience.
Key Terms
- Gender socialization
-
The process of educating and instructing males and females as to the norms, behaviors, values, and beliefs of group membership as men or women.
- impersonal communications
-
The mass media are the means for delivering impersonal communications directed to a vast audience, and include radio, advertising, television, and music.
- television commercials
-
Television commercials and other forms of advertising also reinforce inequality and gender-based stereotypes.
Example
- An example of a music video spreading gender norms is the video for 50 Cent’s “Pimp.” As in many other rap music videos, women appear as servile, scantily clad objects that exist chiefly to serve men.
Gender socialization occurs through four major agents: family, education, peer groups, and mass media. Because mass media has enormous effects on our attitude and behavior, notably in regards to aggression, it is an important contributor to the socialization process. This is particularly true with regards to gender. In television and movies, women tend to have less significant roles than men. They are often portrayed as wives or mothers, rather than as main characters. When women are given a lead role, they are often one of two extremes: either a wholesome, saint-like figure or a malevolent, hyper-sexual figure. This same inequality is similarly pervasive in children’s movies. Research indicates that among the 101 top-grossing, G-rated movies released between 1990 and 2005, three out of every four characters were male. Out of those movies, only seven films were even close to having a balanced cast of characters, with a ratio of less than 1.5 male characters per 1 female character.
Television commercials and other forms of advertising reinforce inequality and gender-based stereotypes. Women almost exclusively appear in ads that promote cooking, cleaning, or childcare-related products. In general, women are underrepresented in roles, or ads, that reference leadership, intelligence, or a balanced psyche. Particularly concerning are instances when women are depicted in dehumanizing, oppressive ways, especially in music videos. The music video for “Pimp,” a song by 50 Cent, Snoop Dogg, and G-Unit, demonstrates how harmful gender messages can be disseminated through mass media. In the video, women are objectified and portrayed as only existing to serve men. They wear little clothing and are walked around on leashes by men, as if they were dogs and not humans.
Gender Messages in Mass Media
Traditional images of American gender roles reinforce the idea that women should be subordinate to men.
4.7: Socialization Throughout the Life Span
4.7.1: Socialization Throughout the Life Span
Socialization is the lifelong process of preparing an individual to live within his or her own society.
Learning Objective
Discuss the concept of both primary and secondary socialization as a lifelong process which begins in infancy and continues into late adulthood
Key Points
- Socialization is the lifelong process of inheriting and disseminating norms, customs and ideologies, providing an individual with the skills and habits necessary for participating within his or her own society.
- Socialization is the process by which human infants acquire the skills necessary to perform as a functioning member of their society, a process that continues throughout an individual’s life.
- The socialization process can be divided into primary and secondary socialization. Primary socialization occurs when a child learns the attitudes, values and actions appropriate to individuals as members of a particular culture. This is mainly influenced by the immediate family and friends.
- Secondary socialization is the process of learning the appropriate behavior as a member of a smaller group within the larger society. It consists of the behavioral patterns reinforced by socializing agents of society, like schools and workplaces.
- The life course approach was developed in the 1960s for analyzing people’s lives within structural, social and cultural contexts.
Key Terms
- socialization
-
The process of learning one’s culture and how to live within it.
- agent
-
One who exerts power, or has the power to act; an actor.
Example
- Any study that focuses on how cultural context influences individual development is an example of the life course approach. An example would be Erving Goffman’s work Asylums, which discusses how individuals are molded by institutions.
Socialization refers to the lifelong process of inheriting and disseminating norms, customs and ideologies that provide an individual with the skills necessary for participating within society. Socialization is a process that continues throughout an individual’s life. Some social scientists say socialization represents the process of learning throughout life and is a central influence on the behavior, beliefs and actions of adults as well as of children.
George Herbert Mead (1863–1931) developed the concept of the self as something that develops through social experience. Since social experience is the exchange of symbols, people find meaning in every action, and seeking meaning leads people to imagine the intentions of others from the others’ point of view. In effect, others are a mirror in which we can see ourselves. Charles Horton Cooley (1864–1929) coined the term “looking-glass self”: the self-image based on how we think others see us. According to Mead, the key to developing the self is learning to take the role of the other. With limited social experience, infants can only develop a sense of identity through imitation. Children gradually learn to take the roles of several others. The final stage is the generalized other: the widespread cultural norms and values we use as a reference for evaluating others.
Primary and Secondary Socialization
The socialization process can be divided into primary and secondary socialization. Primary socialization occurs when a child learns the attitudes, values, and actions appropriate to individuals as members of a particular culture; it is mainly influenced by the immediate family and friends. Secondary socialization is the process of learning the appropriate behavior as a member of a smaller group within the larger society; it consists of the behavioral patterns reinforced by socializing agents of society, like schools and workplaces. For example, as new employees become socialized in an organization, they learn about its history, values, jargon, culture, and procedures.
The Life Course Approach
The life course approach was developed in the 1960s for analyzing people’s lives within structural, social, and cultural contexts. Origins of this approach can be traced to such pioneering studies as Thomas and Znaniecki’s “The Polish Peasant in Europe and America” from the 1920s or Mannheim’s essay on the “Problem of Generations.” The life course approach examines an individual’s life history and how early events influence future decisions.
Life Course Approach
The life course approach studies the impact that sociocultural contexts have on an individual’s development, from infancy until old age.
4.7.2: The Life Course
The life course approach analyzes people’s lives within structural, social, and cultural contexts.
Learning Objective
Explain the life course perspective as it relates to a person’s development from infancy to old age, in terms of structural, social and cultural contexts
Key Points
- The life course approach refers to an approach developed in the 1960s for analyzing people’s lives within structural, social, and cultural contexts.
- The life course approach examines an individual’s life history and sees for example how early events influence future decisions and events, giving particular attention to the connection between individuals and the historical and socioeconomic context in which they lived.
- In a more general reading of the life course, human life is seen as divided into stages, which are somewhat arbitrary, but capture periods of life that are similar across cultures. These stages of life often inform and are reinforced by legal definitions of life stages.
Key Terms
- life course
-
the sequence of events, roles and age categories that people pass through from birth until death, all of which are culturally defined
- age
-
Mature age; especially, the time of life at which one attains full personal rights and capacities.
- socioeconomic
-
Of or pertaining to social and economic factors.
The life course approach, also known as the life course perspective or life course theory, refers to an approach developed in the 1960s for analyzing people’s lives within structural, social, and cultural contexts. Origins of this approach can be traced to pioneering studies such as Thomas and Znaniecki’s “The Polish Peasant in Europe and America” from the 1920s or Mannheim’s essay on the “Problem of Generations.”
The life course approach examines an individual’s life history and sees for example how early events influence future decisions and events, giving particular attention to the connection between individuals and the historical and socioeconomic context in which they have lived. It holds that the events and roles that are part of the person’s life course do not necessarily proceed in a given sequence, but rather constitute the sum total of the person’s actual experience.
In a more general reading, human life is often seen as divided into various age spans such as infancy, toddlerhood, childhood, adolescence, young adulthood, prime adulthood, middle age, and old age. These divisions are somewhat arbitrary, but they generally capture periods of life that reflect a certain degree of similarity in development across cultures.
Old Age
This man is well into his later years and depicts life in its final stages.
In many countries, such as Sweden and the United States, adulthood legally begins at the age of eighteen. This is a major age milestone that is marked by significantly different attitudes toward the person who undergoes the transition. It demonstrates how developmental stages shape legal definitions of life stages and, in turn, attitudes toward people at different points in the human life course.
Infant
This picture depicts an individual at the earliest of life stages.
4.7.3: Anticipatory Socialization and Resocialization
Anticipatory socialization comes from an individual’s desire to join a group while resocialization is imposed upon an individual by a group.
Learning Objective
Explain the two steps associated with the resocialization process and how people use anticipatory socialization as a means to gain entrance into desired social groups
Key Points
- Anticipatory socialization is the process of changing one’s attitudes and behaviors in preparation for a shift in one’s role.
- The process of anticipatory socialization is facilitated by social interactions with members of the group an individual aspires to join.
- Resocialization is radically changing an inmate’s personality by carefully controlling their environment.
- Resocialization is a two-part process. First, the staff of the institution tries to erode the residents’ identities and independence. Second, there is a systematic attempt to build a different personality or self.
Key Terms
- Anticipatory socialization
-
Anticipatory socialization is the process, facilitated by social interactions, in which non-group-members learn to take on the values and standards of groups that they aspire to join, so as to ease their entry into the group and help them interact competently once they have been accepted by it.
- Social interactions
-
It refers to a relationship between two (i.e. a dyad), three (i.e. a triad) or more individuals (e.g. a social group).
Example
- An example of anticipatory socialization includes law school students learning how to behave like lawyers. An example of resocialization is the process of bringing new recruits into the military.
Anticipatory Socialization
Anticipatory socialization is the process by which non-group-members adopt the values and standards of groups that they aspire to join, so as to ease their entry into the group and help them interact appropriately once they have been accepted. It involves changing one’s attitudes and behaviors in preparation for a shift in one’s role. Practices commonly associated with anticipatory socialization include grooming, play-acting, training, and rehearsing. Examples of anticipatory socialization include law school students learning how to behave like lawyers, older people preparing for retirement, and Mormon boys getting ready to become missionaries.
Anticipatory socialization was first defined by sociologist Robert K. Merton. It has its origins in a 1949 study of the United States military which found that privates who modeled their attitudes and behaviors on those of officers were more likely to be promoted than those who did not.
When people are blocked from access to a group they might have wanted to join, they may reject that group’s values and norms. Instead, they begin an anticipatory socialization process with groups that are more receptive to them. One example is the case of economically disadvantaged teenagers who aspire to become drug dealers rather than professionals. While some critics would claim that these individuals lack motivation, some sociologists say they are simply making a pragmatic adjustment to the opportunities available to them.
Resocialization
Resocialization is defined as radically changing someone’s personality by carefully controlling their environment. Total institutions aim to radically alter residents’ personalities through deliberate manipulation of their environment. Key examples include the process of resocializing new recruits into the military so that they can operate as soldiers (or, in other words, as members of a cohesive unit) and the reverse process, in which those who have become accustomed to such roles return to society after military discharge. Resocialization may also be required for inmates who come out of prison and need to acclimate themselves back into civilian life.
Resocialization is a two-part process. First, the staff of the institution tries to erode the residents’ identities and sense of independence. Strategies include forcing individuals to surrender all personal possessions, cut their hair in a uniform manner, and wear standardized clothing. Independence can be eroded by subjecting residents to humiliating and degrading procedures. Examples include strip searches, fingerprinting, and replacing residents’ given names with serial numbers or code names. Second, resocialization involves the systematic attempt to build a different personality or self. This is generally accomplished through a system of rewards and punishments. The privilege of reading a book, watching television, or making a phone call can be powerful motivation to conform. Conformity occurs when individuals change their behavior to fit the expectations of an authority figure or the expectations of a larger group.
Guitar Lessons
The young woman is interacting with her professor in anticipation of joining other guitarists.
4.7.4: Stages of Socialization Throughout the Life Span
The socialization process can be separated into two main stages: primary socialization and secondary socialization.
Learning Objective
Give examples of how the socialization process progresses throughout a person’s life
Key Points
- The life process of socialization is generally divided into two parts: primary and secondary socialization.
- Primary socialization takes place early in life, as a child and adolescent. This is when an individual develops their core identity.
- Secondary socialization takes place throughout an individual’s life, both as a child and as one encounters new groups. This involves more specific changes in response to the acquisition of new group memberships and roles and differently structured social situations.
- Some of the more significant contributors to the socialization process are: parents, guardians, friends, schools, siblings or other family members, social clubs (like religions or sports teams), life partners (romantic or platonic), and co-workers.
Key Terms
- primary socialization
-
The socialization that takes place early in life, as a child and adolescent.
- secondary socialization
-
The socialization that takes place throughout one’s life, both as a child and as one encounters new groups that require additional socialization.
Socialization is a life process, but is generally divided into two parts: primary and secondary socialization.
Primary Socialization
The nuclear family serves as the primary force of socialization for young children.
Primary socialization takes place early in life, as a child and adolescent. Secondary socialization refers to the socialization that takes place throughout one’s life, both as a child and as one encounters new groups that require additional socialization. While there are scholars who argue that only one or the other of these occurs, most social scientists tend to combine the two, arguing that the basic or core identity of the individual develops during primary socialization, with more specific changes occurring later—secondary socialization—in response to the acquisition of new group memberships and roles and differently structured social situations. The need for later-life socialization may stem from the increasing complexity of society with its corresponding increase in varied roles and responsibilities.
Secondary Socialization
By the time individuals are in their preteen or teenage years, peer groups play a more powerful role in socialization than family members.
Mortimer and Simmons outline three specific ways these two parts of socialization differ:
- Content: Socialization in childhood is thought to be concerned with the regulation of biological drives. In adolescence, socialization is concerned with the development of overarching values and the self-image. In adulthood, socialization involves more overt and specific norms and behaviors, such as those related to the work role as well as more superficial personality features.
- Context: In earlier periods, the socializee (the person being socialized) more clearly assumes the status of learner within the context of the initial setting (which may be a family of orientation, an orphanage, a period of homelessness, or any other initial social group at the beginning of a child’s life), the school (or other educational context), or the peer group. Relationships in this earlier period are also more likely to be affectively charged, i.e., highly emotional. In adulthood, though the socializee takes the role of student at times, much socialization occurs after the socializee has assumed full incumbency of the adult role. There is also a greater likelihood of more formal relationships, due to situational contexts such as the work environment, which dampens the affective component.
- Response: The child and adolescent may be more easily malleable than the adult. Also, much adult socialization is self-initiated and voluntary; adults can leave or terminate the process at any time if they have the proper resources (symbolic, financial, and social) to do so.
Socialization is, of course, a social process. As such, it involves interactions between people. Socialization, as noted in the distinction between primary and secondary, can take place in multiple contexts and as a result of contact with numerous groups. Some of the more significant contributors to the socialization process are: parents, guardians, friends, schools, siblings or other family members, social clubs (like religions or sports teams), life partners (romantic or platonic), and co-workers. Each of these groups includes a culture that must be learned and to some degree appropriated by the socializee in order to gain admittance to the group.
4.7.5: Childhood
Childhood has been constructed in different ways over time, though modern childhood is often defined by play, learning and socializing.
Learning Objective
Evaluate the importance of childhood (early, middle and adolescence) in terms of socialization and acceptance in society
Key Points
- Contemporary conceptions of childhood generally divide the period into three main stages: early childhood (toddlerhood), middle childhood, and adolescence.
- Childhood is not an absolute concept defined by age and experience. Instead, childhood as a concept has been conceived in very different manners over time.
- American culture considers outdoor play an essential part of childhood, though in reality children are increasingly playing indoors.
Key Terms
- middle childhood
-
The school-age stage of childhood, beginning at around age seven or eight.
- adolescence
-
The transitional period of physical and psychological development between childhood and maturity.
- toddlerhood
-
The period of one’s life in which one is a toddler.
Childhood is the age span ranging from birth to adolescence. In developmental psychology, childhood is divided up into the developmental stages of toddlerhood (learning to walk), early childhood (play age), middle childhood (school age), and adolescence (puberty through post-puberty).
Age Ranges of Childhood
The term childhood is non-specific and can imply a varying range of years in human development, depending on biological, personal, religious, cultural, or national interpretations. Developmentally and biologically, it refers to the period between infancy and puberty. In common terms, childhood is considered to start from birth. Some consider that childhood, as a concept of play and innocence, ends at adolescence. In the legal systems of many countries, there is an age of majority at which point childhood officially ends and a person legally becomes an adult. Globally, the age of majority ranges anywhere from 15 to 21, with 18 being the most common.
Developmental Stages of Childhood
Early childhood follows the infancy stage and begins with toddlerhood, reached when the child begins speaking or taking steps independently. Toddlerhood ends around age three, when the child becomes less dependent on parental assistance for basic needs, and early childhood continues through approximately age seven or eight. According to the National Association for the Education of Young Children, early childhood spans the period from birth to age eight.
In most Western societies, middle childhood begins at around age seven or eight, approximating primary school age, and ends around puberty, which typically marks the beginning of adolescence.
Adolescence is usually determined by the onset of puberty, though puberty may begin during preadolescence. The end of adolescence and the beginning of adulthood varies by country. Even within a single nation-state or ethnic group, there may be different conceptions of when an individual is considered to be (chronologically and legally) mature enough to be entrusted by society with certain tasks.
Modern Concepts of Childhood
The concept of childhood appears to evolve and change shape as lifestyles change and adult expectations alter. Some believe that children should not have any worries and should not have to work; life should be happy and trouble-free. Childhood is generally a time of playing, learning, socializing, exploring, and worrying in a world without much adult interference, aside from parents. It is a time of learning about responsibilities without having to deal with adult responsibilities.
Childhood is often retrospectively viewed as a time of innocence. According to this view, children have yet to be negatively influenced by society and are naive, rather than ignorant. A “loss of innocence” is a common concept, and is often seen as an integral part of coming of age. It is usually thought of as an experience or period in a child’s life that widens their awareness of evil, pain or the world around them. This theme is demonstrated in the novels To Kill a Mockingbird and Lord of the Flies. The fictional character Peter Pan is the embodiment of a childhood that never ends.
Play
Play is essential to the cognitive, physical, social, and emotional well-being of children. It offers children opportunities for physical (running, jumping, climbing, etc.), intellectual (social skills, community norms, ethics, and general knowledge) and emotional development (empathy, compassion, and friendships). Unstructured play encourages creativity and imagination and allows children to interact with the world around them. Playing and interacting with other children, as well as with some adults, provides opportunities for friendships, social interactions, practicing adult roles, and resolving conflicts.
Play
Play is essential for the cognitive, physical, and social wellbeing of children.
Undirected play allows children to learn how to work in groups, to share, to negotiate, to resolve conflicts, and to learn self-advocacy skills. However, when play is controlled by adults, children acquiesce to adult rules and concerns and lose some of the benefits play offers them, particularly in developing creativity, leadership, and group skills.
Play is considered to be so important to optimal child development that it has been recognized by the United Nations High Commission for Human Rights as a right of every child. Raising children in a hurried and pressured style may limit the benefits they would gain from child-driven play.
American culture considers outdoor play an essential part of childhood. However, the reality is that children are increasingly playing indoors. Nature Deficit Disorder, a term coined by Richard Louv in his 2005 book Last Child in the Woods, refers to the alleged trend in the United States that children are spending less time outdoors, resulting in a wide range of behavioral problems. With the advent of the computer, video games, and television, children have more reasons to stay inside rather than explore outdoors. On average, American children spend forty-four hours per week with electronic media. Parents are also keeping children indoors because of growing fears of “stranger danger.”
Kids playing in the street
Children’s street culture transforms seemingly ordinary places into imaginative worlds.
4.7.6: Adolescence
Adolescence is a period of significant cognitive, physical and social development, including changes in family and peer relationships.
Learning Objective
Discuss the influences on, and significance of, adolescent socialization and development, culminating in the development of autonomy
Key Points
- In addition to biological and social development, adolescents are subject to varied experiences across cultures, depending on norms and expectations around sexuality, autonomy, and occupation.
- Today, media has a significant influence on the experience and conceptions of adolescents, particularly in Westernized societies.
- The experience of adolescence is influenced by external factors like cultural norms and the media.
Key Terms
- adolescence
-
The transitional period of physical and psychological development between childhood and maturity.
- puberty
-
The age at which a person is first capable of sexual reproduction.
Adolescence is a transitional stage of physical and psychological human development, generally occurring between puberty and legal adulthood. Though the period of adolescence is most closely associated with the teenage years, chronological age provides only a rough marker of adolescence, and scholars have found it difficult to agree upon a precise definition. Thus, a thorough understanding of adolescence depends on information from various perspectives, most importantly from the areas of psychology, biology, history, sociology, education, and anthropology. Within all of these disciplines, adolescence is viewed as a transitional period between childhood and adulthood, whose purpose is to prepare children for adult roles.
The end of adolescence and the beginning of adulthood varies by country and by function. Even within a single nation-state or culture, there can be different ages at which an individual is considered to be (chronologically and legally) mature enough to handle certain tasks. In the west, such “coming of age” milestones include driving a vehicle, having legal sexual relations, serving in the armed forces or on a jury, purchasing and drinking alcohol, voting, entering into contracts, completing certain levels of education, and marrying. Adolescence is usually accompanied by increased independence and less supervision by parents or legal guardians.
The study of adolescent development often involves interdisciplinary collaborations. For example, researchers in neuroscience or bio-behavioral health might focus on pubertal changes in brain structure and its effects on cognition or social relations. Sociologists interested in adolescence might focus on the acquisition of social roles (e.g., worker or romantic partner) and how this varies across cultures or social conditions. Developmental psychologists might focus on changes in relations with parents and peers as a function of school structure and pubertal status.
Peer Relationships
Peer groups are especially important during adolescence, a period of development characterized by a dramatic increase in time spent with peers and a decrease in adult supervision. Adolescents also associate with friends of the opposite sex much more than in childhood and tend to identify with larger groups of peers based on shared characteristics.
Peer groups offer members the opportunity to develop various social skills like empathy, sharing and leadership. They can have positive influences on an individual, including academic motivation and performance. They can also have negative influences and lead to an increase in experimentation with drugs, drinking, vandalism, and stealing. Susceptibility to peer pressure increases during early adolescence, peaks around age 14, and declines thereafter.
During early adolescence, adolescents often associate in cliques: exclusive, single-sex groups of peers with whom they are particularly close. Toward late adolescence, cliques often merge into mixed-sex groups as teenagers begin romantically engaging with one another. These small friend groups break down even further as socialization becomes more couple-oriented. Despite the common notion that cliques are an inherently negative influence, they may help adolescents become socially acclimated and form a stronger sense of identity.
Romance and Sexual Activity
Romantic relationships tend to increase in prevalence throughout adolescence. By age 15, 53 percent of adolescents have had a romantic relationship that lasted at least one month over the course of the previous 18 months. A 2002 American study found that the average age of first sexual intercourse was 17 for males and 17.3 for females. As individuals develop into mature adolescents, there is an increase in the likelihood of a long-term relationship, which can be explained by sexual maturation and the development of cognitive skills necessary to maintain a romantic bond (e.g. caregiving, appropriate attachment). Long-term relationships allow adolescents to gain skills necessary for high-quality relationships later in life and contribute to development of feelings of self-worth.
Adolescence marks a time of sexual maturation, which impacts the types of social interactions adolescents maintain. While adolescents may engage in casual sexual encounters (often referred to as hookups in the United States), most sexual experience during this period of development takes place within romantic relationships.
Autonomy
Adolescents strive for autonomy. According to McElhaney et al., there are three ways in which autonomy can be described:
- Emotional autonomy is the development of more adult-like close relationships with adults and peers.
- Behavioral autonomy is the ability to make independent decisions and follow through with them.
- Cognitive autonomy is characterized as the manifestation of an independent set of beliefs, values, and opinions.
Adolescent Flirtation
Adolescence is a time of social and sexual exploration.
4.7.7: Transitional Adulthood
Coming of age traditions, while different across the world, are seen in almost every society.
Learning Objective
Discuss how a young person “comes of age”, particularly in the context of religion or rituals
Key Points
- In many cultures, the transition from childhood to adulthood is marked by a coming of age tradition. In some, such traditions are associated with the arrival of sexual maturity in early adolescence; in others, it is associated with the arrival of new religious responsibilities.
- Often, coming of age traditions are religious, and signify that the individual is taking on a different role in his or her religious life, such as the Jewish bar mitzvah or Hindu ceremonies celebrating maturity.
- Other times these traditions are secular in nature and can range from legal benefits to extravagant parties.
Key Term
- coming of age
-
A person’s journey from childhood or adolescence to adulthood.
“Coming of age” refers to a young person’s transition from childhood to adulthood. The age at which this transition takes place varies among societies, as does the nature of the transition. It can be a simple legal convention or part of a larger ritual. In some societies today, the transition is associated with the arrival of sexual maturity in early adolescence; in others, it is associated with the age at which one takes on religious responsibilities. In Western societies, the transition centers on legal conventions that stipulate points in late adolescence or early adulthood as the age of maturity. Still, many cultures retain ceremonies to confirm the coming of age, and benefits accompany the change.
Religion
Religion is often a determinant of when and how individuals come of age.
When members of the Baha’i faith turn 15, they reach the “age of maturity” and are considered spiritually mature, and are responsible for individually determining whether they wish to remain members of Baha’i. Those who declare that they wish to remain members of Baha’i are expected to begin observing certain Baha’i laws, such as obligatory prayer and fasting.
In many Christian churches, a young person celebrates his or her coming of age with the Sacrament of Confirmation. Some traditions withhold the rite of Holy Communion from those not yet at the age of accountability on the grounds that children do not understand what the sacrament means. In some denominations, full membership in the church, if not bestowed at birth, often must wait until the age of accountability, and is frequently granted only after a period of preparation known as catechesis. The time of innocence before one has the ability to understand truly the laws of God, and during which God sees one as innocent, is also seen as applying to individuals who suffer from a mental disability which prevents them from ever reaching a time when they are capable of understanding the laws of God. These individuals are thus seen as existing in a perpetual state of innocence by the grace of God.
In Hinduism, coming of age generally signifies that a boy or girl is mature enough to understand his or her responsibility toward family and society. Hinduism also has the sacred thread ceremony for Dvija (twice-born) boys, which marks their coming of age to perform religious ceremonies. Women often celebrate their coming of age with a ceremony in which they dress in saris and announce their maturity to the community.
In Islam, children are not required to perform any obligatory acts of Islamic teachings prior to reaching the age of puberty, although they should be encouraged to begin praying at the age of seven. Before reaching puberty it is recommended to pray in obeisance to Allah and to exemplify Islamic customs, but as soon as one exhibits any characteristic of puberty, that person is required to perform the prayers and other obligations of Islam.
In the Jewish faith, boys reach religious maturity at the age of 13, signified by their bar mitzvah ceremony. Girls are believed to mature earlier and can have their bat mitzvah at the age of 12. Once the ritual is done, the new men and women are looked upon as adults and are expected to uphold the Jewish commandments and laws.
Professional Initiatory Rituals
Coming of age initiation rituals can occur in various professional and educational organizations. In many universities of Europe, South America, and India, first-year students are made to undergo tests or humiliation before being accepted as students. Perhaps the oldest of these is “Raisin Monday,” which still takes place at the University of St Andrews in Scotland. A senior student takes a new student and shows him or her around the university. In gratitude, the new student gives the senior student a pound of raisins, for which the senior student gives a receipt. If the new student later fails to produce the receipt on demand, he or she may be thrown into a fountain.
Universities in Chile follow an annual ritual called “Mechoneo” (the act of pulling somebody’s hair). First-year students are initiated through theatrical “punishment”: freshmen are tied together while upperclassmen throw eggs, flour, water, and other substances at them. Other universities have their own traditional ways of initiating freshmen. In the United States, these sorts of initiation rituals are most commonly found in fraternities and sororities; Greek organizations may have different processes through which associate members, also known as pledges, become full members.
Bar Mitzvah
This thirteen-year-old boy is dressed in the religious garb and symbols of the Jewish faith on the day of his bar mitzvah. He is about to be recognized as an adult by the Jewish community.
4.7.8: Marriage and Responsibility
People marry for love, for socioeconomic stability, to start a family, and to create obligations between one another.
Learning Objective
Assess the importance of the institution of marriage, as well as the various reasons why people enter into a marriage
Key Points
- Marriage rituals and traditions have changed significantly over time and vary across cultures.
- Marriage is a personal and sentimental act as well as one that often has religious and legal implications and significance.
- As of 2003, one’s level of educational attainment was a significant predictor of the educational attainment of one’s spouse.
Key Terms
- marriage
-
The union of two (or sometimes more) people, usually to the exclusion of all others.
- same-sex marriage
-
A marriage that unifies two people of the same sex either legally or only symbolically.
- procreation
-
The sexual activity of conceiving and bearing biological offspring.
Marriage is a governmentally, socially, or religiously recognized interpersonal relationship, usually intimate and sexual, that is often created as a form of contract. The most frequently occurring form of marriage is between a woman and a man, where the feminine term wife and the masculine term husband are generally used to describe the parties to the contract. Some countries and American states recognize same-sex marriage, but gaining recognition for these unions remains a legal battle around the world.
Same-Sex Marriage
In some states and countries, homosexual couples can get legally married.
The ceremony in which a marriage is enacted and announced to the community is called a wedding. The reasons people marry vary widely, but usually include publicly and formally declaring their love, forming a single household unit, legitimizing sexual relations and procreation, securing social and economic stability, and educating and nurturing children. A marriage can be declared by a wedding ceremony, which may be performed either by a religious officiant or through a similar government-sanctioned secular process. The act of marriage creates obligations between the individuals involved and, in some societies, between the parties’ extended families. Marriages are perpetual agreements with legal consequences, terminated only by the death of one party or by formal dissolution processes, such as divorce and annulment.
Schwartz and Mare examined trends in marriage over time and found that the old maxim “opposites attract” describes marriage less accurately than the maxim “birds of a feather flock together.” Their research focused on one specific similarity in marital partners: education. They found that the correlation between the educational levels of American married couples decreased slightly after World War II, but has since increased substantially. As of 2003, one’s level of educational attainment was a significant predictor of the educational attainment of one’s spouse. People without a high school diploma are unlikely to marry someone with more educational attainment, and people with a college degree are likely to marry people with a similar level of educational attainment. Part of the reason education is so influential in determining the education of one’s spouse is that people tend to form groups based on levels of education. First, there are the groups formed in the process of becoming educated; many people meet their spouses at school. Jobs after one completes his or her education also tend to be grouped by level of education. As a result, people spend more time with individuals of a similar level of educational attainment. Because most people tend to marry or partner with individuals with whom they spend a lot of time, it is not surprising that there is significant educational similarity between spouses.
One well-known attribute of marriage is that it tends to have health benefits. Happily married people tend to be healthier than unmarried people. However, unhappily married couples may not receive the same health benefits and may actually be less healthy than their single peers.
Wedding
In many countries, heterosexual weddings have the women dress in traditional white with a veil and the men in a tuxedo.
4.7.9: The Middle Years
Middle adulthood is generally accompanied by a decline in physical health and fertility, and an increase in ability to cope with stress.
Learning Objective
Discuss the implications of middle age in terms of fading physical health and mortality concerns
Key Points
- There is much debate over the definition of the period in a person’s life called the “middle years” or “middle adulthood,” but it is generally thought to span the ages of 40 to 65.
- During this time, health begins to decline, but the middle-aged benefit from greater life experiences and less volatile responses to stress.
- Both male and female fertility begin to decline with middle age. Additionally, in developed countries, mortality begins to increase more noticeably each year from age 40 onwards.
Key Terms
- middle age
-
The period of life between youth and old age; midlife.
- aging
-
The process of becoming older or more mature.
- advanced maternal age
-
Pregnancy at an older maternal age, which increases the risk of a child being born with some disorders, such as Down syndrome.
Middle age is the period of age beyond young adulthood but before the onset of old age. Various attempts have been made to define this age, which is around the third quarter of the average life span. The U.S. Census lists middle age as including people aged from 35 to 54, while developmental psychologist Erik Erikson argues that middle adulthood occurs from the age of 40 until 65.
Middle-aged adults often show visible signs of aging such as the loss of skin elasticity and the graying of hair. Physical fitness usually wanes, with a 5–10 kg (10-20 lb) accumulation of body fat, reduction in aerobic performance and a decrease in maximal heart rate. Strength and flexibility also decrease throughout middle age. However, people age at different rates and there can be significant differences between individuals of the same age.
Both male and female fertility declines with advancing age. Advanced maternal age increases the risk of a child being born with some disorders, such as Down syndrome. Advanced paternal age sharply increases the risk of miscarriage, as well as Down syndrome, schizophrenia, autism, and bipolar disorder. Middle-aged women will experience menopause, which ends natural fertility, in their late 40s or early 50s.
In developed countries, mortality begins to increase more noticeably each year from age 40 onwards, mainly due to age-related health problems, such as heart disease and cancer. However, the majority of middle-age people in industrialized nations can expect to live into old age. In general, life expectancy in developing countries is much lower and the risk of death at all ages is higher.
However, well-being involves more than merely physical factors, and middle age is not experienced as a “time of decline” for healthy people. Middle-aged people benefit from greater life experience than they had when they were young; this contributes to happiness and makes emotional responses to stress less volatile.
Middle Age
Diana DeGette, a politician from Colorado, was born in 1957 and is in the middle age stage of life.
4.7.10: Parenthood
Parenting is the process of supporting the physical, emotional, social, and intellectual development of a child from infancy to adulthood.
Learning Objective
Contrast the four parenting styles: authoritarian, authoritative, permissive, and uninvolved
Key Points
- Parenting is defined by a range of different skills and styles. It is also a continuously changing process as the child grows and develops.
- Parenting challenges and techniques transform continuously over the lifespan of a child.
- Parenting is guided by different philosophies and practices, which inform parenting styles and family structure.
- Developmental psychologist Diana Baumrind identified three main parenting styles in early child development; these were later expanded to four: authoritarian, authoritative, permissive, and uninvolved.
- It is important to realize that parenting doesn’t end when a child turns 18. Support is needed in a child’s life well beyond the adolescent years and continues into middle and later adulthood.
Key Terms
- authoritarian parenting style
-
A parenting style that is very rigid and strict; parents who practice authoritarian parenting have a strict set of rules and expectations and require rigid obedience.
- family planning
-
The practice of deciding whether and when to have children, often through the use of birth control.
- parenting
-
Process of raising and educating a child from birth until adulthood.
Parenting is the process of promoting and supporting the physical, emotional, social, and intellectual development of a child from infancy to adulthood. Parenting refers to the aspects of raising a child aside from the biological relationship. Parenting is usually carried out by the biological parents of the child in question, although governments and society take a role as well.
Social class, wealth, and income have the strongest impact on the child-rearing methods parents use. Understanding parenting styles helps us understand how those styles contribute to the behavior and development of children.
Parenting Styles
Developmental psychologist Diana Baumrind identified three main parenting styles in early child development: authoritarian, authoritative, and permissive. These parenting styles were later expanded to four, including an uninvolved style. These four styles of parenting involve combinations of acceptance and responsiveness on the one hand, and demand and control on the other.
- Authoritarian parenting styles can be very rigid and strict. If rules are not followed, punishment is most often used to ensure obedience. There is usually no explanation for punishment except that the child is in trouble and should listen accordingly.
- Authoritative parenting relies on positive reinforcement and infrequent use of punishment. Parents are more aware of a child’s feelings and capabilities, and they support the development of a child’s autonomy within reasonable limits. There is a give-and-take atmosphere involved in parent-child communication, and both control and support are exercised in authoritative parenting.
- Permissive or indulgent parenting is most popular in middle-class families in Western culture. In these family settings, a child’s freedom and autonomy are valued, and parents tend to rely mostly on reasoning and explanation. There tend to be few, if any, punishments or rules in this style of parenting, and children are said to be free from external constraints.
- An uninvolved parenting style is one in which parents are often emotionally, and sometimes even physically, absent. They have little to no expectations of the child and often little communication with the child. They are not responsive to the child’s needs and do not demand anything of the child in terms of behavioral expectations.
There is no single or definitive model of parenting. What may be right for one family or one child may not be suitable for another, although research suggests that the authoritative parenting style is highly effective and tends to yield self-reliant, cheerful, and friendly children.
Various Parenting Practices
- Attachment Parenting: working to strengthen the intuitive, psychological, and emotional bond between the primary caregiver and the child
- Helicopter Parenting: over-parenting; parents are constantly involving themselves, interrupting the child’s ability to function on their own
- Narcissistic Parenting: parents are driven by their own needs; their children are an extension of their own identity; use their children to live out their dreams
- Positive Parenting: offering unconditional support, guiding and supporting the child for healthy development
- Slow Parenting: allowing the child to develop their own interests and grow into their own person; lots of family time; letting children make their own decisions; limiting electronics and favoring simple toys
- Spiritual Parenting: respecting the child’s individuality; making space for the child to develop a sense of their own beliefs through their personality and their own potentials
- Strict Parenting: focused on strict discipline; demanding, with high expectations from the parents
- Toxic Parenting: poor parenting; complete disruption of the child’s ability to develop a sense of self and reduced self-esteem; the child’s needs are neglected, and abuse is sometimes seen in this parenting style
- Unconditional Parenting: giving unconditional positive encouragement
Parenting across the Lifespan
Family planning is the decision whether and when to become parents, including planning, preparing, and gathering resources. Parents should assess whether they have the required financial resources (the raising of a child costs around $16,198 yearly in the United States). They should also assess whether their family situation is stable enough and whether they themselves are responsible and qualified enough to raise a child. Reproductive health and preconceptional care affect pregnancy, reproductive success, and maternal and child physical and mental health. During pregnancy, the unborn child is affected by many decisions that his or her parents make, particularly choices linked to their lifestyle. The health and diet decisions of the mother can have either a positive or negative impact on the child in utero.
It is important to realize that parenting doesn’t end when a child turns 18. Support is needed in a child’s life well beyond the adolescent years and continues into middle and later adulthood. Parental support is crucial in helping children figure out who they are and where they fit in the world. Parenting is a lifelong process.
Parenting
Parents have to tend more to children’s basic needs when they are young.
4.7.11: Career Development: Vocation and Identity
A vocation is an occupation to which an individual is particularly drawn.
Learning Objective
Define the meaning of the word “vocation” and how it impacts the choices people make as far as occupations are concerned
Key Points
- The word “vocation” is often used in a Christian religious context where a vocation is a call by God to the individual.
- A person’s vocation is a profession that helps define a person’s identity and directs a person’s interests.
- Since the origination of Vocational Guidance in 1908 by the engineer Frank Parsons, the use of the term “vocation” has evolved to include the notion of using our talents and capabilities to good effect in choosing and enjoying a career.
Key Terms
- career
-
One’s calling in life; a person’s occupation; one’s profession.
- vocation
-
An occupation for which a person is suited, trained, or qualified.
Example
- A profession to which one feels especially drawn, such as doctor, lawyer, or social worker, is an example of a vocation.
A vocation is a term for an occupation to which a person is especially drawn or for which he or she is suited, trained, or qualified. Though now often used in non-religious contexts, the meanings of the term originated in Christianity.
Use of the word “vocation” before the sixteenth century referred firstly to the “call” by God to the individual, or calling of all humankind to salvation, particularly in the Vulgate, and more specifically to the “vocation to the priesthood,” which is still the usual sense in Roman Catholicism.
The idea of vocation is central to the Christian belief that God has created each person with gifts and talents oriented toward specific purposes and a way of life. This idea of vocation is especially associated with a divine call to service to the Church and humanity through particular vocational life commitments, such as marriage to a particular person, consecration as a religious, ordination to priestly ministry in the Church, and even a holy life as a single person. In the broader sense, Christian vocation includes the use of one’s gifts in their profession, family life, church, and civic commitments for the sake of the greater common good.
Since the origination of Vocational Guidance in 1908, by the engineer Frank Parsons, the use of the term “vocation” has evolved to include the notion of using our talents and capabilities to good effect in choosing and enjoying a career. This semantic expansion has meant some diminishment of reference to the term’s religious meanings in everyday usage.
Professional Vocations
In common parlance, a vocation refers to one’s professional line of work or career, such as being a doctor.
4.7.12: The Older Years
Old age cannot be exactly defined, but it is often associated with certain activities, such as becoming a grandparent or entering retirement.
Learning Objective
Discuss some of the implications of old age, particularly in relation to Erikson’s “Eight Stages of Life” and age discrimination
Key Points
- Erik Erikson characterizes old age as a period of “Integrity vs. Despair,” during which a person focuses on reflecting back on their life. Those who are unsuccessful during this phase will feel that their life has been wasted and will experience many regrets.
- Those who feel proud of their accomplishments will feel a sense of integrity. Successfully completing this phase means looking back with few regrets and a general feeling of satisfaction.
- Old age presents some social problems, such as age discrimination. Elderly people are more likely to be victims of abuse, and negative stereotypes are also very common.
Key Terms
- self-neglect
-
Behaviors that threaten one’s own health and safety.
- abuse
-
Physical or verbal maltreatment; injury.
Example
- An example of self-neglect would be an elderly person who forgets to take his or her medication. An example of a stereotype of elderly people is that they are weak and incapable.
The boundary between middle age and old age cannot be defined exactly because it does not have the same meaning in all societies. People can be considered old because of certain changes in their activities or social roles. For example, people may be considered old when they become grandparents, or when they begin to do less or different work (retirement). Traditionally, the age of 60 was generally seen as the beginning of old age. Most developed countries have accepted the chronological age of 65 as the definition of an “elderly” or older person.
According to Erik Erikson’s “Eight Stages of Life” theory, the human personality is developed in a series of eight stages that take place from the time of birth and continue on throughout an individual’s complete life. He characterizes old age as a period of “Integrity vs. Despair,” during which a person focuses on reflecting back on their life. Those who are unsuccessful during this phase will feel that their life has been wasted and will experience many regrets. The individual will be left with feelings of bitterness and despair. Those who feel proud of their accomplishments will feel a sense of integrity. Successfully completing this phase means looking back with few regrets and a general feeling of satisfaction. These individuals will attain wisdom, even when confronting death.
Age discrimination is a prevalent social problem facing the elderly. While discrimination toward the young is primarily visible through behavioral restrictions, discrimination toward the elderly ranges from behavioral restrictions to physical abuse. Abuse of the elderly is a serious problem in the U.S. There are nearly two million cases of elder abuse and self-neglect in the U.S. every year. Abuse refers to psychological/emotional abuse, physical abuse, sexual abuse, and caregiver neglect or financial exploitation, while self-neglect refers to behaviors that threaten the person’s own health and safety.
4.7.13: Are We Prisoners of Socialization?
Who we are as people is determined by both our genes (nature) and our socialization (nurture).
Learning Objective
Discuss socialization in terms of the nature (biology) versus nurture (social) debate
Key Points
- Some experts assert that who we are is a result of nurture—the relationships and caring that surround us—while others argue that who we are is based entirely in genetics, or “nature.”
- Twin studies can provide useful insight into how much a certain trait is due to nurture vs. nature.
- Research demonstrates that who we are is affected by both nature (our genetic and hormonal makeup) and nurture (the social environment in which we are raised). Sociology is most concerned with the way that society’s influence affects our behavior patterns, made clear by the way behavior varies across class and gender.
Key Term
- socialization
-
The process of learning one’s culture and how to live within it.
Some experts assert that who we are is a result of nurture—the relationships and caring that surround us. Others argue that who we are is based entirely in genetics. According to this belief, our temperaments, interests, and talents are set before birth. From this perspective, then, who we are depends on nature.
One way that researchers attempt to prove the impact of nature is by studying twins. Some studies followed identical twins who were raised separately. The pairs shared the same genetics, but, in some cases, were socialized in different ways. Instances of this type of situation are rare, but studying the degree to which identical twins raised apart are the same and different can give researchers insight into how our temperaments, preferences, and abilities are shaped by our genetic makeup versus our social environment.
For example, in 1968, twin girls born to a mentally ill mother were put up for adoption. However, they were also separated from each other and raised in different households. The parents, and certainly the babies, did not realize they were one of five pairs of twins who were made subjects of a scientific study (Flam 2007).
In 2003, the two women, then age 35, reunited. Elyse Schein and Paula Bernstein sat together in awe, feeling like they were looking into a mirror. Not only did they look alike, but they behaved alike, using the same hand gestures and facial expressions (Spratling 2007). Studies like these point to the genetic roots of our temperament and behavior.
Though genetics and hormones play an important role in human behavior, sociology’s larger concern is the effect that society has on human behavior, the “nurture” side of the nature versus nurture debate. What race were the twins? From what social class were their parents? What about gender? Religion? All of these factors affect the lives of the twins as much as their genetic makeup and are critical to consider as we look at life through the sociological lens.
Nature or Nurture?
“Nature versus nurture” describes the debate over the influence of biological versus social influences in socialization.
Sociologists all recognize the importance of socialization for healthy individual and societal development. But how do scholars working in the three major theoretical paradigms approach this topic? Structural functionalists would say that socialization is essential to society, both because it trains members to operate successfully within it and because it perpetuates culture by transmitting it to new generations. Without socialization, a society’s culture would perish as members died off. A conflict theorist might argue that socialization reproduces inequality from generation to generation by conveying different expectations and norms to those with different social characteristics. For example, individuals are socialized differently by gender, social class, and race. As in the illustration of Chris Langan, this creates different (unequal) opportunities. An interactionist studying socialization is concerned with face-to-face exchanges and symbolic communication. For example, dressing baby boys in blue and baby girls in pink is one small way that messages are conveyed about differences in gender roles.
Socialization is important because it helps uphold societies and cultures; it is also a key part of individual development. Research demonstrates that who we are is affected by both nature (our genetic and hormonal makeup) and nurture (the social environment in which we are raised). Sociology is most concerned with the way that society’s influence affects our behavior patterns, made clear by the way behavior varies across class and gender.
4.8: Childhood Socialization
4.8.1: Child Socialization
Primary and secondary socialization are two forms of socialization that are particularly important for children.
Learning Objective
Justify the importance of socialization for children, in terms of both primary and secondary socialization
Key Points
- Socialization refers to the lifelong process of inheriting and disseminating norms, customs, and ideologies, providing an individual with the skills and habits necessary for participating within his or her own society.
- Primary socialization for a child is very important because it sets the groundwork for all future socialization.
- Primary socialization occurs when a child learns the attitudes, values, and actions appropriate to individuals as members of a particular culture.
- Secondary socialization refers to the process of learning what constitutes appropriate behavior as a member of a smaller group within the larger society.
- Sigmund Freud’s psychosexual stages describe the progression of an individual’s unconscious desires.
- Lawrence Kohlberg’s stages of moral development describe how individuals develop in and through reasoning about morals.
- Jane Loevinger developed a theory with stages of ego development.
- Margaret Mahler’s psychoanalytic developmental theory contained three phases regarding the child’s object relations.
- Jean Piaget’s theory of cognitive development describes how children reason and interact with their surroundings.
- James Marcia’s theory focuses on identity achievement and has four identity statuses.
Key Terms
- primary socialization
-
Primary socialization in sociology is the acceptance and learning of a set of norms and values established through the process of socialization.
- secondary socialization
-
Secondary socialization refers to the process of learning what constitutes appropriate behavior as a member of a smaller group within the larger society.
- socialization
-
Socialization is the process of transferring norms, values, beliefs, and behaviors to future group members.
Socialization is a term used by sociologists, social psychologists, anthropologists, political scientists, and educationalists to refer to the lifelong process of inheriting and disseminating norms, customs, and ideologies, providing an individual with the skills and habits necessary for participating within his or her own society. Socialization is thus “the means by which social and cultural continuity are attained.” There are many different forms of socialization, but two types are particularly important for children. These two types are known as primary and secondary socialization.
Primary socialization in sociology is the acceptance and learning of a set of norms and values established through the process of socialization. Primary socialization for a child is very important because it sets the groundwork for all future socialization. Primary socialization occurs when a child learns the attitudes, values, and actions appropriate to individuals as members of a particular culture. It is mainly influenced by the immediate family and friends. For example, if a child saw his or her mother expressing a discriminatory opinion about a minority group, then that child may think this behavior is acceptable and could continue to hold this opinion about minority groups.
Secondary socialization refers to the process of learning what constitutes appropriate behavior as a member of a smaller group within the larger society. Basically, it consists of the behavioral patterns reinforced by the socializing agents of society. Secondary socialization takes place outside the home; it is where children and adults learn how to act in a way that is appropriate for the situations they are in. Schools require very different behavior from the home, and children must act according to new rules. New teachers, too, have to act in a way that is different from pupils and learn the new rules from the people around them. Secondary socialization is usually associated with teenagers and adults, and involves smaller changes than those occurring in primary socialization.
Girl on a Playground
Playgrounds and other social situations contribute to secondary child socialization.
4.8.2: Theoretical Perspectives on Childhood Socialization
Theories of childhood socialization and development study the elements of the cognitive and social development that occur in childhood.
Learning Objective
Contrast the various theories of childhood development, such as Freud’s psychosexual theory, Piaget’s stages of development and ecological systems theory
Key Points
- Childhood is a unique time period of accelerated development and has been studied by many theorists.
- Sigmund Freud developed a psychosexual theory of human development that describes how sexual fixation and satisfaction moves psychological development forward.
- Jean Piaget developed a theory of cognitive development that explains how children learn differently at different stages in development.
- Urie Bronfenbrenner developed ecological systems theory that explains how human development is influenced by the context of the developing child.
Key Terms
- Ecological Systems Theory
-
Ecological systems theory, also called development in context or human ecology theory, originally specified four types of nested environmental systems (later expanded to five with the addition of the chronosystem), with bi-directional influences within and between the systems.
- Theory of Cognitive Development
-
Piaget’s theory of cognitive development posits that children learn by actively constructing knowledge through hands-on experience.
- Psychosexual Theory of Human Development
-
This theory is divided into five stages, each associated with sexual satisfaction through a particular body part.
Since the nineteenth century, childhood has been perceived as a unique phase in an individual’s life, and sociological theories reflect this. The main theories that psychologists and social scientists rely on today were developed in the twentieth century and beyond. These theories seek to understand why childhood is a unique period in one’s life and to identify the elements of cognitive and social development that occur in childhood. This section gives a brief introduction to various theoretical perspectives on childhood.
Twentieth-century Austrian psychologist Sigmund Freud was one of the first psychologists to theorize childhood and the significance of developmental stages. Freud believed that sexual drive, or libido, was the driving force of all human behavior and, accordingly, developed a psychosexual theory of human development. Children progress through five stages, each associated with sexual satisfaction through a particular body part.
Sigmund Freud
Sigmund Freud developed the psychosexual theory of human development.
One of the most widely applied theories of childhood is Jean Piaget’s theory of cognitive development. Piaget posited that children learn actively through play. He suggested that the adult’s role in helping a child learn is to provide appropriate materials for the child to interact with and construct. He encouraged adults to make childhood learning through play even more effective by asking children questions that prompt them to reflect on their behaviors. He believed it was instructive for children to see contradictions in their explanations. His approach to childhood development has been embraced by schools and by the pedagogy of preschools in the United States.
Piaget’s Four Stages of Development
Piaget outlined four stages in one’s development to adulthood:
- The first of Piaget’s stages of development is the sensorimotor stage, which lasts from birth until about age two. During this stage, the child learns about himself and his environment through motor and reflex actions. The child learns that he is separate from his environment and that aspects of his environment, such as his parents or a toy, continue to exist even though they may be outside of his sensory field. This observation is called object permanence.
- The sensorimotor stage is followed by the preoperational stage, which begins about the time that the child begins to talk and lasts until about age seven. The developments associated with the preoperational phase all extend from the child learning how to deploy his new linguistic capabilities. The child begins to use symbols to represent objects. Children absorb information and fit it into preexisting categories in their minds.
- Next, children progress to the concrete operational phase, which lasts from about first grade to early adolescence. During this stage, children more easily accommodate ideas that do not fit their preexisting worldview. The child begins to think abstractly and make rational decisions based on observable or concrete phenomena.
- Finally, children enter the formal operational stage, which begins in adolescence and carries them through adulthood. This person no longer requires concrete objects to make rational judgements and is capable of hypothetical and deductive reasoning.
Ecological Systems Theory
In 1979, psychologist Urie Bronfenbrenner published The Ecology of Human Development, setting forth his theory known as ecological systems theory. Also called development in context or human ecology theory, ecological systems theory specifies five different types of nested environmental systems: the microsystem, the mesosystem, the exosystem, the macrosystem, and the chronosystem. Each of these systems exerts influence on an individual, particularly children, who are being socialized most intensively.
- The microsystem refers to the institutions and groups that most immediately and directly impact the child’s development, including the child’s family, school, religious institution, neighborhood, and peer group.
- The mesosystem recognizes that no microsystem can be entirely discrete and refers to the relationship between microsystems. For example, a child who has been completely abandoned by his family might find it difficult to bond with teachers.
- The exosystem describes the link between a social setting in which the individual does not have an active role and the individual’s immediate context. For example, a child’s experience at home may be impacted by a mother’s experience at work.
- The macrosystem refers to the culture in which individuals live. A child, his school, and his parents are all part of a cultural context whose constituents are united by a sense of common identity, heritage, and values. Microsystems, and therefore mesosystems and exosystems, are impossible to understand when divorced from their macrosystemic context.
- The chronosystem refers to the patterning of environmental events and transitions over one’s life course, as well as broader sociohistorical developments. For example, the impact of divorces on children has varied over history. When divorce was more culturally stigmatized, it had a different effect on children than today, when many children have divorced parents.
4.8.3: Identity Formation
Identity formation is the development of an individual’s distinct personality by which he or she is recognized or known.
Learning Objective
Discuss the formation of a person’s identity, as well as the ideas of self-concept and self-consciousness
Key Points
- Cultural identity is the feeling of identity with a group or culture, or of an individual as far as he or she is influenced by his or her belonging to a group or culture.
- An ethnic identity is an identification with a certain ethnicity, usually on the basis of a presumed common genealogy or ancestry.
- National identity is an ethical and philosophical concept whereby all humans are divided into groups called nations.
- A religious identity is the set of beliefs and practices generally held by an individual, involving adherence to codified beliefs and rituals and study of ancestral or cultural traditions.
- Self-concept is the sum of a being’s knowledge and understanding of his or her self.
Key Terms
- religious identity
-
The set of beliefs and practices generally held by an individual, involving adherence to codified beliefs and rituals and study of ancestral or cultural traditions, writings, history, and mythology, as well as faith and mystic experience.
- cultural identity
-
One’s feeling of identity or affiliation with a group or culture.
- national identity
-
An ethical and philosophical concept whereby all humans are divided into groups called nations.
Example
- An example of national identity is the way in which Americans are united on the Fourth of July. In celebrating national independence, one feels a connection to all other Americans. Indeed, the holiday would make little sense if one did not possess a sense of national identity. An example of religious identity would be if someone identifies as belonging to a particular religious faith.
Identity formation is the development of an individual’s distinct personality, regarded as a persisting entity at a particular stage of life, by which a person is recognized or known. This process defines individuals to others and to themselves. Pieces of an individual’s actual identity include a sense of continuity, a sense of uniqueness from others, and a sense of affiliation. Identity formation also shapes personal identity, the sense by which the individual thinks of him or herself as a discrete and separate entity. This may occur through individuation, whereby the undifferentiated individual tends to become unique, or through stages in which differentiated facets of a person’s life tend toward becoming a more indivisible whole.
Individuals gain a social identity and group identity from their affiliations. Self-concept is the sum of a being’s knowledge and understanding of him or herself. Self-concept is different from self-consciousness, which is an awareness of one’s self. Components of self-concept include physical, psychological, and social attributes, which can be influenced by the individual’s attitudes, habits, beliefs, and ideas. Cultural identity is one’s feeling of identity or affiliation with a group or culture.
Similarly, an ethnic identity is the identification with a certain ethnicity, usually on the basis of a presumed common genealogy or ancestry. Further, national identity is an ethical and philosophical concept whereby all humans are divided into groups called nations. Members of a nation share a common identity and usually a common origin in their sense of ancestry, parentage, or descent. Lastly, a religious identity is the set of beliefs and practices generally held by an individual, involving adherence to codified beliefs and rituals and study of ancestral or cultural traditions, writings, history, and mythology, as well as faith and mystic experience.
National Identity
Fourth of July celebrations, during which Americans dress in red, white, and blue, are manifestations of national identity. The Fourth of July is only meaningful as a celebration of independence for individuals who share a sense of national identity as Americans.
Chapter 3: Culture
3.1: Culture and Society
3.1.1: Culture and Biology
Culture relates to both nature (our biology and genetics) and nurture (the environment and surroundings that also shape our identities).
Learning Objective
Examine the ways culture and biology interact to form societies, norms, rituals and other representations of culture
Key Points
- “Culture” encompasses objects and symbols, the meaning given to those objects and symbols, and the norms, values, and beliefs that pervade social life.
- Values reflect an individual’s or society’s sense of right and wrong or what “ought” to be.
- Humans also have biological drives—hunger, thirst, need for sleep—whose unfulfillment can result in death.
- Because of our biology and genetics, we have a particular form and certain abilities. These set essential limits on the variety of activities through which humans can express culture, but there is still enormous diversity in this expression.
- Culture refers to the way we understand ourselves as individuals and as members of society, including stories, religion, media, rituals, and even language itself.
- Social Darwinism was the belief that the closer a cultural group was to the normative Western European standards of behavior and appearance, the more evolved that group was.
- Culture is the non-biological or social aspects of human life.
Key Terms
- culture
-
The beliefs, values, behavior, and material objects that constitute a people’s way of life.
- Social Darwinism
-
A theory that the laws of evolution by natural selection also apply to social structures.
Example
- According to early beliefs about biology and culture, biological difference led to necessary cultural differences among racial groups. Today, sociologists and anthropologists recognize that the relationship between culture and biology is far more complex. For instance, prior to about 4000 BCE, most humans did not produce a protein that allowed them to digest lactose after being weaned. After Europeans began to drink the milk of domesticated animals, the genetic adaptation favoring lactose consumption spread rapidly throughout the continent. This is an example in which cultural shifts (the domestication of animals) can lead to changes in behavior that affect biology (genetic adaptation to process lactose).
Culture and Biology
Human beings are biological creatures. We are composed of blood and bones and flesh. At the most basic level, our genes express themselves in physical characteristics, affecting bodily aspects such as skin tone and eye color. Yet, human beings are much more than our biology, and this is evident particularly in the way humans generate, and live within, complex cultures.
Defining Culture
Culture is a term used by social scientists, like anthropologists and sociologists, to encompass all the facets of human experience that extend beyond our physical existence. Culture refers to the way we understand ourselves both as individuals and as members of society, and includes stories, religion, media, rituals, and even language itself.
It is critical to understand that the term culture does not describe a singular, fixed entity. Instead, it is a useful heuristic, or way of thinking, that can be very productive in understanding behavior. As a student of the social sciences, you should think of the word culture as a conceptual tool rather than as a uniform, static definition. Culture necessarily changes through, and is changed by, interactions with individuals, media, and technology, to name just a few.
The History of Culture as a Concept
Culture is primarily an anthropological term. The field of anthropology emerged around the same time as Social Darwinism, in the late 19th and early 20th century. Social Darwinism was the belief that the closer a cultural group was to the normative, Western, European standards of behavior and appearance, the more evolved that group was. As a theory of the world, it was essentially a racist concept that persists in certain forms up to this day. If you have ever heard someone reference people of African descent as being from, or close to, the jungle, or the wilderness, you’ve encountered a type of coded language that is a modern incarnation of Social Darwinist thought.
During the late 19th and early 20th centuries, the positivist school also emerged in sociological thought. One of the key figures in this school, Cesare Lombroso, studied the physical characteristics of prisoners, because he believed that he could find a biological basis for crime. Lombroso coined the term atavism to suggest that some individuals were throwbacks to a more bestial point in evolutionary history. Lombroso used this concept to claim that certain individuals were more weak-willed, and more prone to criminal activity, than their supposedly more evolved counterparts.
In accordance with the hegemonic beliefs of the time, anthropologists first theorized culture as something that evolves in the same way biological organisms evolve. Just like biological evolution, cultural evolution was thought to be an adaptive system that produced unique results depending on location and historical moment. However, unlike biological evolution, culture can be intentionally taught and thus spread from one group of people to another.
Initially, anthropologists believed that culture was a product of biological evolution, and that cultural evolution depended exclusively on physical conditions. Today’s anthropologists no longer believe it is this simple. Neither culture nor biology is solely responsible for the other. They interact in very complex ways, which biological anthropologists will be studying for years to come.
Guildford Cathedral relief (UK)
People began domesticating cattle many years before they developed the genes for lactose tolerance.
3.1.2: Culture and Society
Culture is what differentiates one group or society from the next; different societies have different cultures.
Learning Objective
Differentiate between the various meanings of culture within society
Key Points
- Different societies have different cultures; a culture represents the beliefs and practices of a group, while society represents the people who share those beliefs and practices.
- Material culture refers to the objects or belongings of a group of people, such as automobiles, stores, and the physical structures where people worship. Nonmaterial culture, in contrast, consists of the ideas, attitudes, and beliefs of a society.
- In 18th and 19th century Europe, the term “culture” was equated with civilization and considered a unique aspect of Western society. Remnants of that colonial definition of culture can be seen today in the idea of “high culture”.
- During the Romantic Era, culture became equated with nationalism and gave rise to the idea of multiple national cultures.
- Today, social scientists understand culture as a society’s norms, values, and beliefs; as well as its objects and symbols, and the meaning given to those objects and symbols.
Key Terms
- popular culture
-
The prevailing vernacular culture in any given society, including art, cooking, clothing, entertainment, films, mass media, music, sports, and style
- civilization
-
An organized culture encompassing many communities, often on the scale of a nation or a people; a stage or system of social, political or technical development.
- high culture
-
The artistic entertainment and material artifacts associated with a society’s aristocracy or most learned members, usually requiring significant education to be appreciated or highly skilled labor to be produced.
- nationalism
-
The idea of supporting one’s country and culture; patriotism.
Example
- Classical music was once considered a marker of culture itself. Societies that had not developed classical music (or other such practices) were considered uncultured and uncivilized. Popular and indigenous music were not considered part of culture. Today, however, all music is considered part of culture. Different cultures may embrace different types of music, from pop or rock to world music or classical music.
Culture encompasses human elements beyond biology: for example, our norms and values, the stories we tell, learned or acquired behaviors, religious beliefs, art and fashion, and so on. Culture is what differentiates one group or society from the next.
Different societies have different cultures; however it is important not to confuse the idea of culture with society. A culture represents the beliefs and practices of a group, while society represents the people who share those beliefs and practices. Neither society nor culture could exist without the other.
Defining Culture
Almost every human behavior, from shopping to marriage to expressions of feelings, is learned. Behavior based on learned customs is not necessarily a bad thing – being familiar with unwritten rules helps people feel secure and confident that their behaviors will not be challenged or disrupted. However, even the simplest actions – such as commuting to work, ordering food from a restaurant, and greeting someone on the street – evidence a great deal of cultural propriety.
Material culture refers to the objects or belongings of a group of people (such as automobiles, stores, and the physical structures where people worship). Nonmaterial culture, in contrast, consists of the ideas, attitudes, and beliefs of a society. Material and nonmaterial aspects of culture are linked, and physical objects often symbolize cultural ideas. A metro pass is a material object, but it represents a form of nonmaterial culture (namely capitalism, and the acceptance of paying for transportation). Clothing, hairstyles, and jewelry are part of material culture, but the appropriateness of wearing certain clothing for specific events reflects nonmaterial culture. A school building belongs to material culture, but the teaching methods and educational standards are part of education’s nonmaterial culture.
These material and nonmaterial aspects of culture can vary subtly from region to region. As people travel farther afield, moving from different regions to entirely different parts of the world, certain material and nonmaterial aspects of culture become dramatically unfamiliar. As we interact with cultures other than our own, we become more aware of our own culture – which might otherwise be invisible to us – and to the differences and commonalities between our culture and others.
The History of “Culture”
Some people think of culture in the singular, in the way that it was thought of in Europe during the 18th and early 19th centuries: as something achieved through evolution and progress. This concept of culture reflected inequalities within European societies and their colonies around the world; in short, it equates culture with civilization and contrasts both with nature or non-civilization. According to this understanding of culture, some countries are more “civilized” than others, and some people are therefore more “cultured” than others.
When people talk about culture in the sense of civilization or refinement, they are really talking about “high culture,” which is different from the sociological concept of culture. High culture refers to elite goods and activities, such as haute cuisine, high fashion or couture, museum-caliber art, and classical music. In common parlance, people may refer to others as being “cultured” if they know about and take part in these activities. Someone who uses culture in this sense might argue that classical music is more refined than music by working-class people, such as jazz or the indigenous music traditions of aboriginal peoples. Popular (or “pop”) culture, by contrast, is more mainstream and influenced by mass media and the common opinion. Popular culture tends to change as tastes and opinions change over time, whereas high culture generally stays the same throughout the years. For example, Mozart is considered high culture, whereas Britney Spears is considered pop culture; Mozart is likely to still be popular in 100 years, but Britney Spears will likely be forgotten by all but a few.
Aboriginal culture
Early colonial definitions of culture equated culture and civilization and characterized aboriginal people as uncivilized and uncultured.
This definition of culture only recognizes a single standard of refinement to which all groups are held accountable. Thus, people who differ from those who believe themselves to be “cultured” in this sense are not usually understood as having a different culture; they are understood as being uncultured.
Although we still see remnants of this idea of high culture today, it has largely fallen out of practice. Its decline began during the Romantic Era, when scholars in Germany – especially those concerned with nationalism – developed the more inclusive notion of culture as a distinct worldview. Although more inclusive, this approach to culture still allowed for distinctions between so-called “civilized” and “primitive” cultures. By the late 19th century, anthropologists changed the concept of culture to include a wider variety of societies, ultimately resulting in the concept of culture adopted by social scientists today: objects and symbols, the meaning given to those objects and symbols, and the norms, values, and beliefs that pervade social life.
This new perspective has also removed the evaluative element of the concept of culture; it distinguishes among different cultures, but does not rank them. For instance, the high culture of elites is now contrasted with popular or pop culture. In this sense, high culture no longer refers to the idea of being “cultured,” as all people have culture. High culture simply refers to the objects, symbols, norms, values, and beliefs of a particular group of people; popular culture does the same.
High culture
Ballet is traditionally considered a form of “high culture”.
3.1.3: Cultural Universals
A cultural universal is an element, pattern, trait, or institution that is common to all human cultures worldwide.
Learning Objective
Discuss cultural universals in terms of the various elements of culture, such as norms and beliefs
Key Points
- Cultural universals are elements, patterns, traits, or institutions that are common to all human cultures worldwide.
- There is a tension in cultural anthropology and cultural sociology between the claim that culture is a universal and that it is also particular. The idea of cultural universals runs contrary in some ways to cultural relativism which was, in part, a response to Western ethnocentrism.
- Ethnocentrism may take obvious forms, such as the belief that one’s own culture is the most beautiful and true. Franz Boas understood “culture” to include not only certain tastes in food, art, and music, or beliefs about religion; he assumed a much broader notion of culture.
- Among the cultural universals listed by Donald Brown (1991) are abstract speech, figurative speech and metaphors, antonyms and synonyms, and units of time.
- Among the cultural universals listed by Brown, some were investigated by Franz Boas. For example, Boas saw language as a means of categorizing experiences. Thus, although people may perceive visible radiation similarly, people who speak different languages slice up the continuum in different ways.
- Since Franz Boas, two debates have dominated cultural anthropology.
Key Terms
- universal
-
Common to all society; worldwide.
- particular
-
A specific case; an individual thing as opposed to a whole class.
- culture
-
The beliefs, values, behavior, and material objects that constitute a people’s way of life.
Example
- The incest taboo is often cited as an example of a cultural universal. Though there are a few counterexamples, nearly every society since the beginning of recorded history has had a taboo against sibling or parent/child marriages.
The sociology of culture concerns culture—usually understood as the ensemble of symbolic codes used by a society—as it is manifested in society. The elements of culture include (1) symbols (anything that carries particular meaning recognized by people who share the same culture); (2) language (a system of symbols that allows people to communicate with one another); (3) values (culturally-defined standards that serve as broad guidelines for social living); (4) beliefs (specific statements that people hold to be true); and (5) norms (rules and expectations by which a society guides the behavior of its members). While these elements of culture may be seen in various contexts over time and across geography, a cultural universal is an element, pattern, trait, or institution that is common to all human cultures worldwide. Taken together, the whole body of cultural universals is known as the human condition. Among the cultural universals listed by Donald Brown (1991) are abstract speech, figurative speech and metaphors, antonyms and synonyms, and units of time.
First-Cousin Marriage Laws in the U.S.
In states marked dark blue, first-cousin marriage is legal. Light blue signifies that it is legal but has restrictions or exceptions. Pink signifies that it is banned with exceptions; red signifies that it is banned via statute, and dark red signifies that it is a criminal offense.
The concept of a cultural universal has long been discussed in the social sciences. Cultural universals are elements, patterns, traits, or institutions that are common to all human cultures worldwide. There is a tension in cultural anthropology and cultural sociology between the claim that culture is a universal (the fact that all human societies have culture), and that it is also particular (culture takes a tremendous variety of forms around the world). The idea of cultural universals—that specific aspects of culture are common to all human cultures—runs contrary to cultural relativism. Cultural relativism was, in part, a response to Western ethnocentrism. Ethnocentrism may take obvious forms, in which one consciously believes that one’s own people’s arts are the most beautiful, values the most virtuous, and beliefs the most truthful. Franz Boas argued that one’s culture may mediate and thus limit one’s perceptions in less obvious ways. He understood “culture” to include not only certain tastes in food, art, and music, or beliefs about religion; he assumed a much broader notion of culture.
Some of the cultural universals listed by Donald Brown were investigated by Franz Boas. For example, Boas called attention to the idea that language is a means of categorizing experiences, hypothesizing that the existence of different languages suggests that people categorize, and thus experience, the world differently. Therefore, although people may perceive visible radiation the same way, in terms of a continuum of color, people who speak different languages slice up this continuum into discrete colors in different ways.
3.1.4: Culture Shock
Culture shock is the personal disorientation a person may feel when experiencing an unfamiliar way of life in a new country.
Learning Objective
Discuss culture shock in terms of its four phases – honeymoon, negotiation, adjustment and mastery
Key Points
- Culture shock is the personal disorientation a person may feel when experiencing an unfamiliar way of life due to immigration or a visit to a new country.
- Culture shock can be described as consisting of at least one of four distinct phases: honeymoon, negotiation, adjustment, and mastery.
- During the honeymoon phase, the differences between the old and new culture are seen in a romantic light.
- After some time (usually around three months, depending on the individual), differences between the old and new culture become apparent and may create anxiety. This is the mark of the negotiation phase.
- In the adjustment phase, one grows accustomed to the new culture and develops routines; one knows what to expect in most situations, and the host country no longer feels all that new.
- Lastly, in the mastery stage, assignees are able to participate fully and comfortably in the host culture.
Key Term
- biculturalism
-
The state or quality of being bicultural.
Culture shock is the personal disorientation a person may feel when experiencing an unfamiliar way of life due to immigration or a visit to a new country, or to a move between social environments. One of the most common causes of culture shock involves individuals in a foreign country. There is no true way to entirely prevent culture shock, as individuals in any society are personally affected by cultural contrasts differently.
Culture shock can be described as consisting of at least one of four distinct phases: honeymoon, negotiation, adjustment, and mastery. During the honeymoon phase, the differences between the old and new culture are seen in a romantic light. During the first few weeks, most people are fascinated by the new culture. They associate with nationals who speak their language, and who are polite to the foreigners. This period is full of observations and new discoveries. Like most honeymoon periods, this stage eventually ends.
After some time (usually around three months, depending on the individual), differences between the old and new culture become apparent and may create anxiety. This is the mark of the negotiation phase. Excitement may eventually give way to unpleasant feelings of frustration and anger as one continues to experience unfavorable events that may be perceived as strange and offensive to one’s cultural attitude. Still, the most important change in the period is communication. People adjusting to a new culture often feel lonely and homesick because they are not yet used to the new environment and meet people with whom they are not familiar every day.
Again, after some time, one grows accustomed to the new culture and develops routines, marking the adjustment phase. One knows what to expect in most situations and the host country no longer feels all that new. One becomes concerned with basic living again and things become more normal. One starts to develop problem-solving skills for dealing with the culture and begins to accept the culture’s ways with a positive attitude. The culture begins to make sense and negative reactions and responses to the culture are reduced.
In the mastery stage, assignees are able to participate fully and comfortably in the host culture. Mastery does not mean total conversion. People often keep many traits from their earlier culture, such as accents and languages. It is often referred to as the biculturalism stage.
Culture Shock
Enthusiastic welcome offered to the first Indian student to arrive in Dresden, East Germany (1951).
3.1.5: Ethnocentrism and Cultural Relativism
Ethnocentrism, in contrast to cultural relativism, is the tendency to look at the world primarily from the perspective of one’s own culture.
Learning Objective
Examine the concepts of ethnocentrism and cultural relativism in relation to your own and other cultures in society
Key Points
- Ethnocentrism often entails the belief that one’s own race or ethnic group is the most important or that some or all aspects of its culture are superior to those of other groups.
- Within this ideology, individuals will judge other groups in relation to their own particular ethnic group or culture, especially with concern to language, behavior, customs, and religion.
- Cultural relativism is the belief that the concepts and values of a culture cannot be fully translated into, or fully understood in, other languages; that a specific cultural artifact (e.g., a ritual) has to be understood in terms of the larger symbolic system of which it is a part.
- Cultural relativism is the principle that an individual person’s beliefs and activities should be understood by others in terms of that individual’s own culture.
Key Terms
- ethnocentrism
-
The tendency to look at the world primarily from the perspective of one’s own culture.
- cultural relativism
-
Cultural relativism is a principle that was established as axiomatic in anthropological research by Franz Boas in the first few decades of the twentieth century, and later popularized by his students. Boas first articulated the idea in 1887: “…civilization is not something absolute, but … is relative, and … our ideas and conceptions are true only so far as our civilization goes.”
Example
- Cultural relativism can be difficult to maintain when we’re confronted with cultures whose practices or beliefs conflict with our own. For example, in France, head scarves worn by many Islamic women have been banned. To the French, banning head scarves is important because it helps maintain a secular society and gender equality. But imposing these values on people with a different culture is ethnocentric and, therefore, has become controversial.
Ethnocentrism, a term coined by William Graham Sumner, is the tendency to look at the world primarily from the perspective of your own ethnic culture, along with the belief that this is in fact the “right” way to look at the world. This leads to making incorrect assumptions about others’ behavior based on your own norms, values, and beliefs. For instance, reluctance or aversion to trying another culture’s cuisine is ethnocentric. Social scientists strive to treat cultural differences as neither inferior nor superior. That way, they can understand their research topics within the appropriate cultural context and examine their own biases and assumptions at the same time.
This approach is known as “cultural relativism.” Cultural relativism is the principle that an individual person’s beliefs and activities should be understood by others in terms of that individual’s own culture. A key component of cultural relativism is the concept that nobody, not even researchers, comes from a neutral position. The way to deal with our own assumptions is not to pretend that they don’t exist but rather to acknowledge them, and then use the awareness that we are not neutral to inform our conclusions.
An example of cultural relativism might include slang words from specific languages (and even from particular dialects within a language). For instance, the word “tranquilo” in Spanish translates directly to “calm” in English. However, it can be used in many more ways than just as an adjective (e.g., the seas are calm). Tranquilo can be a command or suggestion encouraging another to calm down. It can also be used to ease tensions in an argument (e.g., everyone relax) or to indicate a degree of self-composure (e.g., I’m calm). There is not a clear English translation of the word, and in order to fully comprehend its many possible uses, a cultural relativist would argue that it would be necessary to fully immerse oneself in cultures where the word is used.
Cultural context
Depending on your cultural background, this may or may not look delicious.
3.1.6: Material Culture
In the social sciences, material culture is a term that refers to the relationship between artifacts and social relations.
Learning Objective
Give examples of material culture and how it can help sociologists understand a particular society
Key Points
- Studying a culture’s relationship to materiality is a lens through which social and cultural attitudes can be discussed. People’s relationship to and perception of objects are socially and culturally dependent.
- A view of culture as a symbolic system with adaptive functions, varying from place to place, led anthropologists to conceive of different cultures as having distinct patterns of enduring conventional sets of meaning.
- Anthropologists distinguish between material culture and symbolic culture, not only because each reflects different kinds of human activity, but also because they constitute different kinds of data and require different methodologies to study.
- This view of culture, which came to dominate anthropology between World War I and World War II, implied that each culture was bounded and had to be understood as a whole, on its own terms.
- The result is a belief in cultural relativism, which suggests that there are no ‘better’ or ‘worse’ cultures, just different cultures.
Key Terms
- Symbolic culture
-
Symbolic culture is a concept used by archaeologists, social anthropologists and sociologists to designate the cultural realm constructed and inhabited uniquely by Homo sapiens.
- material culture
-
In the social sciences, material culture is a term, developed in the late 19th and early 20th century, that refers to the relationship between artifacts and social relations.
Example
- Examples of material culture include fashion, clothes, magazines, newspapers, records, CDs, computer games, books, cars, houses and architecture—anything that people make or build.
In the social sciences, material culture refers to the relationship between artifacts and social relations. Material culture consists of the physical objects that humans make. These objects inevitably reflect the historical, geographic, and social conditions of their origin. For instance, the clothes that you are wearing might tell researchers of the future about the fashions of today.
Clothes as Material Culture
Fashion is part of material culture.
People’s relationship to and perception of objects are socially and culturally dependent. Accordingly, social and cultural attitudes can be discussed through the lens of a culture’s relationship to materiality.
Material culture is also a term used by historians, sometimes termed “material history,” which refers to the study of ancient objects and artifacts in order to understand how a particular culture was organized and functioned over time.
This view of culture as a symbolic system with adaptive functions, varying from place to place, led anthropologists to view different cultures as having distinct patterns of enduring conventional sets of meaning. Anthropologists thus distinguish between material culture and symbolic culture, not only because each reflects different kinds of human activity, but also because they constitute different kinds of data and require different methodologies to study.
This view of culture, which came to dominate anthropology between World War I and World War II, implied that each culture was bounded and had to be understood as a whole, on its own terms. The result is a belief in cultural relativism, which suggests that there are no ‘better’ or ‘worse’ cultures, just different cultures.
Periodicals as Material Culture
Media, such as magazines, are part of material culture.
Computers as Material Culture
Computers are an increasingly common part of everyday life for most people. They constitute an increasingly significant part of our material culture.
3.1.7: Nonmaterial Culture
Non-material culture includes the behaviors, ideas, norms, values, and beliefs that contribute to a society’s overall culture.
Learning Objective
Analyze the different ways norms, values and beliefs interact to form non-material culture
Key Points
- In contrast to material culture, non-material culture does not include physical objects or artifacts.
- It includes things that have no existence in the physical world but exist entirely in the symbolic realm.
- Examples are concepts such as good and evil, mythical inventions such as gods and underworlds, and social constructs such as promises and football games.
- The concept of symbolic culture draws from semiotics and emphasizes the way in which distinctively human culture is mediated through signs and concepts.
- The symbolic aspect of distinctively human culture has been emphasized in anthropology by Emile Durkheim, Claude Lévi-Strauss, Clifford Geertz, and many others.
- Semiotics emphasises the way in which distinctively human culture is mediated through signs and concepts.
Key Term
- social construct
-
Social constructs are generally understood to be the by-products of countless human choices rather than laws resulting from divine will or nature.
Example
- Material and non-material culture are two parts of culture. They may overlap. For example, patriotism is a type of value, and is therefore part of non-material culture. But patriotism can be embodied in elements of material culture, such as flags or bumper stickers.
Culture as a general concept consists of both material and non-material culture. Material culture is a term, developed in the late 19th and early 20th centuries, that refers to the relationship between artifacts and social relations. In contrast, non-material culture does not include physical objects or artifacts. Examples include any ideas, beliefs, values, or norms that shape a society.
When sociologists talk about norms, they are talking about what’s considered normal, appropriate, or ordinary for a particular group of people. Social norms are group-held beliefs about how members should behave in a given context. Sociologists describe norms as laws that govern society’s behaviors. Values are related to the norms of a culture, but they are more global and abstract than norms. Norms are rules for behavior in specific situations, while values identify what should be judged as good or evil. Flying the national flag on a holiday is a norm, but it exhibits patriotism, which is a value. Wearing dark clothing and appearing solemn are normative behaviors at a funeral. In certain cultures they reflect the values of respect and support of friends and family. Different cultures honor different values. Finally, beliefs are the way people think the universe operates. Beliefs can be religious or secular, and they can refer to any aspect of life. For instance, many people in the U.S. believe that hard work is the key to success.
Members take part in a culture even if each member’s personal values do not entirely agree with some of the normative values sanctioned in the culture. This reflects an individual’s ability to synthesize and extract aspects valuable to them from the multiple subcultures they belong to.
Norms, values, and beliefs are all deeply interconnected. Together, they provide a way to understand culture.
3.2: The Symbolic Nature of Culture
3.2.1: The Symbolic Nature of Culture
The symbolic systems that people use to capture and communicate their experiences form the basis of shared cultures.
Learning Objective
Relate the idea that culture is symbolically coded to arguments about the dynamism of cultures
Key Points
- A symbol is any object, typically material, which is meant to represent another (usually abstract), even if there is no meaningful relationship.
- Culture is based on a shared set of symbols and meanings. Symbolic culture enables human communication and must be taught.
- Symbolic culture is more malleable and adaptable than biological evolution.
- The belief that culture is symbolically coded and can be taught from one person to another means that cultures, although bounded, can change.
- According to sociologists, symbols make up one of the five key elements of culture; the other key elements are language, values, beliefs, and norms.
Key Terms
- Max Weber
-
(1864–1920) A German sociologist, philosopher, and political economist who profoundly influenced social theory, social research, and the discipline of sociology itself.
- symbol
-
Any object, typically material, which is meant to represent another (usually abstract), even if there is no meaningful relationship.
Example
- Although language is perhaps the most obvious system of symbols we use to communicate, many things we do carry symbolic meaning. Think, for example, of the way you dress and what it means to other people. The way you dress could symbolically communicate to others that you care about academics or that you are a fan of your school’s football team, or it might communicate that you have adopted an anarchist philosophy or are a fan of punk music. In certain urban environments, the symbolic meaning of people’s clothes can signal gang affiliation. Other gang members use these symbolic sartorial signals to recognize enemies and allies.
A symbol is any object, typically material, which is meant to represent another (usually abstract) object, even if there is no meaningful relationship. Anthropologists have argued that, through the course of their evolution, human beings evolved a universal human capacity to classify experiences, and encode and communicate them symbolically, such as with written language. Since these symbolic systems were learned and taught, they began to develop independently of biological evolution (in other words, one human being can learn a belief, value, or way of doing something from another, even if they are not biologically related). That this capacity for symbolic thinking and social learning is a product of human evolution confounds older arguments about nature versus nurture.
The Polish Alphabets
Cultures are shared systems of symbols and meanings. Alphabets are one example of a symbolic element of culture.
This view of culture argues that people living apart from one another develop unique cultures. Elements of different cultures, however, can easily spread from one group of people to another. The belief that culture is symbolically coded and can, therefore, be taught from one person to another, means that cultures, although bounded, can change. Culture is dynamic and can be taught and learned, making it a potentially rapid form of adaptation to changes in physical conditions. Anthropologists view culture as not only a product of biological evolution, but as a supplement to it; culture can be seen as the main means of human adaptation to the natural world.
This view of culture as a symbolic system with adaptive functions, which varies from place to place, led anthropologists to conceive of different cultures as defined by distinct patterns (or structures) of enduring (although arbitrary) conventional sets of meaning. These meanings took concrete form in a variety of artifacts such as myths and rituals, tools, the design of housing, and the planning of villages. Anthropologists distinguish between material culture and symbolic culture, not only because each reflects different kinds of human activity, but also because they constitute different kinds of data that require different methodologies to study.
The sociology of culture concerns culture as it is manifested in society: the ways of thinking, the ways of acting, and the material objects that together shape a people’s way of life. According to Max Weber, symbols are important aspects of culture: people use symbols to express their spirituality and the spiritual side of real events, and ideal interests are derived from symbols. According to sociologists, symbols make up one of the five key elements of culture, the others being language, values, beliefs, and norms.
3.2.2: The Origins of Language
The origin of language is a widely discussed and controversial topic due to very limited empirical evidence.
Learning Objective
Compare and contrast continuity-based theories and discontinuity-based theories about the origin of language
Key Points
- There is no consensus on the ultimate origin or age of human language.
- Continuity-based theories stress that language is so complex that it must have evolved from earlier pre-linguistic systems among pre-humans.
- Discontinuity-based theories stress that language is a unique human trait that appeared fairly suddenly in the transition from pre-hominids to early man.
Key Terms
- symbolic
-
Referring to something with an implicit meaning.
- language
-
A form of communication using words either spoken or gestured with the hands and structured with grammar, often with a writing system.
- prehistory
-
The history of human culture prior to written records.
Example
- Examples of languages that are specific systems of communication include English, French, and Mandarin.
The origin of language in the human species is a widely discussed topic. There is no consensus on ultimate origin or age. Empirical evidence is limited, and many scholars continue to regard the whole topic as unsuitable for serious study.
Language in daily life
The origin of language in the human species is a widely discussed topic.
Theories about the origin of language can be divided according to their basic assumptions. Some theories are based on the idea that language is so complex that one cannot imagine it simply appearing from nothing in its final form, but that it must have evolved from earlier pre-linguistic systems among our pre-human ancestors. These theories can be called continuity-based theories.
The opposite viewpoint is that language is such a unique human trait that it cannot be compared to anything found among non-humans and that it must therefore have appeared fairly suddenly in the transition from pre-hominids to early man. These theories can be defined as discontinuity-based.
Similarly, some theories see language mostly as an innate faculty that is largely genetically encoded, while others see it as a system that is largely cultural—that is, learned through social interaction. Currently the only prominent proponent of a discontinuity theory of human language origins is Noam Chomsky.
Continuity-based theories are currently held by a majority of scholars, but they vary in how they envision this development. Those who see language as being mostly innate, such as Steven Pinker, hold the precedents to be animal cognition, whereas those who see language as a socially learned tool of communication, such as Michael Tomasello, see it as having developed from animal communication, either primate gestural or vocal communication. Other continuity-based models see language as having developed from music.
Because the emergence of language is located in the early prehistory of man, the relevant developments have left no direct historical traces, and no comparable processes can be observed today. Theories that stress continuity often look at animals to see if, for example, primates display any traits that can be seen as analogous to what pre-human language must have been like. Alternatively, early human fossils can be inspected for traces of physical adaptation to language use or for traces of pre-linguistic forms of symbolic behaviour.
3.2.3: Language
Language may refer either to the human capacity for acquiring and using complex systems of communication, or to a specific instance of such.
Learning Objective
Compare the different ways in which language can be studied
Key Points
- The word “language” has at least two basic meanings: language as a general concept, and “a language” (a specific linguistic system, e.g. “French”), a distinction first made by Ferdinand de Saussure.
- Languages, understood as the particular set of speech norms of a particular community, are also a part of the larger culture of the community that speak them.
- Humans use language as a way of signalling identity with one cultural group and difference from others.
- The organic definition of language sees language primarily as the mental faculty that allows humans to undertake linguistic behavior–to learn languages and produce and understand utterances.
- The structuralist view of language sees language as a formal system of signs governed by grammatical rules of combination to communicate meaning.
- The functional theory of language sees language as a system of communication that enables humans to cooperate.
Key Terms
- semiotics
-
The study of signs and symbols, especially as means of language or communication.
- linguistics
-
The scientific study of language.
Examples
- Examples of languages that are specific systems of communication include English, French, and Mandarin.
- An example of how language can define a group of people and act as the basis for identity is Mexico’s 1900s census policy of counting indigenous groups as groups of people who speak one of a set of official indigenous languages.
Language may refer either to the specifically human capacity for acquiring and using complex systems of communication, or to a specific instance of such a system of complex communication. The scientific study of language in any of its senses is called linguistics.
The word language has at least two basic meanings: language as a general concept, and a specific linguistic system (e.g. French). Ferdinand de Saussure first explicitly formulated the distinction, using the French word langage for language as a concept, and langue as the specific instance of language.
One definition sees language primarily as the mental faculty that allows humans to undertake linguistic behaviour–to learn languages and produce and understand utterances. These kinds of definitions are often applied by studies of language within a cognitive science framework and in neurolinguistics.
Another definition sees language as a formal system of signs governed by grammatical rules of combination to communicate meaning. This definition stresses the fact that human languages can be described as closed structural systems consisting of rules that relate particular signs to particular meanings.
Yet another definition sees language as a system of communication that enables humans to cooperate. This definition stresses the social functions of language and the fact that humans use it to express themselves and to manipulate objects in their environment.
When described as a system of symbolic communication, language is traditionally seen as consisting of three parts: signs, meanings, and a code connecting signs with their meanings. The study of how signs and meanings are combined, used, and interpreted is called semiotics.
Languages, understood as the particular set of speech norms of a particular community, are also a part of the larger culture of the community that speaks them. Humans use language as a way of signalling identity with one cultural group and difference from others. Human languages are usually referred to as natural languages, and the science of studying them falls under the purview of linguistics. Human language is unique in comparison to other forms of communication, such as those used by animals, because it allows humans to produce an infinite set of utterances from a finite set of elements.
A Bilingual Sign
Members of a culture usually share a common language. Here, a bilingual sign in Wales tells both English and Welsh speakers that smoking is prohibited.
3.2.4: Language and Perception
Various theories assume that language is not simply a representational tool; rather it fundamentally shapes our perception.
Learning Objective
Explain the Sapir-Whorf hypothesis
Key Points
- The principle of linguistic relativity holds that the structure of a language affects the ways in which its speakers conceptualize their world (i.e., world view), or otherwise influences their cognitive processes.
- A main point of debate in the discussion of linguistic relativity is the strength of correlation between language and thought. The strongest form of correlation is linguistic determinism, which holds that language entirely determines an individual’s range of possible cognitive processes.
- The hypothesis of linguistic determinism is now generally agreed to be false, although many researchers still study weaker forms of correlation, often producing positive empirical evidence for a correlation.
- The crucial question is whether human psychological faculties are mostly universal and innate, or whether they are mostly a result of learning, and, therefore, subject to cultural and social processes that vary between places and times.
Key Terms
- Perception
-
(cognition) That which is detected by the five senses; not necessarily understood (imagine looking through fog, trying to understand if you see a small dog or a cat); also that which is detected within consciousness as a thought, intuition, deduction, etc.
- relativity
-
The state of being relative to something else.
Various theories assume that language fundamentally shapes our perception. One example is the principle of linguistic relativity. This principle holds that the structure of a language affects the ways in which its speakers conceptualize their world (i.e., their worldview) or otherwise influences their cognitive processes. Popularly known as the Sapir–Whorf hypothesis, or Whorfianism, the principle is often defined as having two versions:
- The strong version states that language determines thought and emotions/feelings, and linguistic categories limit and determine cognitive categories
- The weak version argues that linguistic categories and usage influence thought and certain kinds of non-linguistic behavior.
The concept of linguistic relativity describes different formulations of the principle that cognitive processes, such as thought, emotion/feelings and experience, may be influenced by the categories and patterns of the language a person speaks. Empirical research into the question has been associated mainly with the names of Benjamin Lee Whorf, who wrote on the topic in the 1930s, and his mentor Edward Sapir, who did not himself write extensively on the topic.
A main point of debate in the discussion of linguistic relativity is the strength of correlation between language and thought and emotion/feelings. The strongest form of correlation is linguistic determinism, which holds that language entirely determines the range of possible cognitive processes of an individual. The hypothesis of linguistic determinism is now generally agreed to be false, though many researchers are still studying weaker forms of correlation, often producing positive empirical evidence for a correlation.
The centrality of the question of the relation between thought or emotions/feelings and language has brought attention to the issue of linguistic relativity, not only from linguists and psychologists, but also from anthropologists, philosophers, literary theorists, and political scientists. For example, can people experience or feel something for which they have no word?
The crucial question is whether human psychological faculties are mostly universal and innate, or whether they are mostly a result of learning, and, therefore, subject to cultural and social processes that vary between places and times. The Universalist view holds that all humans share the same set of basic faculties, and that variability due to cultural differences is negligible. This position often sees the human mind as mostly a biological construction, so that all humans sharing the same neurological configuration can be expected to have similar or identical basic cognitive patterns.
The contrary position can be described in several ways. The constructivist view holds that human faculties and concepts are largely influenced by socially constructed and learned categories that are not subject to many biological restrictions. The idealist view holds that human mental capacities are generally unrestricted by their biological-material basis. The essentialist view holds that there may be essential differences in the ways different individuals or groups experience and conceptualize the world. The relativist position, which basically refers to a kind of cultural relativism, sees different cultural groups as having different conceptual schemes that are not necessarily compatible or commensurable, nor more or less in accord with external reality.
Edward Sapir
Empirical research into the question of linguistic relativity has been associated mainly with the names of Benjamin Lee Whorf, who wrote on the topic in the 1930s, and his mentor Edward Sapir, who did not himself write extensively on the topic.
3.2.5: Symbols and Nature
Language is a symbolic system of communication based on a complex system of rules relating spoken, signed, or written symbols.
Key Points
- Human language is thought to be fundamentally different from and of much higher complexity than that of other species as it is based on a complex system of rules that result in an indefinite number of possible utterances from a finite number of elements.
- Written languages use visual symbols to represent the sounds of the spoken languages, but they still require syntactic rules that govern the production of meaning from sequences of words.
- Human language differs from communication used by animals because the symbols and grammatical rules of any particular language are largely arbitrary, so that the system can only be acquired through social interaction.
- The study of how signs and meanings are combined, used, and interpreted is called semiotics.
- Signs can be composed of sounds, gestures, letters, or symbols, depending on whether the language is spoken, signed, or written.
- Language is traditionally seen as consisting of three parts: signs, meanings, and a code connecting signs with their meanings.
Key Terms
- written language
-
A written language is the representation of a language by means of a writing system.
- human language
-
Human language is typically used for communication, and may be spoken, signed, or written.
- semiotics
-
The study of signs and symbols, especially as means of language or communication.
Example
- Semiotics is the study of signs. A sign is a symbol that stands for something else. In English, the word “car” is the sign we use to refer to a common personal vehicle. In French, the sign for the same object is “voiture”.
Language is traditionally thought to consist of three parts: signs, meanings, and a code connecting signs with their meanings. Semiotics is the study of how signs and meanings are combined, used, and interpreted. Signs can consist of sounds, gestures, letters, or symbols, depending on whether the language is spoken, signed, or written.
Language as a whole, therefore, is the human capacity for acquiring and using complex systems of communication. A single language is any specific example of such a system. Language is based on complex rules relating spoken, signed, or written symbols to their meanings. What results is an indefinite number of possible innovative utterances from a finite number of elements.
Human language is thought to be fundamentally different from and of much higher complexity than the communication systems of other species. Human language differs from communication used by animals because the symbols and grammatical rules of any particular language are largely arbitrary, meaning that the system can only be acquired through social interaction.
A Barking Dog
Animal sounds, like a dog’s bark, may serve basic communication functions, but they lack the symbolic elements of human language.
A Sentence Diagram
Human language’s grammatical structure makes it unique.
Written language is the representation of a language by means of a writing system. Written language exists only as a complement to a specific spoken language. Written languages use visual symbols to represent the sounds of the spoken languages, but they still require syntactic rules that govern the production of meaning from sequences of words.
A sign language is a language which, instead of acoustically conveying sound patterns, uses manual communication and body language to convey meaning. This can involve simultaneously combining hand shapes; orientation and movement of the hands, arms or body; and facial expressions to fluidly express a speaker’s thoughts. Sign languages, like spoken languages, organize elementary units into meaningful semantic units.
3.2.6: Gestures
A gesture is a form of non-verbal communication in which visible bodily actions communicate particular messages.
Learning Objective
Explain the role of gestures in the communication process
Key Points
- Gestures allow individuals to communicate a variety of feelings and thoughts, from contempt and hostility to approval and affection, often together with body language in addition to spoken words.
- The most familiar categories of gestures are the so-called emblems or quotable gestures. These are conventional, culture-specific gestures that can be used as replacement for words, such as the handwave used in the U.S. for “hello” and “goodbye”.
- Another broad category of gestures comprises those gestures used spontaneously when we speak. These gestures are closely coordinated with speech.
- Gestural languages such as American Sign Language and its regional siblings operate as complete natural languages that are gestural in modality.
- Gesturing is probably universal; there have been no reports of communities that do not gesture. Gestures are a crucial part of everyday conversation such as chatting, describing a route, or negotiating prices on a market.
Key Terms
- gestural languages
-
A gestural language is a language which, instead of acoustically conveyed sound patterns, uses manual communication and body language to convey meaning. This can involve simultaneously combining hand shapes, orientation and movement of the hands, arms or body, and facial expressions to fluidly express a speaker’s thoughts.
- gesture
-
A motion of the limbs or body, especially one made to emphasize speech.
- quotable gestures
-
Quotable gestures are conventional, culture-specific gestures that can be used as replacement for words.
Example
- Gestures are culturally specific. For example, the quotable gesture for “come along” varies among cultures. In the United States, the gesture is made with the arm outstretched, palm facing up. The gesturer then bends the arm up at the elbow and curls the fingers toward the body. In China, however, the gesture is made differently. The arm is dropped to the side, only slightly outstretched, with the palm facing down and fingers dangling. The gesturer then waves the fingers in toward the body.
A gesture is a form of non-verbal communication in which visible bodily actions communicate particular messages, either in place of speech or together and in parallel with spoken words. Gestures include movement of the hands, face, or other parts of the body. Gestures differ from physical non-verbal communication that does not communicate specific messages, such as purely expressive displays, proxemics, or displays of joint attention. Gestures allow individuals to communicate a variety of feelings and thoughts, from contempt and hostility to approval and affection, often together with body language in addition to spoken words.
The most familiar categories of gestures are the so-called emblems or quotable gestures. These are conventional, culture-specific gestures that can be used as replacement for words, such as the handwave used in the U.S. for “hello” and “goodbye.” Another broad category of gestures comprises those gestures used spontaneously when we speak. These gestures are closely coordinated with speech. Gestural languages such as American Sign Language and its regional siblings operate as complete natural languages that are gestural in modality.
Gestural Language
American Sign Language, or ASL, is a gestural language. This is how to sign the letters A-S-L.
Many animals, including humans, use gestures to initiate a mating ritual. This may include elaborate dances and other movements. Gestures play a major role in many aspects of human life. Gesturing is probably universal; there have been no reports of communities that do not gesture. Gestures are a crucial part of everyday conversation, such as chatting, describing a route, or negotiating prices on a market; they are ubiquitous. Gestures have been documented in the arts, such as Greek vase paintings, Indian miniatures, and European paintings.
Hand Gestures
Military air marshallers use hand and body gestures to direct flight operations aboard aircraft carriers.
Pointing
Pointing at another person with an extended finger is considered rude in many cultures.
3.2.7: Values
Cultures have values that are largely shared by their members, which identify what should be judged as good or evil.
Learning Objective
Contrast values and norms
Key Points
- The values of a society can often be identified by noting which people receive honor or respect.
- Values are related to the norms of a culture, but they are more global and abstract than norms.
- Norms are rules for behavior in specific situations, while values identify what should be judged as good or evil.
- Members take part in a culture even if each member’s personal values do not entirely agree with some of the normative values sanctioned in the culture.
- Values clarification is the process of helping people clarify what their lives are for and what is worth working for.
- Cognitive moral education is based on the belief that students should learn to value things like democracy and justice as their moral reasoning develops.
Key Terms
- subculture
-
A portion of a culture distinguished from the larger society around it by its customs or other features.
- culture
-
The beliefs, values, behavior, and material objects that constitute a people’s way of life.
- norm
-
A rule that is enforced by members of a community.
Example
- Values are general principles or ideals upheld by a society. For example, in the United States, hard work and independence are fundamental values.
Values can be defined as broad preferences concerning appropriate courses of action or outcomes. Values reflect a person’s sense of right and wrong, or what “ought” to be. Some examples of values are the concepts of “equal rights for all,” “excellence deserves admiration,” and “people should be treated with respect and dignity.” Values tend to influence attitudes and behavior.
Cultures have values that are largely shared by their members. Different cultures reflect different values. Noting which people receive honor or respect can provide clues to the values of a society. In the US, for example, some professional athletes are honored (in the form of monetary payment) more than college professors.
Values are related to the norms of a culture, but they are more global and abstract than norms. Norms are rules for behavior in specific situations, while values identify what should be judged as good or evil. Flying the national flag on a holiday is a norm, but it reflects the value of patriotism. Wearing dark clothing and appearing solemn are normative behaviors at a funeral; in certain cultures, this reflects the values of respect for and support of friends and family. Different cultures reflect different values.
Members take part in a culture even if each member’s personal values do not entirely agree with some of the normative values sanctioned in the culture. This reflects an individual’s ability to synthesize and extract aspects valuable to them from the multiple subcultures to which they belong. If a group member expresses a value that is in serious conflict with the group’s norms, the group’s authority may encourage conformity or stigmatize the non-conforming behavior of its members.
Punk
Punk social groups are often considered marginal and are excluded from certain mainstream social spaces.
Declaration of Independence
Independence and freedom are fundamental values in the U.S.
Punks as non-conformists
Members of the punk movement refused to conform to some of the normative values prevalent in Western culture.
The Liberty Bell
Many consider liberty to be a fundamental American value.
3.2.8: Norms
Social norms are the explicit or implicit rules specifying what behaviors are acceptable within a society or group.
Learning Objective
Explain the origin, reinforcement, and significance of social norms in a society or group
Key Points
- Norms can be defined as the shared ways of thinking, feeling, desiring, deciding, and acting which are observable in regularly repeated behaviours and are adopted because they are assumed to solve problems.
- Social norms are neither static nor universal; they change with respect to time and vary with respect to culture, social classes, and social groups.
- Social norms can be enforced formally (e.g., through sanctions) or informally (e.g., through body language and non-verbal communication cues).
- One form of norm adoption is the formal method, where norms are written down and formally adopted. However, social norms are more likely to be informal and emerge gradually (e.g., not wearing socks with sandals).
Key Terms
- social norms
-
Social norms are described by sociologists as the rules that govern a society’s behaviors.
- social group
-
A collection of humans or animals that share certain characteristics, interact with one another, accept expectations and obligations as members of the group, and share a common identity.
- social classes
-
Social class (or simply “class”) is a set of concepts in the social sciences and political theory centered on models of social stratification in which people are grouped into a set of hierarchical social categories.
Examples
- An example of an explicit social norm is a law, such as a law that prohibits alcohol use by anyone under the age of 21. Explicit norms are often enforced by formal sanctions. In this case, the formal sanction may be a fine or jail time.
- An implicit social norm is an expectation that, though unstated, is commonly accepted by members of a group. An example is the use of deodorant. People are not required to wear deodorant, but most people in the United States expect others to do so and do so themselves. Implicit norms are usually enforced by informal sanctions. In this case, informal sanctions might include dirty looks or avoidance.
Social norms are the explicit or implicit rules specifying acceptable behaviors within a society or group. They define the expected or acceptable behavior in particular circumstances. Social norms can also be defined as the shared ways of thinking, feeling, desiring, deciding, and acting which are observable in regularly repeated behaviors and are adopted because they are assumed to solve problems.
Social norms are neither static nor universal; they change with respect to time and vary with respect to culture, social classes, and social groups. What is deemed acceptable dress, speech, or behavior in one social group may not be acceptable in another.
Deference to social norms maintains one’s acceptance and popularity within a particular group. Social norms can be enforced formally (e.g., through sanctions) or informally (e.g., through body language and non-verbal communication cues). By ignoring or breaking social norms, one risks facing formal sanctions or quiet disapproval, finding oneself unpopular with or ostracized from a group.
Formal Sanctions
Norms may be enforced through informal sanctions, such as derision, or formal sanctions, such as arrest.
As social beings, individuals learn when and where it is appropriate to say certain things, use certain words, discuss certain topics, or wear certain clothes, and when it is not. Groups may adopt norms in two different ways. One form of norm adoption is the formal method, where norms are written down and formally adopted (e.g., laws, legislation, club rules). Social norms are much more likely to be informal and to emerge gradually (e.g., not wearing socks with sandals).
Groups internalize norms by accepting them as reasonable and proper standards for behavior within the group. That said, while it is more likely that a new individual entering a group will adopt the group’s norms, values, and perspectives, newcomers to a group can also change a group’s norms.
Same-Sex Marriage and Social Norms
In most Western countries, norms have prohibited same-sex marriage, but those norms are now changing.
3.2.9: Sanctions
As opposed to forms of internal control, like norms and values, sociologists consider sanctions a form of external control.
Learning Objective
Differentiate between methods of formal and informal social control
Key Points
- Sanctions can either be positive (rewards) or negative (punishment).
- Sanctions can arise from either formal or informal control.
- With informal sanctions, ridicule or ostracism can realign a straying individual towards norms. Informal sanctions may include shame, ridicule, sarcasm, criticism, and disapproval.
- Groups, organizations, and societies of various kinds can promulgate rules that act as formal sanctions to reward or punish behavior. For example, government and organizations use law enforcement mechanisms and other formal sanctions such as fines and imprisonment.
- To maintain control and regulate their subjects, authoritarian organizations and governments use severe sanctions such as censorship, expulsion, and limits on political freedom.
Key Terms
- Informal sanctions
-
These are the reactions of individuals and groups that bring about conformity to norms and laws. These can include peer and community pressure, bystander intervention in a crime, and collective responses such as citizen patrol groups.
- sanction
-
A penalty, or some coercive measure, intended to ensure compliance, especially one adopted by several nations or by an international body.
- social control
-
Any control, either formal or informal, that is exerted by a group, especially by one’s peers.
Example
- Internal controls are a form of social control that we impose on ourselves. For example, you may choose to wear nice clothes to class instead of pajamas, not because there is a rule against pajamas or because anyone has directly exercised social sanctions against you, but because you have internalized the norm of dressing when you leave home.
Sanctions
Sanctions are mechanisms of social control. As opposed to forms of internal control, like cultural norms and values, sociologists consider sanctions a form of external control. Sanctions can either be positive (rewards) or negative (punishment), and can arise from either formal or informal control.
Informal Social Control and Deviance
The social values present in individuals are products of informal social control. This type of control emerges from society, but is rarely stated explicitly to individuals. Instead, it is expressed and transmitted indirectly, through customs, norms and mores. Whether consciously or not, individuals are socialized. With informal sanctions, ridicule or ostracism can cause a straying individual to realign behavior toward group norms. Informal sanctions may include shame, ridicule, sarcasm, criticism, and disapproval. In extreme cases, sanctions may include social discrimination and exclusion. If a young boy is caught skipping school, and his peers ostracize him for his deviant behavior, they are exercising an informal sanction on him. Informal sanctions can check deviant behavior of individuals or groups, either through internalization, or through disincentivizing the deviant behavior.
As with formal controls, informal controls reward or punish acceptable or unacceptable behavior, otherwise known as deviance. Informal controls are varied and differ from individual to individual, group to group, and society to society. To maintain control and regulate their subjects, groups, organizations, and societies of various kinds can promulgate rules that act as formal sanctions to reward or punish behavior. For example, in order to regulate behavior, government and organizations use law enforcement mechanisms and other formal sanctions such as fines and imprisonment. Authoritarian organizations and governments may rely on more directly aggressive sanctions. These actions might include censorship, expulsion, restrictions on political freedom, or violence. Typically, these more extreme sanctions emerge in situations where the public disapproves of either the government or organization in question.
A Prison Cell Block
Incarceration is a type of formal sanction.
Shame
Shame can be used as a type of informal sanction.
3.2.10: Folkways and Mores
Folkways and mores are informal norms that dictate behavior; however, the violation of mores carries heavier consequences.
Learning Objective
Differentiate between folkways and mores
Key Points
- Societal norms, or rules that are enforced by members of a community, can exist as both formal and informal rules of behavior. Informal norms can be divided into two distinct groups: folkways and mores.
- Both “mores” and “folkways” are terms coined by the American sociologist William Graham Sumner.
- Mores distinguish the difference between right and wrong, while folkways draw a line between right and rude. While folkways may raise an eyebrow if violated, mores dictate morality and come with heavy consequences.
Key Terms
- William Graham Sumner
-
An American academic with numerous books and essays on American history, economic history, political theory, sociology, and anthropology.
- mores
-
A set of moral norms or customs derived from generally accepted practices. Mores derive from the established practices of a society rather than its written laws.
- folkway
-
A custom or belief common to members of a society or culture.
Example
- Different regions of the United States have different folkways that govern how people greet one another. In many rural regions, people crossing paths in the street nod and say “hello” or “how are you?” Drivers meeting one another on remote country roads give each other a quick wave. But in most urban regions, neither walkers nor drivers acknowledge one another unless provoked. Urban residents who travel to remote places may notice the difference and find the folkways unusual. The local residents may find the urban newcomers strange or a little cold if they do not offer greetings, but they will probably not sanction them formally or informally. Likewise, in the city, residents may think newcomers from the country a bit odd if they give unsolicited greetings, but those greetings will probably not draw sanctions.
Societal norms, or rules that are enforced by members of a community, can exist as both formal and informal rules of behavior. Informal norms can be divided into two distinct groups: folkways and mores. Folkways are informal rules and norms that, while not offensive to violate, are expected to be followed. Mores (pronounced more-rays) are also informal rules that are not written, but, when violated, result in severe punishments and social sanctions, such as social or religious exclusion.
William Graham Sumner, an early U.S. sociologist, recognized that some norms are more important to our lives than others. Sumner coined the term mores to refer to norms that are widely observed and have great moral significance. Mores are often seen as taboos; for example, most societies hold the more that adults must not engage in sexual relations with children. Mores emphasize morality through right and wrong, and come with heavy consequences if violated.
William Graham Sumner, 1840-1910
William Graham Sumner coined the terms “folkways” and “mores.”
Sumner also coined the term folkway to refer to norms for more routine or casual interaction. This includes ideas about appropriate greetings and proper dress in different situations. In comparison to the morality of mores, folkways dictate what could be considered either polite or rude behavior. Their violation does not invite any serious punishment or formal sanctions, but may draw reprimands or warnings.
An example to distinguish the two: a man who does not wear a tie to a formal dinner party may raise eyebrows for violating folkways; were he to arrive wearing only a tie, he would violate cultural mores and invite a more serious response.
3.3: Culture and Adaptation
3.3.1: The Origins of Culture
Culture is a central concept in anthropology, encompassing the range of human phenomena that cannot be attributed to genetic inheritance.
Learning Objective
Paraphrase what is currently thought to be the reason for the development of language and complex culture
Key Points
- The term “culture” has two meanings: (1) the evolved human capacity to classify and represent experiences with symbols, and to act creatively; and (2) the distinct ways that people living in different parts of the world acted creatively and classified or represented their experiences.
- Distinctions are currently made between the physical artifacts created by a society (its so-called material culture) and everything else, including intangibles such as language and customs, which are the main referent of the term “culture.”
- The origin of language, understood as the human capacity of complex symbolic communication, and the origin of complex culture are often thought to stem from the same evolutionary process in early man.
- Language and culture both emerged as a means of using symbols to construct social identity and maintain coherence within a social group too large to rely exclusively on the pre-human ways of building community (for example, grooming).
Key Terms
- community
-
A group sharing a common understanding and often the same language, manners, tradition and law.
- horticulture
-
The art or science of cultivating gardens; gardening.
Example
- Adaptation is necessary for our continued survival, but biological adaptation can be slow. Culture allows humans to more quickly adapt. For example, humans face dangers from food-borne illnesses, such as trichinosis from pork. A biologically adaptive strategy could emerge over many generations if humans developed immunity to trichinosis, but a culturally adaptive strategy could emerge more quickly if a group of people enforced a norm against eating pork.
Culture (Latin: cultura, lit. “cultivation”) is a modern concept based on a term first used in classical antiquity by the Roman orator, Cicero: “cultura animi.” The term “culture” appeared first in its current sense in Europe in the 18th and 19th centuries, to connote a process of cultivation or improvement, as in agriculture or horticulture. In the 19th century, the term developed to refer first to the betterment or refinement of the individual, especially through education, and then to the fulfillment of national aspirations or ideals. In the mid-19th century, some scientists used the term “culture” to refer to a universal human capacity.
In the 20th century, “culture” emerged as a central concept in anthropology, encompassing the range of human phenomena that cannot be attributed to genetic inheritance. Specifically, the term “culture” in American anthropology had two meanings: (1) the evolved human capacity to classify and represent experiences with symbols, and to act imaginatively and creatively; and (2) the distinct ways that people living in different parts of the world acted creatively and classified or represented their experiences. Distinctions are currently made between the physical artifacts created by a society (its so-called material culture) and everything else, including intangibles such as language and customs, which are the main referent of the term “culture.”
The origin of language, understood as the human capacity of complex symbolic communication, and the origin of complex culture are often thought to stem from the same evolutionary process in early man. Evolutionary anthropologist Robin I. Dunbar has proposed that language evolved as early humans began to live in large communities that required the use of complex communication to maintain social coherence. Language and culture then both emerged as a means of using symbols to construct social identity and maintain coherence within a social group too large to rely exclusively on pre-human ways of building community (for example, grooming).
However, languages, now understood as the particular set of speech norms of a particular community, are also a part of the larger culture of the community that speaks them. Humans use language as a way of signaling identity with one cultural group and difference from others. Even among speakers of one language, several different ways of using the language exist, and each is used to signal affiliation with particular subgroups within a larger culture.
Nomads
Anthropologists rejected the idea that culture was unique to Western society and adopted a new definition of culture that applied to all societies, literate and non-literate, settled and nomadic.
3.3.2: Mechanisms of Cultural Change
The belief that culture can be passed from one person to another means that cultures, although bounded, can change.
Learning Objective
Describe at least two mechanisms which foster cultural change
Key Points
- Cultures are internally affected by both forces encouraging change and forces resisting change. These forces are related to social structures and natural events, and are involved in the perpetuation of cultural ideas and practices within current structures, which are themselves subject to change.
- Cultural change can have many causes, including the environment, technological inventions, and contact with other cultures.
- In diffusion, the form of something (though not necessarily its meaning) moves from one culture to another.
- Acculturation has different meanings, but in this context it refers to replacement of the traits of one culture with those of another, as has happened to certain Native American tribes and to many indigenous peoples across the globe during the process of colonization.
- “Direct borrowing,” on the other hand, tends to refer to technological or tangible diffusion from one culture to another.
- Griswold suggests that culture changes through the contextually dependent and socially situated actions of individuals; macro-level culture influences the individual who, in turn, can influence that same culture.
- In anthropology, diffusion theory states that the form of something moves from one culture to another, but not its meaning. Acculturation theory refers to replacement of the traits of one culture with those of another.
Key Terms
- habit
-
An action performed repeatedly and automatically, usually without awareness.
- assimilation
-
The adoption, by a minority group, of the customs and attitudes of the dominant culture.
Example
- An invention that substantially changed culture was the development of the birth control pill, which changed women’s attitudes toward sex. Prior to the introduction of the birth control pill, women were at a high risk of pregnancy as a result of sex. After the introduction of the pill, their risk of pregnancy was substantially reduced, increasing their willingness to engage in sexual activity outside of wedlock.
Fundamentally, although bounded, cultures can change. Cultures are internally affected by both forces encouraging change and forces resisting change. These forces are related to social structures and natural events, and are involved in the perpetuation of cultural ideas and practices within current structures, which are themselves subject to change. Resistance can come from habit, religion, and the integration and interdependence of cultural traits. For example, men and women have complementary roles in many cultures. One sex might desire changes that affect the other, as happened in the second half of the 20th century in western cultures (see, for example, the women’s movement), while the other sex may be resistant to that change (possibly in order to maintain a power imbalance in their favor).
Biology versus Culture
These two avatars illustrate the basic concept of culture. One is simply a reflection of his biology; he is human. The other is a reflection of his biology and his culture: he is human and belongs to a cultural group or sub-culture.
Cultural change can have many causes, including the environment, technological inventions, and contact with other cultures. Cultures are externally affected via contact between societies, which may also produce—or inhibit—social shifts and changes in cultural practices. War or competition over resources may impact technological development or social dynamics. Additionally, cultural ideas may transfer from one society to another, through diffusion or acculturation.
Discovery and invention are mechanisms of social and cultural change. Discovery refers to the finding of new knowledge within an existing realm. Generally, it relates to discovering new understanding of a particular behavior or ritual. Invention is the creation of a new device or process. New discoveries often lead to new inventions by people.
In diffusion, the form of something (though not necessarily its meaning) moves from one culture to another. For example, hamburgers, mundane in the United States, seemed exotic when introduced into China. “Stimulus diffusion” (the sharing of ideas) refers to an element of one culture leading to an invention or propagation in another.
The Change of Symbolic Meaning Over Time
The symbol of the ankh has its roots in Egyptian religious practice, but the symbol diffused over time and was adopted by other groups, including pagans, as a religious symbol.
Acculturation has different meanings, but in this context it refers to replacement of the traits of one culture with those of another, as has happened to certain Native American tribes and to many indigenous peoples across the globe during the process of colonization. Related processes on an individual level include assimilation (adoption of a different culture by an individual) and transculturation.
3.3.3: Cultural Lag
The term “cultural lag” refers to the fact that culture takes time to catch up with technological innovations, resulting in social problems.
Learning Objective
Produce an example of cultural lag using an example of the tension between material and non-material culture
Key Points
- Cultural lag is not only a concept, as it also relates to a theory and explanation in sociology.
- It helps identify and explain social problems and also predict future problems.
- According to Ogburn, cultural lag is a common societal phenomenon due to the tendency of material culture to evolve and change rapidly and voluminously while non-material culture tends to resist change and remain fixed for a far longer period of time.
- Due to the opposing nature of these two aspects of culture, adaptation of new technology becomes rather difficult.
Key Terms
- non-material culture
-
In contrast to material culture, non-material culture does not include any physical objects or artifacts. Examples of non-material culture include any ideas, beliefs, values, and norms that may help shape our society.
- material culture
-
In the social sciences, material culture is a term, developed in the late 19th and early 20th century, that refers to the relationship between artifacts and social relations.
- innovation
-
The act of innovating; the introduction of something new, in customs, rites, and so on.
Example
- Cultural lag can occur when technological innovation outpaces cultural adaptation. For example, when cars were first invented, there were not yet any laws to govern driving: no speed limits, no guidelines for who had the right of way at intersections, no lane markers, no stop signs, and so on. As you can imagine, the result was chaos. City streets became incredibly dangerous. Laws soon were written to address this problem, closing the gap.
The term cultural lag refers to the notion that culture takes time to catch up with technological innovations, and that social problems and conflicts are caused by this lag. Cultural lag is not only a concept, as it also relates to a theory and explanation in sociology. Cultural lag helps to identify and explain social problems and to predict future problems.
The term was coined by the sociologist William F. Ogburn in his 1922 work “Social Change with Respect to Culture and Original Nature.” According to Ogburn, cultural lag is a common societal phenomenon due to the tendency of material culture to evolve and change rapidly while non-material culture tends to resist change and remain fixed for a far longer period of time. His theory of cultural lag suggests that a period of maladjustment occurs when the non-material culture is struggling to adapt to new material conditions.
Due to the opposing nature of these two aspects of culture, adaptation of new technology becomes rather difficult. As explained by James W. Woodward, when material conditions change, changes are felt in the non-material culture as well. But these changes in the non-material culture do not match exactly with the change in the material culture. This delay is the cultural lag.
Cultural lag creates problems for a society in different ways. Cultural lag is seen as a critical ethical issue because failure to develop broad social consensus on appropriate uses of modern technology may lead to breakdowns in social solidarity and the rise of social conflict. The issue of cultural lag tends to permeate any discussion in which the implementation of some new technology can become controversial for society at large.
Human Embryonic Stem Cells
An example of cultural lag involves human embryonic stem cells. We have the necessary technology to turn stem cells into neurons but have not yet developed ethical guidelines and cultural consensus on this practice.
3.3.4: Animals and Culture
Animal culture refers to cultural learning in non-human animals through socially transmitted behaviors.
Learning Objective
Formulate a thesis which defends the idea that non-human animals have culture
Key Points
- Much cultural anthropological research has been done on non-human primates, due to their close evolutionary proximity to humans.
- One of the first signs of culture in early humans was the use of tools. Chimpanzees have been observed using tools such as rocks and sticks to obtain better access to food.
- The acquisition and sharing of behaviors correlates directly to the existence of memes, which are defined as “units of cultural transmission” by the evolutionary biologist Richard Dawkins.
- Though the idea of culture in animals has only been around for just over half of a century, scientists have been noting social behaviors of animals for centuries.
- Aristotle was the first to provide evidence of social learning in birdsong. Charles Darwin first attempted to find the existence of imitation in animals when trying to prove his theory that the human mind had evolved from that of lower beings.
Key Terms
- meme
-
Any unit of cultural information, such as a practice or idea, that is transmitted verbally or by repeated action from one mind to another.
- social behaviors
-
In physiology and sociology, social behavior is behavior directed towards society, or taking place between members of the same species.
- cultural anthropological research
-
Cultural anthropology is a branch of anthropology focused on the study of cultural variation among humans, collecting data about the impact of global economic and political processes on local cultural realities.
Animal culture refers to cultural learning in non-human animals through socially transmitted behaviors. The question of the existence of culture in non-human societies has been a contentious subject for decades due to the lack of a concise definition of culture. However, many scientists agree that culture can be defined as a process, rather than an end product. This process, most agree, involves the social transmission of a novel behavior, both among peers and between generations. This behavior is shared by a group of animals, but not necessarily between separate groups of the same species.
Animal Culture
A chimpanzee mother and baby.
Tools and Learned Activities
One of the first signs of culture in early humans was the use of tools. Chimpanzees have been observed using tools such as rocks and sticks to obtain better access to food. There are other learned activities that have been exhibited by animals as well. Examples of activities exhibited by various animals include opening oysters, swimming, washing food, and unsealing tin lids. The acquisition and sharing of behaviors correlates directly to the existence of memes, which are defined as “units of cultural transmission” by the evolutionary biologist Richard Dawkins. This sharing of behaviors especially reinforces the natural selection component. These learned actions are mechanisms for making life easier, and therefore longer.
History of Animal Culture
Though the idea of culture in animals has only been around for just over half of a century, scientists have been noting social behaviors of animals for centuries. Aristotle was the first to provide evidence of social learning in birdsong. Charles Darwin first attempted to find the existence of imitation in animals when trying to prove his theory that the human mind had evolved from that of lower beings. Darwin was also the first to suggest what became known as ‘social learning’ in explaining the transmission of an adaptive behavior pattern throughout a population of honey bees.
Much cultural anthropological research has been done on non-human primates, due to their close evolutionary proximity to humans. In non-primate animals, research tends to be limited, so the evidence for culture is lacking. The subject has become more popular recently, prompting more research in the field.
3.4: Culture Worlds
3.4.1: Subcultures
A subculture is a culture shared and actively participated in by a minority of people within a broader culture.
Learning Objective
Give examples for subcultures by using Gelder’s proposed criteria
Key Points
- Subcultures incorporate large parts of the broader cultures of which they are part; in specifics they may differ radically.
- The study of subcultures often consists of the study of symbolism attached to clothing, music, and other visible affectations by members of subcultures. Sociologists also study the ways in which these same symbols are interpreted by members of the dominant culture.
- Cultural appropriation is the process by which businesses often seek to capitalize on the subversive allure of subcultures in search of “cool,” which remains valuable in the selling of any product.
Key Terms
- cultural appropriation
-
Cultural appropriation is the adoption of some specific elements of one culture by a different cultural group.
- subculture
-
A portion of a culture distinguished from the larger society around it by its customs or other features.
- symbolism
-
Representation of a concept through symbols or underlying meanings of objects or qualities.
Example
- Religious minorities could be considered subcultures. For example, Mormons might be considered a subculture. Within Mormon culture, there may be yet more subcultures (or sub-subcultures), such as those who continue to practice polygamy.
In sociology, anthropology, and cultural studies, a subculture is a group of people with a culture that differentiates themselves from the larger culture to which they belong. A culture often contains numerous subcultures, which incorporate large parts of the broader cultures of which they are part; in specifics they may differ radically. Subcultures bring together like-minded individuals who feel neglected by societal standards and allow them to develop a sense of identity.
Subcultures and Symbolism
The study of subcultures often consists of the study of symbolism attached to clothing, music, and other visible affectations by members of subcultures. Additionally, sociologists study the ways in which these symbols are interpreted by members of the dominant culture. Some subcultures achieve such a status that they acquire a name. Members of a subculture often signal their membership through a distinctive and symbolic use of style, which includes fashions, mannerisms, and argot. Examples of subcultures could include bikers, military personnel, and Star Trek fans.
Trekkies
The hand gesture meaning ‘live long and prosper’ has spread beyond the subculture of Star Trek fans and is often recognized in mainstream culture.
Identifying Subcultures
It may be difficult to identify certain subcultures because their style—particularly clothing and music—may be adopted by mass culture for commercial purposes. Businesses often seek to capitalize on the subversive allure of subcultures in search of “cool,” which remains valuable in the selling of any product. This process of cultural appropriation may often result in the death or evolution of the subculture, as its members adopt new styles that appear alien to mainstream society.
In 2007, Ken Gelder proposed six key ways in which subcultures can be identified:
- Through their often negative relations to work (as ‘idle’, ‘parasitic’, at play or at leisure, etc.)
- Through their negative or ambivalent relation to class (since subcultures are not ‘class-conscious’ and don’t conform to traditional class definitions)
- Through their association with territory (the ‘street’, the ‘hood’, the club, etc.), rather than property
- Through their movement out of the home and into non-domestic forms of belonging (i.e. social groups other than the family)
- Through their stylistic ties to excess and exaggeration (with some exceptions)
- Through their refusal of the banalities of ordinary life
3.4.2: Countercultures
Counterculture is a term describing the values and norms of a cultural group that run counter to those of the social mainstream of the day.
Learning Objective
Apply the concept of counterculture to the rise and collapse of the US Hippie movement
Key Points
- Examples of countercultures in the U.S. could include the hippie movement of the 1960s, the green movement, polygamists, and feminist groups.
- A counterculture is a subculture with the specific characteristic that some of its beliefs, values, or norms challenge or even contradict those of the main culture with which it shares a geographic region and/or origin.
- Countercultures run counter to dominant cultures and the social mainstream of the day.
Key Terms
- counterculture
-
Any culture whose values and lifestyles are opposed to those of the established mainstream culture, especially to western culture.
- culture
-
The beliefs, values, behavior, and material objects that constitute a people’s way of life.
- mainstream
-
Purchased, used, or accepted broadly rather than by a tiny fraction of a population or market; common, usual, or conventional.
Example
- Modern American Marxist political groups are examples of countercultures — they promote a worldview and set of norms and values that are contrary to the dominant American system.
“Counterculture” is a sociological term that refers to a cultural group or subculture whose values and norms of behavior run counter to those of the region’s social mainstream; it can be considered the cultural equivalent of political opposition.
In the United States, the counterculture of the 1960s became identified with the rejection of conventional social norms of the 1950s. Counterculture youth rejected the cultural standards of their parents, especially with respect to racial segregation and initial widespread support for the Vietnam War.
As the 1960s progressed, widespread tensions developed in American society that tended to flow along generational lines regarding the war in Vietnam, race relations, sexual mores, women’s rights, traditional modes of authority, and a materialistic interpretation of the American Dream. Hippies became the largest countercultural group in the United States. The counterculture also had access to media eager to present its concerns to a wider public. Demonstrations for social justice created far-reaching changes affecting many aspects of society.
Hippies at an Anti-Vietnam Demonstration, 1967
A female demonstrator offers a flower to military police on guard at the Pentagon during an anti-Vietnam demonstration.
The counterculture in the United States lasted from roughly 1964 to 1973 — coinciding with America’s involvement in Vietnam — and reached its peak in 1967, the “Summer of Love.” The movement divided the country: to some Americans, these attributes reflected American ideals of free speech, equality, world peace, and the pursuit of happiness; to others, the same attributes reflected a self-indulgent, pointlessly rebellious, unpatriotic, and destructive assault on America’s traditional moral order.
The counterculture collapsed circa 1973, and many have attributed its collapse to two major reasons: First, the most popular of its political goals — civil rights, civil liberties, gender equality, environmentalism, and the end of the Vietnam War — were accomplished. Second, a decline of idealism and hedonism occurred as many notable counterculture figures died, the rest settled into mainstream society and started their own families, and the “magic economy” of the 1960s gave way to the stagflation of the 1970s.
3.5: Culture and the Dominant Ideology in the U.S.
3.5.1: An Overview of U.S. Values
Despite certain consistent values (e.g. individualism, egalitarianism, freedom, democracy), American culture has a variety of expressions.
Learning Objective
Defend the notion that America has both consistent values and a variety of expressions
Key Points
- Values are related to the norms of a culture, but they are more global and abstract than norms. Norms are rules for behavior in specific situations, while values identify what should be judged as good or evil.
- American culture includes both conservative and liberal elements, scientific and religious competitiveness, political structures, risk taking and free expression, and materialist and moral elements.
- American culture has a variety of expressions due to its geographical scale and demographic diversity.
- Since the late 1970s, the term “traditional values” has become synonymous with “family values” in the U.S., and implies a congruence with mainstream Christianity. However, “family values” is arguably a modern politicized subset of traditional values, which is a larger concept.
Key Terms
- conservative
-
A person who favors maintenance of the status quo or reversion to some earlier status.
- liberal
-
Open to political or social changes and reforms associated with either classical or modern liberalism.
- traditional
-
Of or pertaining to tradition; derived from tradition; communicated from ancestors to descendants by word only; transmitted from age to age without writing; as, traditional opinions; traditional customs; traditional expositions of the Scriptures.
Example
- Achievement and success are typical American values. These values may explain why so many Americans pursue higher education in order to get better jobs and earn more money, as well as why Americans are given so few vacation days compared to other countries.
Cultures have values that are largely shared by their members. The values of a society can often be identified by noting which people receive honor or respect.
Values are related to the norms of a culture, but they are more global and abstract than norms. Norms are rules for behavior in specific situations, while values identify what should be judged as either good or evil. Flying the national flag on a holiday is a norm, but it reflects the value of patriotism. Wearing dark clothing and appearing solemn are normative behaviors at a funeral; in certain cultures, they reflect the values of respect and support for friends and family.
The Statue of Liberty
The Statue of Liberty symbolizes freedom, a fundamental American value.
Different cultures reflect different values. American culture includes both conservative and liberal elements, such as scientific and religious competitiveness, political structures, risk taking and free expression, and materialist and moral elements. Aside from certain consistent ideological principles (e.g. individualism, egalitarianism and faith in freedom and democracy), American culture’s geographical scale and demographic diversity have spawned a variety of expressions. The flexibility of U.S. culture and its highly symbolic nature lead some researchers to categorize American culture as a mythic identity, while others recognize it as American exceptionalism.
Declaration of Independence
Many fundamental American values are derived from the Declaration of Independence.
Since the late 1970s, the terms “traditional values” and “family values” have become synonymous in the U.S., and imply a congruence with mainstream Christianity. However, the term “family values” is arguably a modern politicized subset of traditional values, which is a larger concept, anthropologically speaking. Although it is not necessarily a political idea, it has become associated with both the particular correlation between Evangelicalism and politics (as embodied by American politicians such as Ronald Reagan, Dan Quayle, and George W. Bush) and with the broader Christian movement (as exemplified by Pat Robertson).
Traditional values as “family values”?
“Family values” is arguably a modern politicized subset of traditional values.
3.5.2: Value Clusters
People from different backgrounds tend to have different value systems, which cluster together into a more or less consistent system.
Learning Objective
Evaluate the separation of world values into the categories of ‘self-expression’ and ‘survival’
Key Points
- The World Values Survey is used to identify different clusters of values around the world.
- Traditional and survival values tend to cluster in developing countries.
- With industrialization, countries shift from traditional to secular values.
- With the rise of knowledge economies, countries tend to shift from survival to self-expression values.
Key Terms
- Traditional Values
-
Traditional values emphasize the importance of religion, parent-child ties, deference to authority and traditional family values. People who embrace these values also reject divorce, abortion, euthanasia, and suicide. These societies have high levels of national pride and a nationalistic outlook.
- Secular Values
-
Secular values, as opposed to traditional values, base morality on human faculties such as logic, reason, or moral intuition, rather than on purported supernatural revelation or guidance (which is the source of religious ethics).
Example
- In the 1970s, the United States economy underwent a major shift. Before the 1970s, most people worked in manufacturing. After the 1970s, more people worked in the service sector, and the knowledge economy took off. With this economic shift, values began to change, too. For example, the 1970s saw the blossoming of the environmental movement, which put less emphasis on survival values. Rather than focusing on putting enough food on the table, people became concerned with how that food was produced, packaged, and transported. They began to make consumption decisions based on self-expression; they bought organic produce or free range beef to express their belief that sustainability and environmental protection mattered.
People from different backgrounds tend to have different sets of values, or value systems. Certain values may cluster together into a more or less consistent system. A communal or cultural value system is held by and applied to a community, group, or society. Some communal value systems are reflected in legal codes and laws.
World Values Survey
Some sociologists are interested in better defining and measuring value clusters in different countries. To do so, they have developed what is called the World Values Survey, a survey of questions given to people around the world and used to identify different clusters of values in different regions. Over the years, the World Values Survey has demonstrated that people’s beliefs play a key role in defining life in different countries—defining anything from a nation’s economic development to the emergence of democratic institutions to the rise of gender equality.
World Values Survey
The World Values Survey is administered to people around the world. Their responses are aggregated and can be used to reveal regional value clusters, like those displayed in this map.
Trends
In general, the World Values Survey has revealed two major axes along which values cluster: (1) a continuum from traditional to secular values and (2) a continuum from survival to self-expression. Traditional values emphasize the importance of religion, parent-child ties, deference to authority, and traditional family values. People who embrace these values also reject divorce, abortion, euthanasia, and suicide. These societies have high levels of national pride and a nationalistic outlook. Secular values express the opposite preferences to traditional values. These societies place less emphasis on religion, traditional family values, and authority. Divorce, abortion, euthanasia, and suicide are seen as relatively acceptable. Industrialization tends to bring a shift from traditional values to secular ones.
With the rise of the knowledge society, cultural change moves in a new direction. The transition from industrial society to knowledge society is linked to a shift from survival values to self-expression values. In knowledge societies, such as the United States, an increasing share of the population has grown up taking survival for granted. Survival values place emphasis on economic and physical security. They are linked with a relatively ethnocentric outlook and low levels of trust and tolerance. Self-expression values give high priority to environmental protection; tolerance of foreigners, gays, and lesbians; gender equality; and participation in decision-making as it relates to economic and political life.
3.5.3: Value Contradictions
Although various values often reinforce one another, these clusters of values may also include values that contradict one another.
Learning Objective
Analyze a scenario in which a value system, either individual or collective, is shown to be internally inconsistent, and then resolve the conflict
Key Points
- Value systems may contain value contradictions. A value system by itself is internally inconsistent if its values contradict each other, and its exceptions are highly situational and inconsistently applied.
- Value contradictions can also arise within systems of personal or communal values.
- Conflicts often arise from contradictions between value systems; society tries to resolve value contradictions in order to reduce conflict.
Key Terms
- Value Consistency
-
A value system in its own right is internally consistent when its values do not contradict each other, and its exceptions are abstract enough to be used in all situations and consistently applied.
- Value Contradictions
-
A value system by itself is internally inconsistent or contradictory if its values contradict each other, and its exceptions are highly situational and inconsistently applied.
- Communal Values
-
A communal or cultural value system is held by and applied to a community/group/society. Some communal value systems are reflected in the form of legal codes or law.
Examples
- An example of conflict would be a value system based on individualism pitted against a value system based on collectivism. Society might try to resolve that conflict in various ways, such as by using the following guideline: Individuals may act freely unless their actions harm others or interfere with others’ freedom or with functions of society that individuals need, provided those functions do not themselves interfere with these prescribed individual rights and were agreed to by a majority of the individuals.
- A society (or more specifically the system of order that enables the workings of a society) exists for the purpose of benefiting the lives of the individuals who are members of that society. The functions of a society in providing such benefits would be those agreed to by the majority of individuals in the society.
- A society may require contributions from its members in order for them to benefit from the services provided by the society. The failure of individuals to make such required contributions could be considered a reason to deny those benefits to them, although a society could elect to consider hardship situations in determining how much should be contributed.
- A society may restrict the behavior of individuals who are members of the society only for the purpose of performing its designated functions as agreed to by the majority of individuals in the society, and only insofar as those behaviors violate the aforementioned values. This means that a society may abrogate the rights of any of its members who fail to uphold the aforementioned values.
Although value clusters generally work together so that various values reinforce one another, at times, these clusters of values may also include values that contradict one another. Value contradictions can arise between individual and communal value systems. That is, as a member of a society, group, or community, an individual can hold both a personal value system and a communal value system at the same time. In this case, the two value systems (one personal and one communal) are externally consistent provided they bear no contradictions or situational exceptions between them.
Value contradictions can also arise within individual or communal value systems. A value system is internally consistent (value consistency) when its values do not contradict each other and its exceptions are abstract enough to be used in all situations and consistently applied. Conversely, a value system by itself is internally inconsistent if its values contradict each other and its exceptions are highly situational and inconsistently applied. A value contradiction could be based on a difference in how people rank the value of things, or on a fundamental value conflict. For example, two parties might share a set of common values (say, that hockey is better than baseball, or that ice cream is better than fruit) and yet not rank those values equally. Also, two parties might disagree as to whether certain actions are right or wrong, both in theory and in practice, and find themselves in an ideological or physical conflict.
Conflicts are often a result of differing value systems. An example conflict would be a value system based on individualism pitted against a value system based on collectivism. A rational value system organized to resolve the conflict between two such value systems might take this form: Individuals may act freely unless their actions harm others or interfere with others’ freedom or with functions of society that individuals need, provided those functions do not themselves interfere with these prescribed individual rights and were agreed to by a majority of the individuals.
Protestors clash with police at the 1999 WTO summit in Seattle
People whose personal values conflict with communal values may try to change communal values through protest.
Life of George Washington–The farmer
This picture, by French artist Régnier, shows George Washington standing among African American field workers. The practice of slavery represents a value contradiction between wealth and liberty.
3.5.4: Emerging Values
Values tend to change over time, and the dominant values in a country might shift as that country undergoes economic and social change.
Learning Objective
Criticize materialist values for the sake of argument
Key Points
- Millennials and Baby Boomers grew up under different conditions and therefore have different values.
- People who grow up worrying about meeting their basic material needs will tend to have materialist values that emphasize survival and meeting basic needs.
- People who grow up without having to worry about meeting basic material needs will tend to have post-materialist values such as self-expression.
Key Terms
- values
-
A collection of guiding principles; what one deems to be correct and desirable in life, especially regarding personal conduct.
- autonomy
-
Self-government; freedom to act or function independently.
Example
- The difference between materialist and post-materialist values can often be witnessed in family dinner conversations, which reveal how generational change leads to value change. A father, who grew up with only the bare necessities, may work very hard to provide for his family. He ensures that his children have what he never had: the security of having every basic need and most of their desires satisfied. But his children, growing up with such material security, develop different values. Rather than being concerned with earning a living, they are concerned with making a difference and following their dreams. At family dinners, the father may urge his children to pursue practical courses in college that will prepare them for dependable jobs, whereas the children may argue in favor of pursuing interesting courses that may lack practical application.
Values tend to change over time. The dominant values in a country may shift as that country undergoes economic and social change. Often, such value change can be observed in generational differences. For example, most young adults today share similar values. They are sometimes referred to as Generation Y or Millennials. This generation was born in the 1980s and 1990s and was raised in a much more technologically advanced environment.
Millennials (Generation Y)
This generation was born in the 1980s and 1990s, a time of major technological advancement.
Millennials tend to have different values than the previous generation. Some common, notable tendencies are:
- wanting to “make a difference” or have purpose
- wanting to balance work with the rest of life
- excessive seeking of fun and variety
- questioning authority or refusal to respond to authority without “good reason”
- unlimited ambition coupled with overly demanding, confrontational personality
- lack of commitment in the face of unmet expectations
- extreme sense of loyalty to family, friends, and self
By contrast, their parents or grandparents tend to belong to the Baby Boom generation, born between 1946 and 1964. Baby Boomers did not grow up with the same technologies as today’s youth. Instead, they came of age during the 1960s and 1970s, and their values were often formed in support of or reaction to the political and social issues of the time. Whereas the generation before the Baby Boom was concerned with economic and physical security, Boomers tend to have what are referred to as post-materialist values.
Civil Rights Movement
The right to assembly protects citizens’ rights to gather together to peacefully protest. This right was frequently exercised during the Civil Rights Movement (depicted here).
Post-materialist values emphasize non-material values like freedom and the ability to express oneself. The rising prosperity of the post-WWII years fostered these values by liberating people from the overriding concern for material security. Sociologists explain the rise of post-materialist values in two ways. First, they argue that individuals pursue various goals in order of basic necessity. While people may universally aspire to freedom and autonomy, the most pressing material needs like hunger, thirst, and physical security have to be satisfied first, since they are immediately linked with survival. These materialistic goals will have priority over post-materialist goals like belonging, esteem, and aesthetic and intellectual satisfaction. Once satisfaction has been achieved from these material survival needs, focus will gradually shift to the nonmaterial.
Second, sociologists suggest that people’s basic values are largely fixed when they reach adulthood, and change relatively little thereafter. For example, those who experience economic scarcity in childhood may as adults place a high value on meeting economic needs (such as valuing economic growth above protecting the environment) and on safety needs (such as supporting more authoritarian styles of leadership or exhibiting strong feelings of national pride—e.g., maintaining a strong army or willingness to sacrifice civil liberties for the sake of law and order). On the other hand, those who mainly experienced sustained material affluence during youth might give high priority to values such as individual improvement, personal freedom, citizen input in government decisions, the ideal of a society based on humanism, and maintaining a clean and healthy environment. Because values are set when people are young, value change can be slow. The values we see emerging today may depend on material conditions nearly a generation ago.
3.5.5: Culture Wars
In American usage, “culture war” refers to the claim that there is a conflict between values considered conservative and those considered liberal.
Learning Objective
Support the notion of a culture war by giving an example from your own contemporary society
Key Points
- A culture war is a struggle between two sets of conflicting cultural values.
- Italian Marxist Antonio Gramsci argued for a culture war in which anti-capitalist elements seek to gain a dominant voice in the mass media, education, and other mass institutions.
- Members of the religious right accused their political opponents of undermining tradition, Western civilization, and family values.
- James Davison Hunter argued that on an increasing number of “hot-button” defining issues, such as abortion, gun politics, separation of church and state, privacy, recreational drug use, homosexuality, and censorship issues, there existed two definable polarities.
Key Terms
- kulturkampf
-
A conflict between secular and religious authorities, especially the struggle between the Roman Catholic Church and the German government under Bismarck.
- progressive
-
Favoring or promoting progress; advanced.
- religious right
-
The religious or Christian right is a term used in the United States to describe right-wing Christian political groups that are characterized by their strong support of socially conservative policies. Christian conservatives principally seek to apply their understanding of the teachings of Christianity to politics and public policy by proclaiming the value of those teachings and/or by seeking to use those teachings to influence law and public policy.
Example
- At the 1992 Republican National Convention, conservative pundit Patrick Buchanan gave a landmark speech that is now often referred to as his “culture war speech.” In it, he defined the battle lines between the two sides in the culture war, which he claimed was being fought by Republicans and Democrats. On one side, Republicans (and their 1992 presidential candidate George H.W. Bush) believed in the importance of religion and traditional family values. They wanted parents to have the option of sending children to private, religious schools (at state expense), and they opposed legalized abortion and equal rights for gay and lesbian people.
A culture war is a struggle between two sets of conflicting cultural values. This can be framed to describe west versus east, rural versus urban, or traditional values versus progressive secularism. The concept of a culture war has been in use in English since at least its adoption as a calque (loan translation) to refer to the German “Kulturkampf.”
In the 1920s, the Italian Marxist Antonio Gramsci presented a theory of cultural hegemony, stating that a culturally diverse society can be dominated by one class that holds a monopoly over the mass media and popular culture. Gramsci argued for a culture war in which anti-capitalist elements seek to gain a dominant voice in the mass media, education, and other mass institutions.
As an American phenomenon, it originated in the 1920s when urban and rural American values came into clear conflict. In American usage, the term culture war is used to claim that there is a conflict between those values considered traditionalist or conservative and those considered progressive or liberal. In the 1980s, the culture war in America was characterized by the conservative climate during the presidency of Ronald Reagan. Members of the religious right often criticized academics and artists, and their works, in a struggle against what they considered indecent, subversive, and blasphemous. They often accused their political opponents of undermining tradition, Western civilization and family values.
The expression was introduced again by the 1991 publication of Culture Wars: The Struggle to Define America by James Davison Hunter, a sociologist at the University of Virginia. Hunter described what he saw as a dramatic realignment and polarization that had transformed American politics and culture. He argued that on an increasing number of “hot-button” defining issues, such as abortion, gun politics, separation of church and state, privacy, recreational drug use, homosexuality, and censorship issues, there existed two definable polarities. Furthermore, not only were there a number of divisive issues, but society had divided along essentially the same lines on these issues, so as to constitute two warring groups, defined primarily not by religion, ethnicity, social class, or even political affiliation, but rather by ideological world views.
Culture Wars
So-called red state/blue state maps have become popular for showing election results. Some suggest that the red state/blue state divide maps the battle lines in the culture wars.
3.5.6: Values as Binders
Cultures hold values that are largely shared by their members, thereby binding members together.
Learning Objective
Compose a scenario which illustrates a potential clash between personal and cultural/societal values
Key Points
- Values and value systems are guidelines that determine what is important in a society, reflecting a person’s sense of right and wrong, or what “ought” to be.
- Types of values include ethical/moral value, doctrinal/ideological (religious or political) values, social values, and aesthetic values.
- While a personal value system is held by and applied to one individual only, a communal or cultural value system is held by and applied to a community/group/society.
- Cultures have values that are largely shared by their members, thereby binding members together. Members take part in a culture even if each member’s personal values do not entirely agree with some of the normative values sanctioned in the culture.
- Values are related to the norms of a culture, but they are more global and abstract than norms.
- Values can act as blinders if people take their own personal values (or their society’s values) as universal truths and fail to recognize the diversity of values held across people and societies.
Key Terms
- value system
-
a set of consistent personal and cultural values used for the purpose of ethical or ideological integrity.
- value
-
The degree of importance given to something.
Example
- In the 1950s, few women worked outside the home. Most people, men and women alike, believed that the proper roles of women were as homemaker and mother. Gradually, though, economic necessity drove more and more women to seek jobs. As they did, many women confronted hostility, both at work and at home. They were accused of undermining the stability of the American family. Critics worried that, without mothers at home, children would grow up to be criminals. But no such catastrophe came to be. It was only the values of the time, acting as blinders, which prevented people from imagining the stability of a society in which women worked outside the home. Of course, today, the value of work has become so entrenched for women that some criticize those women who choose to stay at home instead of working. They are likewise using values as blinders.
Values and value systems are guidelines that determine what is important in a society. They can be defined as broad preferences concerning appropriate courses of action or outcomes. Values reflect a person’s sense of right and wrong, or what “ought” to be. “Equal rights for all,” “Excellence deserves admiration,” and “People should be treated with respect and dignity” are representative of values. Types of values include ethical/moral value, doctrinal/ideological (religious, political, etc.) values, social values, and aesthetic values.
Values tend to influence attitudes and behavior. For example, if you value equal rights for all and you work for an organization that treats some employees markedly better than others, this may cause internal conflict. A value system is a set of consistent personal and cultural values used for the purpose of ethical or ideological integrity. While a personal value system is held by and applied to one individual only, a communal or cultural value system is held by and applied to a community/group/society. Some communal value systems are reflected in the form of legal codes or law. As a member of a society, group, or community, an individual can hold both a personal value system and a communal value system at the same time. In this case, the two value systems (one personal and one communal) are externally consistent provided they bear no contradictions or situational exceptions between them.
Cultures have values that are largely shared by their members, thereby binding members together. Members take part in a culture even if each member’s personal values do not entirely agree with some of the normative values sanctioned in the culture. This reflects an individual’s ability to synthesize and extract aspects valuable to them from the multiple subcultures to which they belong. Values vary across individuals and cultures, and change over time; in many ways, they are aligned with belief and belief systems. Noting which people receive honor or respect can often identify the values of a society. In the US, for example, professional athletes at the top levels in some sports are honored (in the form of monetary payment) more than college professors. Surveys show that voters in the United States would be reluctant to elect an atheist as a president, suggesting that belief in God is a value.
Values are related to the norms of a culture, but they are more global and abstract than norms. Norms are rules for behavior in specific situations, while values identify what should be judged as right or wrong. Flying the national flag on a holiday is a norm, but it reflects the value of patriotism. Wearing dark clothing and appearing solemn are normative behaviors at a funeral. In certain cultures, they reflect the values of respect and support of friends and family. If a group member expresses a value that is in serious conflict with the group’s norms, the group’s authority may carry out various ways of encouraging conformity or stigmatizing the non-conforming behavior of its members. For example, transgender individuals hold the value of freedom to identify and express their gender as they choose; however, this value is not shared by much of society, and discriminatory laws and practices prevent this freedom.
Values can act as blinders if people take their own personal values (or their society’s values) as universal truths and fail to recognize the diversity of values held across people and societies. They may believe their values determine the only way to understand and act in the world, when, in fact, different people and different societies may have widely divergent values.
Blinders
Values can act as blinders if people fail to recognize the diversity of values held across people and cultures, and assume their own society’s values are universal truths.
3.5.7: Ideal vs. Real Culture
Any given culture contains a set of values that determine what is important to the society; these values can be idealized or realized.
Learning Objective
Compare the idea of an idealized and a realized value system
Key Points
- Ideal values are absolute; they bear no exceptions. These values can be codified as a strict set of proscriptions on behavior.
- A realized value system contains exceptions to resolve the contradictions between ideal values and practical realities in everyday circumstances.
- Whereas we might refer to ideal values when listing American values (or even our own values), the values that we uphold in daily life tend to be real values.
Key Terms
- real values
-
values that contain exceptions to resolve the contradictions inherent between ideal values and practical realities.
- ideal values
-
absolute values that bear no exceptions and can be codified as a strict set of proscriptions on behavior.
Example
- In America, ideal values include marriage and monogamy based on romantic love. But in reality, many marriages are based on something other than romantic love: money or convenience, for instance. And in reality, few marriages endure for life as monogamous couplings. Many spouses have affairs or divorce. None of this is to say that monogamous marriages based on romantic love do not exist. But such marriages are not universal, despite our value ideals.
Any given culture contains a set of values and value systems that determine what is important to the society as a whole. When we talk about American values, we often have in mind a set of ideal values. Ideal values are absolute; they bear no exceptions. These values can be codified as a strict set of proscriptions on behavior, and those who hold to their idealized value system and claim no exceptions are often referred to as absolutists.
An example of an ideal value is the idea of marriage and monogamy based on romantic love. In reality, many marriages are based on things other than romantic love (such as money, convenience, or social expectation), and many end in divorce. While monogamous marriages based on romantic love certainly do exist, such marriages are not universal, despite our value ideals.
The Ideal Marriage?
In ideal culture, marriage is forever, but in real culture, many marriages end in divorce.
Few things in life exist without exception. Along with every value system comes exceptions to those values. Abstract exceptions serve to reinforce the ranking of values; their definitions are generalized enough to be relevant to any and all situations. Situational exceptions, on the other hand, are ad hoc and pertain only to specific situations. With these exceptions, real values emerge. A realized value system, as opposed to an ideal value system, contains exceptions to resolve the contradictions between ideal values and practical realities in everyday circumstances.
Whereas we might refer to ideal values when listing American values (or even our own values), the values that we uphold in daily life tend to be real values. The difference between these two types of systems can be seen when people state that they hold one value system, yet in practice deviate from it, thus holding a different value system. For example, a religion lists an absolute set of values, while the practice of that religion may include exceptions.
Chapter 2: Sociological Research
2.1: The Research Process
2.1.1: Defining the Problem
Defining a sociological problem helps frame a question to be addressed in the research process.
Learning Objective
Explain how the definition of the problem relates to the research process
Key Points
- The first step of the scientific method is to ask a question, describe a problem, and identify the specific area of interest. The topic should be narrow enough to study within the context of a particular test but also broad enough to have a more general practical or theoretical merit.
- For many sociologists, the goal is to conduct research which may be applied directly to social policy and welfare, while others focus primarily on refining the theoretical understanding of social processes. Subject matter ranges from the micro level to the macro level.
- Like other sciences, sociology relies on the systematic, careful collection of measurements or counts of relevant quantities to be considered valid. Given that sociology deals with topics that are often difficult to measure, this generally involves operationalizing relevant terms.
Key Terms
- operationalization
-
In the humanities and social sciences, operationalization is the process of defining a fuzzy concept so as to make the concept clearly distinguishable or measurable and to understand it in terms of empirical observations.
- operational definition
-
A definition of something — such as a variable, term, or object — in terms of the specific process or set of validation tests used to determine its presence and quantity.
Defining the problem is necessarily the first step of the research process. After the problem and research question are defined, scientists generally gather information and other observations, form hypotheses, test hypotheses by collecting data in a reproducible manner, analyze and interpret that data, and draw conclusions that serve as a starting point for new hypotheses.
The Scientific Method is an Essential Tool in Research
This image lists the various stages of the scientific method.
The first step of the scientific method is to ask a question, describe a problem, and identify the specific area of interest. The topic should be narrow enough to study within the context of a particular test but also broad enough to have a more general practical or theoretical merit. For many sociologists, the goal is to conduct research which may be applied directly to social policy and welfare, while others focus primarily on refining the theoretical understanding of social processes. Subject matter ranges from the micro level of individual agency and interaction to the macro level of systems and the social structure.
Like other sciences, sociology relies on the systematic, careful collection of measurements or counts of relevant quantities to be considered valid. Given that sociology deals with topics that are often difficult to measure, this generally involves operationalizing relevant terms. Operationalization is a process that describes or defines a concept in terms of the physical or concrete steps it takes to objectively measure it, as opposed to some more vague, inexact, or idealized definition. The operational definition thus identifies an observable condition of the concept. By operationalizing a variable of the concept, all researchers can collect data in a systematic or replicable manner.
For example, intelligence cannot be directly quantified. We cannot say, simply by observing, exactly how much more intelligent one person is than another. But we can operationalize intelligence in various ways. For instance, we might administer an IQ test, which uses specific types of questions and scoring processes to give a quantitative measure of intelligence. Or we might use years of education as a way to operationalize intelligence, assuming that a person with more years of education is also more intelligent. Of course, others might dispute the validity of these operational definitions of intelligence by arguing that IQ or years of education are not good measures of intelligence. After all, a very intelligent person may not have the means or inclination to pursue higher education, or a less intelligent person may stay in school longer because they have trouble completing graduation requirements. In most cases, the way we choose to operationalize variables can be contested; few operational definitions are perfect. But we must use the best approximation we can in order to have some sort of measurable quantity for otherwise unmeasurable variables.
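To make the idea of operationalization concrete, here is a minimal sketch in Python. It is only an illustration under assumptions of our own: the field names (“iq_score,” “years_of_education”) and the two example records are invented, not drawn from any real study.

```python
# Hypothetical illustration: two alternative operational definitions of the
# fuzzy concept "intelligence," so that it can be measured and compared.
# All field names and values below are invented for this sketch.

def intelligence_by_iq(person: dict) -> float:
    """Operationalize intelligence as a standardized IQ test score."""
    return person["iq_score"]

def intelligence_by_schooling(person: dict) -> float:
    """Operationalize intelligence as years of formal education completed."""
    return person["years_of_education"]

sample = [
    {"name": "A", "iq_score": 112, "years_of_education": 16},
    {"name": "B", "iq_score": 98, "years_of_education": 19},
]

# The two operational definitions can rank the same people differently,
# which is exactly why the choice of operationalization can be contested.
for person in sample:
    print(person["name"], intelligence_by_iq(person), intelligence_by_schooling(person))
```

Whichever definition a researcher chooses, writing it down this explicitly is what allows other researchers to collect and compare data in a replicable way.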
2.1.2: Reviewing the Literature
Sociological researchers review past work in their area of interest and include this “literature review” in the presentation of their research.
Learning Objective
Explain the purpose of literature reviews in sociological research
Key Points
- Literature reviews showcase researchers’ knowledge and understanding of the existing body of scholarship that relates to their research questions.
- A thorough literature review demonstrates the ability to research and synthesize. Furthermore, it provides a comprehensive overview of what is and is not known, and why the research in question is important to begin with.
- Literature reviews offer an explanation of how the researcher can contribute toward the existing body of scholarship by pursuing their own thesis or research question.
Key Terms
- disciplinary
-
Of or relating to an academic field of study.
- Theses
-
A dissertation or thesis is a document submitted in support of candidature for an academic degree or professional qualification presenting the author’s research and findings. The term thesis is also used to refer to the general claim of an essay or similar work.
- essay
-
A written composition of moderate length exploring a particular issue or subject.
A literature review is a logical and methodical way of organizing what has been written about a topic by scholars and researchers. Literature reviews can normally be found at the beginning of many essays, research reports, or theses. In writing the literature review, the purpose is to convey what a researcher has learned through a careful reading of a set of articles, books, and other relevant forms of scholarship related to the research question. Furthermore, creating a literature review allows researchers to demonstrate the ability to find significant articles, valid studies, or seminal books that are related to their topic, as well as the analytic skill to synthesize and summarize different views on a topic or issue.
Library Research
Good literature reviews require exhaustive research. Online resources make this process easier, but researchers must still sift through stacks in libraries.
A strong literature review has the following properties:
- It is organized around issues, themes, factors, or variables that are related directly to the thesis or research question.
- It demonstrates the researcher’s familiarity with the body of knowledge by providing a good synthesis of what is and is not known about the subject in question, while also identifying areas of controversy and debate, or limitations in the literature sharing different perspectives.
- It indicates the theoretical framework that the researcher is working with.
- It places the formation of research questions in their historical and disciplinary context.
- It identifies the most important authors engaged in similar work.
- It offers an explanation of how the researcher can contribute toward the existing body of scholarship by pursuing their own thesis or research question.
2.1.3: Formulating the Hypothesis
A hypothesis is a potential answer to your research question; the research process helps you determine if your hypothesis is true.
Learning Objective
Explain how hypotheses are used in sociological research and the difference between dependent and independent variables
Key Points
- Hypotheses are testable explanations of a problem, phenomenon, or observation.
- Both quantitative and qualitative research involve formulating a hypothesis to address the research problem.
- Hypotheses that suggest a causal relationship involve at least one independent variable and at least one dependent variable; in other words, one variable which is presumed to affect the other.
- An independent variable is one whose value is manipulated by the researcher or experimenter.
- A dependent variable is a variable whose values are presumed to change as a result of changes in the independent variable.
Key Terms
- hypothesis
-
Used loosely, a tentative conjecture explaining an observation, phenomenon, or scientific problem that can be tested by further observation, investigation, or experimentation.
- dependent variable
-
In an equation, the variable whose value depends on one or more variables in the equation.
- independent variable
-
In an equation, any variable whose value is not dependent on any other in the equation.
Examples
- In his book Making Democracy Work, Robert Putnam developed a theory that social capital makes government more responsive. To demonstrate his theory, he tested several hypotheses about the ways that social capital influences government. One of his hypotheses was that regions with strong traditions of civic engagement would have more responsive, more democratic, and more efficient governments, regardless of the institutional form that government took. This is an example of a causal hypothesis. In this hypothesis, the independent (causal) variable is civic engagement and the dependent variables (or effects) are the qualities of government. To test this hypothesis, he compared twenty different regional Italian governments. All of these governments had similar institutions, but the regions had different traditions of civic engagement. In southern Italy, politics were traditionally patrimonial, whereas in northern Italy, politics were traditionally more open and citizens were more engaged. Putnam’s evidence supported his hypothesis: in the north, which had a stronger tradition of civic engagement, government was indeed more responsive and more democratic.
A hypothesis is an assumption or suggested explanation about how two or more variables are related. It is a crucial step in the scientific method and, therefore, a vital aspect of all scientific research. There are no definitive guidelines for the production of new hypotheses. The history of science is filled with stories of scientists claiming a flash of inspiration, or a hunch, which then motivated them to look for evidence to support or refute the idea.
The Scientific Method is an Essential Tool in Research
This image lists the various stages of the scientific method.
While there is no single way to develop a hypothesis, a useful hypothesis will use deductive reasoning to make predictions that can be experimentally assessed. If results contradict the predictions, then the hypothesis under examination is incorrect or incomplete and must be revised or abandoned. If results confirm the predictions, then the hypothesis might be correct but is still subject to further testing.
Both quantitative and qualitative research involve formulating a hypothesis to address the research problem. A hypothesis will generally provide a causal explanation or propose some association between two variables. Variables are measurable phenomena whose values can change under different conditions. For example, if the hypothesis is a causal explanation, it will involve at least one dependent variable and one independent variable. In research, independent variables are the cause of the change. The dependent variable is the effect, or thing that is changed. In other words, the value of a dependent variable depends on the value of the independent variable. Of course, this assumes that there is an actual relationship between the two variables. If there is no relationship, then the value of the dependent variable does not depend on the value of the independent variable.
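The relationship between independent and dependent variables can be made concrete with a small sketch. The regions, engagement levels, and responsiveness scores below are invented purely for illustration (loosely echoing the Putnam example above) and are not taken from any actual dataset.

```python
# Toy illustration of a causal hypothesis: civic engagement (independent
# variable) is expected to predict government responsiveness (dependent
# variable). All regions and scores are invented for this sketch.
regions = [
    {"region": "North-1", "civic_engagement": "high", "responsiveness": 8.2},
    {"region": "North-2", "civic_engagement": "high", "responsiveness": 7.9},
    {"region": "South-1", "civic_engagement": "low", "responsiveness": 5.1},
    {"region": "South-2", "civic_engagement": "low", "responsiveness": 4.6},
]

# Compare the dependent variable across levels of the independent variable.
for level in ("high", "low"):
    scores = [r["responsiveness"] for r in regions if r["civic_engagement"] == level]
    print(level, round(sum(scores) / len(scores), 2))
```

If responsiveness did not differ across engagement levels, the data would fail to support the hypothesized causal relationship.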
2.1.4: Determining the Research Design
The research design is the methodology and procedure a researcher follows to answer their sociological question.
Learning Objective
Compare and contrast quantitative methods and qualitative methods
Key Points
- Research design defines the study type, research question, hypotheses, variables, and data collection methods. Some examples of research designs include descriptive, correlational, and experimental. Another distinction can be made between quantitative and qualitative methods.
- Sociological research can be conducted via quantitative or qualitative methods. Quantitative methods are useful when a researcher seeks to study large-scale patterns of behavior, while qualitative methods are more effective when dealing with interactions and relationships in detail.
- Quantitative methods include experiments, surveys, and statistical analysis, among others. Qualitative methods include participant observation, interviews, and content analysis.
- An interpretive framework is one that seeks to understand the social world from the perspective of participants.
- Although sociologists often specialize in one approach, many sociologists use a complementary combination of design types and research methods in their research. Even in the same study a researcher may employ multiple methods.
Key Terms
- scientific method
-
A method of discovering knowledge about the natural world based in making falsifiable predictions (hypotheses), testing them empirically, and developing peer-reviewed theories that best explain the known data.
- qualitative methods
-
Qualitative research is a method of inquiry employed in many different academic disciplines, traditionally in the social sciences, but also in market research and further contexts. Qualitative researchers aim to gather an in-depth understanding of human behavior and the reasons that govern such behavior. The qualitative method investigates the why and how of decision making, not just what, where, and when. Hence, smaller but focused samples are more often needed than large samples.
- quantitative methods
-
Quantitative research refers to the systematic empirical investigation of social phenomena via statistical, mathematical, or computational techniques.
Examples
- One of the best known examples of a quantitative research instrument is the United States Census, which is taken every 10 years. The Census form asks every person in the United States for some very basic information, such as age and race. These responses are collected and added together to calculate useful statistics, such as the national poverty rate. These statistics are then used to make important policy decisions.
- One of the most intensive forms of qualitative research is participant observation. In this method of research, the researcher actually becomes a member of the group she or he is studying. In 1981, sociologist Elijah Anderson published a book called A Place on the Corner, which was based on participant observation of an urban neighborhood. Anderson spent three years hanging out on the south side of Chicago, blending in with the neighborhood’s other regulars. He frequented a bar and liquor store, which he called “Jelly’s corner.” His book is based on his interactions with and more formal interviews with the men who were regulars at Jelly’s corner. By essentially becoming one of them, Anderson was able to gain deep and detailed insight into their small community and how it worked.
A research design encompasses the methodology and procedure employed to conduct scientific research. Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methods of obtaining knowledge. In general, scientific researchers propose hypotheses as explanations of phenomena, and design research to test these hypotheses via predictions which can be derived from them.
The design of a study defines the study type, research question and hypotheses, independent and dependent variables, and data collection methods. There are many ways to classify research designs, but some examples include descriptive (case studies, surveys), correlational (observational study), semi-experimental (field experiment), experimental (with random assignment), review, and meta-analytic, among others. Another distinction can be made between quantitative methods and qualitative methods.
The Scientific Method is an Essential Tool in Research
This image lists the various stages of the scientific method.
Quantitative Methods
Quantitative methods are generally useful when a researcher seeks to study large-scale patterns of behavior, while qualitative methods are often more effective when dealing with interactions and relationships in detail. Quantitative methods of sociological research approach social phenomena from the perspective that they can be measured and quantified. For instance, socio-economic status (often referred to by sociologists as SES) can be divided into different groups such as working-class, middle-class, and wealthy, and can be measured using any of a number of variables, such as income and educational attainment.
Qualitative Methods
Qualitative methods are often used to develop a deeper understanding of a particular phenomenon. They also often deliberately sacrifice the large samples needed for statistical analysis in order to reach greater depth in the analysis of the phenomenon being studied. While quantitative methods involve experiments, surveys, secondary data analysis, and statistical analysis, qualitatively oriented sociologists tend to employ different methods of data collection and hypothesis testing, including participant observation, interviews, focus groups, content analysis, and historical comparison.
Qualitative sociological research is often associated with an interpretive framework, which is more descriptive or narrative in its findings. In contrast to the scientific method, which follows the hypothesis-testing model in order to find generalizable results, the interpretive framework seeks to understand social worlds from the point of view of participants.
Although sociologists often specialize in one approach, many sociologists use a complementary combination of design types and research methods in their research. Even in the same study a researcher may employ multiple methods.
2.1.5: Defining the Sample and Collecting Data
Defining the sample and collecting data are key parts of all empirical research, both qualitative and quantitative.
Learning Objective
Describe different types of research samples
Key Points
- It is important to determine the scope of a research project when developing the question. The choice of method often depends largely on what the researcher intends to investigate, and quantitative and qualitative research projects require different subject selection techniques.
- While quantitative research generally requires a fairly large sample (a common rule of thumb is at least 30 subjects) to support statistical analysis, qualitative research generally takes a more in-depth approach to fewer subjects.
- For both qualitative and quantitative research, sampling can be used. The stages of the sampling process are defining the population of interest, specifying the sampling frame, determining the sampling method and sample size, and sampling and data collecting.
- There are various types of samples, including probability and nonprobability samples. Examples of types of samples include simple random samples, stratified samples, cluster samples, and convenience samples.
- Good data collection involves following the defined sampling process, keeping the data in order, and noting comments and non-responses. Errors and biases can result in the data. Sampling errors and biases are induced by the sample design. Non-sampling errors can also affect results.
Key Terms
- bias
-
The difference between the expectation of the sample estimator and the true population value, which reduces the representativeness of the estimator by systematically distorting it.
- data collection
-
Data collection is a term used to describe a process of preparing and collecting data.
- sample
-
A subset of a population selected for measurement, observation or questioning, to provide statistical information about the population.
Social scientists employ a range of methods in order to analyze a vast breadth of social phenomena. Many empirical forms of sociological research follow the scientific method. Scientific inquiry is generally intended to be as objective as possible in order to reduce the biased interpretations of results. Sampling and data collection are a key component of this process.
The Scientific Method is an Essential Tool in Research
This image lists the various stages of the scientific method.
It is important to determine the scope of a research project when developing the question. The choice of method often depends largely on what the researcher intends to investigate. For example, a researcher concerned with drawing a statistical generalization across an entire population may administer a survey questionnaire to a representative sample population. By contrast, a researcher who seeks full contextual understanding of the social actions of individuals may choose ethnographic participant observation or open-ended interviews. These two types of studies will yield different types of data. While quantitative research generally requires a fairly large sample (a common rule of thumb is at least 30 subjects) to support statistical analysis, qualitative research generally takes a more in-depth approach to fewer subjects.
In both cases, it behooves the researcher to create a concrete list of goals for collecting data. For instance, a researcher might identify what characteristics should be represented in the subjects. Sampling can be used in both quantitative and qualitative research. In statistics and survey methodology, sampling is concerned with the selection of a subset of individuals from within a statistical population to estimate characteristics of the whole population. The stages of the sampling process are defining the population of interest, specifying the sampling frame, determining the sampling method and sample size, and sampling and data collecting.
Collecting Data
Natural scientists collect data by measuring and recording a sample of the thing they’re studying, such as plants or soil. Similarly, sociologists must collect a sample of social information, often by surveying or interviewing a group of people.
There are various types of samples. A probability sample is one in which every unit in the population has a chance (greater than zero) of being selected in the sample, and this probability can be accurately determined. Nonprobability sampling is any sampling method in which some elements of the population have no chance of selection or in which the probability of selection cannot be accurately determined. Examples of types of samples include simple random samples, stratified samples, cluster samples, and convenience samples.
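To see the difference between two of these sample types concretely, here is a minimal sketch using Python’s standard library. The population, its size, and the age-group strata are all invented for illustration.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population of 1,000 people, each labeled with a stratum
# (here, an age group) purely for illustration.
population = [
    {"id": i, "age_group": random.choice(["18-34", "35-54", "55+"])}
    for i in range(1000)
]

# Simple random sample: every unit has an equal, known chance of selection.
simple_random = random.sample(population, k=100)

# Stratified sample: draw separately within each stratum so that every
# group is guaranteed representation in the final sample.
strata = {}
for person in population:
    strata.setdefault(person["age_group"], []).append(person)

stratified = []
for group, members in strata.items():
    stratified.extend(random.sample(members, k=min(30, len(members))))

print(len(simple_random), len(stratified))
```

Stratifying is a common way to guard against a simple random draw underrepresenting a small but important group purely by chance.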
Good data collection involves following the defined sampling process, keeping the data in time order, noting comments and other contextual events, and recording non-responses. Errors and biases can result in the data. Sampling errors and biases, such as selection bias and random sampling error, are induced by the sample design. Non-sampling errors are other errors which can impact the results, caused by problems in data collection, processing, or sample design.
2.1.6: Analyzing Data and Drawing Conclusions
Data analysis in sociological research aims to identify meaningful sociological patterns.
Learning Objective
Compare and contrast the analysis of quantitative vs. qualitative data
Key Points
- Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of highlighting useful information, suggesting conclusions, and supporting decision making. Data analysis is a process, within which several phases can be distinguished.
- One way in which analysis can vary is by the nature of the data. Quantitative data is often analyzed using regressions. Regression analyses measure relationships between dependent and independent variables, taking the existence of unknown parameters into account.
- Qualitative data can be coded–that is, key concepts and variables are assigned a shorthand, and the data gathered are broken down into those concepts or variables. Coding allows sociologists to perform a more rigorous scientific analysis of the data.
- Sociological data analysis is designed to produce patterns. It is important to remember, however, that correlation does not imply causation; in other words, just because variables change at a proportional rate, it does not follow that one variable influences the other.
- Without a valid design, valid scientific conclusions cannot be drawn. Internal validity concerns the degree to which conclusions about causality can be made. External validity concerns the extent to which the results of a study are generalizable.
Key Terms
- correlation
-
A reciprocal, parallel or complementary relationship between two or more comparable objects.
- causation
-
The act of causing; also the act or agency by which an effect is produced.
- Regression analysis
-
In statistics, regression analysis includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables are held fixed.
Example
- When analyzing data and drawing conclusions, researchers look for patterns. Many hope to find causal patterns. But they must be cautious not to mistake correlation for causation. To better understand the difference between correlation and causation, consider this example. In a certain city, when more ice cream cones are sold, more shootings are reported. Surprised? Don’t be. This relationship is a correlation, and it does not necessarily imply causation. The shootings aren’t caused by the ice cream cone sales; they just happen to occur at the same time. Why? In this case, it’s because of a third variable: temperature. Both shootings and ice cream cone sales tend to increase when the temperature goes up. Temperature may causally influence each of them, but neither causes the other (a small simulation of this pattern follows below).
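The ice-cream-and-shootings pattern can be reproduced with a short simulation. Everything in the sketch below is made up: the numbers are chosen only so that a hidden third variable (temperature) drives both outcomes, while the outcomes never influence one another.

```python
import random

random.seed(0)

# Simulate 365 days. In this toy model, temperature drives both ice cream
# sales and the number of reported incidents; the two outcomes never
# influence each other directly.
days = []
for _ in range(365):
    temperature = random.uniform(0, 35)                      # degrees Celsius
    ice_cream_sales = 20 + 3 * temperature + random.gauss(0, 5)
    incidents = 1 + 0.2 * temperature + random.gauss(0, 1)
    days.append((ice_cream_sales, incidents))

# Pearson correlation between sales and incidents, computed by hand.
n = len(days)
mean_x = sum(x for x, _ in days) / n
mean_y = sum(y for _, y in days) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in days) / n
std_x = (sum((x - mean_x) ** 2 for x, _ in days) / n) ** 0.5
std_y = (sum((y - mean_y) ** 2 for _, y in days) / n) ** 0.5

# The correlation comes out strongly positive even though neither outcome
# causes the other; the confounding variable (temperature) does all the work.
print(round(cov / (std_x * std_y), 2))
```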
The Process of Data Analysis
Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of highlighting useful information, suggesting conclusions, and supporting decision making. In statistical applications, some people divide data analysis into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA). EDA focuses on discovering new features in the data and CDA focuses on confirming or falsifying existing hypotheses. Predictive analytics focuses on the application of statistical or structural models for predictive forecasting or classification. Text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a species of unstructured data.
Data analysis is a process, within which several phases can be distinguished. The initial data analysis phase is guided by examining, among other things, the quality of the data (for example, the presence of missing or extreme observations), the quality of measurements, and if the implementation of the study was in line with the research design. In the main analysis phase, either an exploratory or confirmatory approach can be adopted. Usually the approach is decided before data is collected. In an exploratory analysis, no clear hypothesis is stated before analyzing the data, and the data is searched for models that describe the data well. In a confirmatory analysis, clear hypotheses about the data are tested.
Regression Analysis
The type of data analysis employed can vary. One way in which analysis often varies is by the quantitative or qualitative nature of the data.
Quantitative data can be analyzed in a variety of ways, regression analysis being among the most popular. Regression analyses measure relationships between dependent and independent variables, taking the existence of unknown parameters into account. More specifically, regression analysis helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables are held fixed.
Linear Regression
This graph illustrates random data points and their linear regression.
A large body of techniques for carrying out regression analysis has been developed. In practice, the performance of regression analysis methods depends on the form of the data generating process and how it relates to the regression approach being used. Since the true form of the data-generating process is generally not known, regression analysis often depends to some extent on making assumptions about this process. These assumptions are sometimes testable if a large amount of data is available. Regression models for prediction are often useful even when the assumptions are moderately violated, although they may not perform optimally. However, in many applications, especially with small effects or questions of causality based on observational data, regression methods give misleading results.
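For a sense of what the simplest case looks like in practice, here is a sketch of an ordinary least squares fit with one independent variable. The years-of-education and income figures are invented for illustration and do not come from any real survey.

```python
# Minimal ordinary least squares fit with one independent variable, using
# invented data: years of education (independent) vs. income (dependent).
education = [10, 12, 12, 14, 16, 16, 18, 20]   # years of schooling
income = [28, 34, 31, 40, 47, 52, 55, 63]      # income in thousands of dollars

n = len(education)
mean_x = sum(education) / n
mean_y = sum(income) / n

# Slope and intercept from the standard least-squares formulas.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(education, income))
         / sum((x - mean_x) ** 2 for x in education))
intercept = mean_y - slope * mean_x

print(f"predicted income = {intercept:.1f} + {slope:.1f} * years_of_education")
```

The fitted slope summarizes how the typical value of the dependent variable changes when the independent variable increases by one unit, with everything else in this simple model held fixed.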
Coding
Qualitative data can involve coding–that is, key concepts and variables are assigned a shorthand, and the data gathered are broken down into those concepts or variables. Coding allows sociologists to perform a more rigorous scientific analysis of the data. Coding is the process of categorizing qualitative data so that the data becomes quantifiable and thus measurable. Of course, before researchers can code raw data such as taped interviews, they need to have a clear research question. How data is coded depends entirely on what the researcher hopes to discover in the data; the same qualitative data can be coded in many different ways, calling attention to different aspects of the data.
Coding Qualitative Data
Qualitative data can be coded, or sorted into categories. Coded data is quantifiable. In this bar chart, help requests have been coded and categorized so we can see which types of help requests are most common.
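As a small illustration of how coding makes qualitative data countable, consider the following sketch. The interview excerpts and the code labels are entirely invented; a real coding scheme would come from the researcher’s question and a careful reading of the data.

```python
from collections import Counter

# Hypothetical coded interview excerpts: each raw statement has been assigned
# one shorthand code by the researcher. Statements and codes are invented.
coded_responses = [
    ("I just want a job that pays the bills", "economic_security"),
    ("Work should leave time for my family", "work_life_balance"),
    ("I'd quit tomorrow if the work felt meaningless", "purpose"),
    ("Benefits and a steady paycheck matter most", "economic_security"),
    ("I want my work to make a difference", "purpose"),
]

# Once coded, qualitative data becomes countable, and therefore comparable.
code_counts = Counter(code for _, code in coded_responses)
for code, count in code_counts.most_common():
    print(code, count)
```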
Sociological Data Analysis
Sociological data analysis is designed to produce patterns. It is important to remember, however, that correlation does not imply causation; in other words, just because variables change at a proportional rate, it does not follow that one variable influences the other.
Conclusions
In terms of the kinds of conclusions that can be drawn, a study and its results can be assessed in multiple ways. Without a valid design, valid scientific conclusions cannot be drawn. Internal validity is an inductive estimate of the degree to which conclusions about causal relationships can be made (e.g., cause and effect), based on the measures used, the research setting, and the whole research design. External validity concerns the extent to which the (internally valid) results of a study can be held to be true for other cases, such as to different people, places, or times. In other words, it is about whether findings can be validly generalized. Learning about and applying statistics (as well as knowing their limitations) can help you better understand sociological research and studies. Knowledge of statistics helps you make sense of the numbers in terms of relationships, and it allows you to ask relevant questions about sociological phenomena.
2.1.7: Preparing the Research Report
Sociological research publications generally include a literature review, an overview of the methodology followed, the results and an analysis of those results, and conclusions.
Learning Objective
Describe the main components of a sociological research paper
Key Points
- Like any research paper, a sociological research report typically consists of a literature review, an overview of the methods used in data collection, and analysis, findings, and conclusions.
- A literature review is a creative way of organizing what has been written about a topic by scholars and researchers.
- The methods section is necessary to demonstrate how the study was conducted, including the population, sample frame, sample method, sample size, data collection method, and data processing and analysis.
- In the findings and conclusion sections, the researcher reviews all significant findings, notes and discusses all shortcomings, and suggests future research.
Key Terms
- literature review
-
A literature review is a body of text that aims to review the critical points of current knowledge including substantive findings as well as theoretical and methodological contributions to a particular topic.
- quantitative
-
Of a measurement based on some quantity or number rather than on some quality.
- methodology
-
A collection of methods, practices, procedures, and rules used by those who work in some field.
Like any research paper, sociological research is presented with a literature review, an overview of the methods used in data collection, and analysis, findings, and conclusions. Quantitative research papers are usually highly formulaic, with a clear introduction (including presentation of the problem and literature review); sampling and methods; results; and discussion and conclusion. In striving to be as objective as possible in order to reduce biased interpretations of results, sociological research papers follow the scientific method. Research reports may be published as books or journal articles, given directly to a client, or presented at professional meetings.
The Scientific Method is an Essential Tool in Research
The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge.
A literature review is a creative way of organizing what has been written about a topic by scholars and researchers. You will find literature reviews at the beginning of many essays, research reports, or theses. In writing the literature review, your purpose is to convey to your reader what you have learned through a careful reading of a set of articles related to your research question.
A strong literature review has the following properties:
- It is organized around issues, themes, factors, or variables that are related directly to your thesis or research question.
- It shows the path of prior research and how the current project is linked.
- It provides a good synthesis of what is, and is not, known.
- It indicates the theoretical framework with which you are working.
- It identifies areas of controversy and debate, as well as limitations in the literature and differing perspectives.
- It places the formation of research questions in their historical context.
- It identifies the authors engaged in similar work.
The methods section is necessary to demonstrate how the study was conducted and that the data is valid for study. Without assurance that the research is based on sound methods, readers cannot trust any conclusions the researcher proposes. In the methodology section, be sure to include: the population, sample frame, sample method, sample size, data collection method, and data processing and analysis. This is also a section in which to clearly present information in table and graph form.
In the findings and conclusion sections, the researcher reviews all significant findings, notes and discusses all shortcomings, and suggests future research. The conclusion section is the only section where opinions can be expressed and persuasive writing is tolerated.
2.2: Research Models
2.2.1: Surveys
The goal of a survey is to collect data from a representative sample of a population to draw conclusions about that larger population.
Learning Objective
Assess the various types of surveys and sampling methods used in sociological research, appealing to the concepts of reliability and validity
Key Points
- The sample of people surveyed is chosen from the entire population of interest. The goal of a survey is to describe not the smaller sample but the larger population.
- To be able to generalize about a population from a smaller sample, that sample must be representative: proportionally the same in all relevant aspects (e.g., percent of women vs. men).
- Surveys can be distributed by mail, email, telephone, or in-person interview.
- Surveys can be used in cross-sectional, successive-independent-samples, and longitudinal study designs.
- Effective surveys are both reliable and valid. A reliable instrument produces consistent results every time it is administered; a valid instrument does in fact measure what it intends to measure.
Key Terms
- sample
-
A subset of a population selected for measurement, observation or questioning, to provide statistical information about the population.
- cross-sectional study
-
A research method that involves observation of a representative sample of a population at one specific point in time.
- successive-independent-samples design
-
A research method that involves observation of multiple random samples from a population over multiple time points.
- longitudinal design
-
A research method that involves observation of the same representative sample of a population over multiple time points, generally over a period of years or decades.
Selecting a Sample to Survey
The sample of people surveyed is chosen from the entire population of interest. The goal of a survey is to describe not the smaller sample but the larger population. This generalizing ability is dependent on the representativeness of the sample.
Nuclear Energy Support in the U.S.
This pie chart shows the results of a survey of people in the United States (February 2005, Bisconti Research Inc.). According to the poll, 67 percent of Americans favor nuclear energy (blue), while 26 percent oppose it (yellow).
There are frequent difficulties one encounters while choosing a representative sample. One common error is selection bias, which occurs when the procedures used to select a sample result in over- or under-representation of some significant aspect of the population. For instance, if the population of interest consists of 75% females and 25% males, and the sample consists of 40% females and 60% males, females are underrepresented while males are overrepresented. To minimize selection bias, stratified random sampling is often used: the population is divided into sub-populations called strata, and random samples are drawn from each stratum, often in proportion to its share of the population.
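As a rough illustration, the following Python sketch draws a proportional stratified sample from an invented population of 750 females and 250 males; the names and numbers are hypothetical and chosen only to mirror the proportions in the example above.

```python
# Sketch of proportional stratified sampling using Python's standard library.
# The population below is invented purely for illustration.
import random

random.seed(42)

# Strata and their (hypothetical) members in the population of interest.
population = {
    "female": [f"F{i}" for i in range(750)],   # 75% of the population
    "male":   [f"M{i}" for i in range(250)],   # 25% of the population
}

sample_size = 100
total = sum(len(members) for members in population.values())

sample = []
for stratum, members in population.items():
    # Draw from each stratum in proportion to its share of the population.
    n = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, n))

counts = {s: sum(unit.startswith(s[0].upper()) for unit in sample) for s in population}
print(counts)  # {'female': 75, 'male': 25}: the sample mirrors the population
```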
A truly representative nationwide random sample, such as a well-conducted Gallup Poll, should be able to provide an accurate estimate of public opinion whether it contacts 2,000 or 10,000 people.
Modes of Administering a Survey
There are several ways of administering a survey. The choice between administration modes is influenced by several factors, including
- costs,
- coverage of the target population,
- flexibility of asking questions,
- respondents’ willingness to participate, and
- response accuracy.
Different methods create mode effects that change how respondents answer, and different methods have different advantages. The most common modes of administration can be summarized as:
- Telephone
- Mail (post)
- Online surveys
- Personal in-home surveys
- Personal mall or street intercept survey
- Hybrids of the above
Participants willing to take the time to respond will convey personal information about religious beliefs, political views, and morals. Some topics that reflect internal thought are impossible to observe directly and are difficult to discuss honestly in a public forum. People are more likely to share honest answers if they can respond to questions anonymously.
Questionnaire
Questionnaires are a common research method; the U.S. Census is a well-known example.
Types of Studies
Cross-Sectional Design
In a cross-sectional study, a sample (or samples) is drawn from the relevant population and studied once. A cross-sectional study describes characteristics of that population at one time, but cannot give any insight as to causes of population characteristics.
Successive-Independent-Samples Design
A successive-independent-samples design draws multiple random samples from a population at one or more times. This design can study changes within a population, but not changes within individuals, because the same individuals are not surveyed more than once. Such studies therefore cannot reliably identify the causes of change over time.
For successive independent samples designs to be effective, the samples must be drawn from the same population, and must be equally representative of it. If the samples are not comparable, the changes between samples may be due to demographic characteristics rather than time. In addition, the questions must be asked in the same way so that responses can be compared directly.
Longitudinal Design
A study following a longitudinal design takes measure of the same random sample at multiple time points. Unlike with a successive independent samples design, this design measures the differences in individual participants’ responses over time. This means that a researcher can potentially assess the reasons for response changes by assessing the differences in respondents’ experiences. However, longitudinal studies are both expensive and difficult to do. It’s harder to find a sample that will commit to a months- or years-long study than a 15-minute interview, and participants frequently leave the study before the final assessment. This attrition of participants is not random, so samples can become less representative with successive assessments.
Reliability and Validity
Reliable measures of self-report are defined by their consistency. Thus, a reliable self-report measure produces consistent results every time it is administered.
A test’s reliability can be measured in a few ways. First, one can calculate test-retest reliability. Test-retest reliability entails administering the same questionnaire to a large sample at two different times. For the questionnaire to be considered reliable, people in the sample do not have to score identically on each test, but rather their position in the score distribution should be similar for both the test and the retest.
Self-report measures will generally be more reliable when they have many items measuring a construct. Furthermore, measurements will be more reliable when the factor being measured has greater variability among the individuals in the sample that are being tested. Finally, there will be greater reliability when instructions for the completion of the questionnaire are clear and when there are limited distractions in the testing environment.
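To make the idea concrete, the sketch below simulates two administrations of the same instrument to the same respondents and computes the correlation between the two sets of scores; the data are invented, and a high correlation is read as evidence of test-retest reliability.

```python
# Illustrative test-retest reliability check on simulated questionnaire scores.
import numpy as np

rng = np.random.default_rng(7)
true_trait = rng.normal(50, 10, size=200)         # the underlying construct
test = true_trait + rng.normal(0, 3, size=200)    # first administration
retest = true_trait + rng.normal(0, 3, size=200)  # second administration

# Respondents need not score identically, but their relative positions in the
# score distribution should be similar across the two administrations.
r = np.corrcoef(test, retest)[0, 1]
print(f"test-retest correlation: {r:.2f}")  # close to 1 indicates reliability
```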
By contrast, a questionnaire is valid if it measures what it was originally designed to measure. Construct validity is the degree to which a measure captures the theoretical construct it was intended to measure.
2.2.2: Fieldwork and Observation
Ethnography is a research process that uses fieldwork and observation to learn about a particular community or culture.
Learning Objective
Explain the goals and methods of ethnography
Key Points
- Ethnographic work requires intensive and often immersive long-term participation in the community that is the subject of research, typically involving physical relocation (hence the term fieldwork).
- In participant observation, the researcher immerses himself in a cultural environment, usually over an extended period of time, in order to gain a close and intimate familiarity with a given group of individuals and their practices.
- Such research involves a range of well-defined, though variable methods: interviews, direct observation, participation in the life of the group, collective discussions, analyses of personal documents produced within the group, self-analysis, and life-histories, among others.
- The advantage of ethnography as a technique is that it maximizes the researcher’s understanding of the social and cultural context in which human behavior occurs.
- The ethnographer seeks out and develops relationships with cultural insiders, or informants, who are willing to explain aspects of their community from a native viewpoint. A particularly knowledgeable informant who can connect the ethnographer with other such informants is known as a key informant.
Key Terms
- qualitative
-
Of descriptions or distinctions based on some quality rather than on some quantity.
- ethnography
-
The branch of anthropology that scientifically describes specific human cultures and societies.
Example
- Key informants are usually well-connected people who can help an ethnographer gain access to and better understand a community. For example, Sudhir Venkatesh’s key informant, JT, was the leader of the street gang Venkatesh was studying. As the leader of the gang, JT had a privileged vantage point to see, understand, and explain how the gang worked, as well as to introduce Venkatesh to other members.
Fieldwork and Observation
Ethnography is a qualitative research strategy, involving a combination of fieldwork and observation, which seeks to understand cultural phenomena that reflect the knowledge and system of meanings guiding the life of a cultural group. It was pioneered in the field of socio-cultural anthropology, but has also become a popular method in various other fields of social sciences, particularly in sociology.
Ethnographic work requires intensive and often immersive long-term participation in the community that is the subject of research, typically involving physical relocation (hence the term fieldwork). Although it often involves studying ethnic or cultural minority groups, this is not always the case. Ideally, the researcher should strive to have very little effect on the subjects of the study, being as invisible and enmeshed in the community as possible.
Participant Observation
One of the most common methods for collecting data in an ethnographic study is first-hand engagement, known as participant observation . In participant observation, the researcher immerses himself in a cultural environment, usually over an extended period of time, in order to gain a close and intimate familiarity with a given group of individuals (such as a religious, occupational, or sub-cultural group, or a particular community) and their practices.
Fieldwork and Observation
One of the most common methods for collecting data in an ethnographic study is first-hand engagement, known as participant observation.
Methods
Such research involves a range of well-defined, though variable methods: interviews, direct observation, participation in the life of the group, collective discussions, analyses of personal documents produced within the group, self-analysis, and life-histories, among others.
Interviews can be either informal or formal and can range from brief conversations to extended sessions. One way of transcribing interview data is the genealogical method. This is a set of procedures by which ethnographers discover and record connections of kinship, descent, and marriage using diagrams and symbols. Questionnaires can also be used to aid the discovery of local beliefs and perceptions and, in the case of longitudinal research where there is continuous long-term study of an area or site, they can act as valid instruments for measuring changes in the individuals or groups studied.
Advantages
The advantage of ethnography as a technique is that it maximizes the researcher’s understanding of the social and cultural context in which human behavior occurs. The ethnographer seeks out and develops relationships with cultural insiders, or informants, who are willing to explain aspects of their community from a native viewpoint. The process of seeking out new contacts through personal relationships with current informants is often effective in revealing common cultural denominators connected to the topic being studied.
2.2.3: Experiments
Experiments are tests designed to prove or disprove a hypothesis by controlling for pertinent variables.
Learning Objective
Compare and contrast how hypotheses are being tested in sociology and in the hard sciences
Key Points
- Experiments are controlled tests designed to prove or disprove a hypothesis.
- A hypothesis is a prediction or an idea that has not yet been tested.
- Researchers must attempt to identify everything that might influence the results of an experiment, and do their best to neutralize the effects of everything except the topic of study.
- Since social scientists do not seek to isolate variables in the same way that the hard sciences do, sociologists create the equivalent of an experimental control via statistical techniques that are applied after data is gathered.
- A control is when two identical experiments are conducted and the factor being tested is varied in only one of these experiments.
Key Terms
- experiment
-
A test under controlled conditions made to either demonstrate a known truth, examine the validity of a hypothesis, or determine the efficacy of something previously untried.
- control
-
A separate group or subject in an experiment against which the results are compared where the primary variable is low or nonexistent.
- hypothesis
-
Used loosely, a tentative conjecture explaining an observation, phenomenon, or scientific problem that can be tested by further observation, investigation, or experimentation.
Example
- To conduct an experiment, a scientist must be able to control experimental conditions so that only the variable being studied changes. For example, a scientist might have two identical bacterial cultures. One is the control and it remains unchanged. The other is the experimental culture and it will be subjected to a treatment, such as an antibiotic. The scientist can then compare any differences that develop and safely assume those differences are due to the effects of the antibiotic. But social life is complicated and it would be difficult for a social scientist to find two identical groups of people in order to have both an experimental and a control group. Imagine, for instance, that a sociologist wants to know whether small class size improves academic achievement. For a true experiment, she would need to find identical students and identical teachers, then put some in large classes and some in small classes. But finding identical students and teachers would be impossible! Instead, the sociologist can statistically control for differences by including variables such as students’ socioeconomic status, family background, and teachers’ evaluation scores in a regression. This technique does not make the students or teachers identical, but it allows the researcher to calculate how much of the difference in students’ achievement is due to background factors like socioeconomic status and how much is actually due to differences in class size.
Scientists form a hypothesis, which is a prediction or an idea that has not yet been tested. In order to prove or disprove the hypothesis, scientists must perform experiments. The experiment is a controlled test designed specifically to prove or disprove the hypothesis. Before undertaking the experiment, researchers must attempt to identify everything that might influence the results of an experiment and do their best to neutralize the effects of everything except the topic of study. This is done through the introduction of an experimental control: two virtually identical experiments are run, in only one of which the factor being tested is varied. This serves to further isolate any causal phenomena.
An Experiment
An experiment is a controlled test designed specifically to prove or disprove a hypothesis.
Of course, an experiment is not an absolute requirement. In observation-based fields of science, actual experiments must be designed differently than in the classical laboratory-based sciences. Due to ethical concerns and the sheer cost of manipulating large segments of society, sociologists often turn to other methods for testing hypotheses.
Since sociologists do not seek to isolate variables in the same way that hard sciences do, this kind of control is often done via statistical techniques, such as regressions, applied after data is gathered. Direct experimentation is thus fairly rare in sociology.
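The class-size example above can be sketched in code. In the simulated Python example below, socioeconomic status influences both class size and achievement, so a regression that omits it gives a distorted estimate of the class-size effect; including it as a control separates the two influences. All coefficients and data are invented for illustration.

```python
# Sketch of statistical control via regression on simulated data: a background
# variable (SES) is included so its influence is separated from class size.
import numpy as np

rng = np.random.default_rng(3)
n = 500
ses = rng.normal(0, 1, n)                        # socioeconomic status (control)
class_size = 25 - 2 * ses + rng.normal(0, 3, n)  # higher-SES students get smaller classes
achievement = 70 + 5 * ses - 0.5 * class_size + rng.normal(0, 4, n)

# Regress achievement on class size alone, then with SES added as a control.
X_naive = np.column_stack([np.ones(n), class_size])
X_ctrl = np.column_stack([np.ones(n), class_size, ses])

b_naive, *_ = np.linalg.lstsq(X_naive, achievement, rcond=None)
b_ctrl, *_ = np.linalg.lstsq(X_ctrl, achievement, rcond=None)

print(f"class-size effect without control: {b_naive[1]:.2f}")  # confounded by SES
print(f"class-size effect with SES control: {b_ctrl[1]:.2f}")  # near the true -0.5
```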
Those conducting an experiment must assume an attitude of openness and accountability. It is essential to keep detailed records in order to facilitate reporting on the experimental results and provide evidence of the effectiveness and integrity of the procedure.
2.2.4: Documents
Documentary research involves examining texts and documents as evidence of human behavior.
Learning Objective
Describe different kinds of documents used in sociological research
Key Points
- This kind of sociological research is generally considered a part of media studies.
- Unobtrusive research involves ways of studying human behavior without affecting it in the process.
- Documents can either be primary sources, which are original materials that are not created after the fact with the benefit of hindsight, or secondary sources that cite, comment, or build upon primary sources.
- Typically, sociological research involving documents falls under the cross-disciplinary purview of media studies, which encompasses all research dealing with television, books, magazines, pamphlets, or any other human-recorded data. The specific media being studied are often referred to as texts.
- Sociological research involving documents, or, more specifically, media studies, is one of the less interactive research options available to sociologists. It can provide a significant insight into the norms, values, and beliefs of people belonging to a particular historical and cultural context.
- Content analysis is the study of recorded human communications.
Key Terms
- media studies
-
Academic discipline that deals with the content, history, meaning, and effects of various media, and in particular mass media.
- content analysis
-
Content analysis or textual analysis is a methodology in the social sciences for studying the content of communication.
- documentary research
-
Documentary research involves the use of texts and documents as source materials. Source materials include: government publications, newspapers, certificates, census publications, novels, film and video, paintings, personal photographs, diaries and innumerable other written, visual, and pictorial sources in paper, electronic, or other “hard copy” form.
Documentary Research
It is possible to do sociological research without directly involving humans at all. One such method is documentary research. In documentary research, all information is collected from texts and documents. The texts and documents can be either written, pictorial, or visual in form.
The material used can be categorized as primary sources, which are original materials that are not created after the fact with the benefit of hindsight, and secondary sources that cite, comment, or build upon primary sources.
Media Studies and Content Analysis
Typically, sociological research on documents falls under the cross-disciplinary purview of media studies, which encompasses all research dealing with television, books, magazines, pamphlets, or any other human-recorded data. Regardless of the specific media being studied, they are referred to as texts.
Media studies may draw on traditions from both the social sciences and the humanities, but mostly from its core disciplines of mass communication, communication, communication sciences, and communication studies.
Researchers may also develop and employ theories and methods from disciplines including cultural studies, rhetoric, philosophy, literary theory, psychology, political economy, economics, sociology, anthropology, social theory, art history and criticism, film theory, feminist theory, information theory, and political science.
Government Documentary Research
Sociologists may use government documents to research the ways in which policies are made.
Content analysis refers to the study of recorded human communications, such as paintings, written texts, and photos. It falls under the category of unobtrusive research, which can be defined as ways for studying human behavior without affecting it in the process. While sociological research involving documents is one of the less interactive research options available to sociologists, it can reveal a great deal about the norms, values, and beliefs of people belonging to a particular temporal and cultural context.
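As a rough illustration of the mechanics, the Python sketch below counts how often two themes appear in a handful of invented text excerpts. An actual content analysis would rest on a carefully constructed coding frame and a much larger body of texts.

```python
# Illustrative content-analysis sketch: counting theme keywords in a small
# set of invented excerpts. Real studies use validated coding frames.
from collections import Counter
import re

documents = [
    "The new policy promises economic growth and more jobs.",
    "Critics say the policy ignores environmental costs.",
    "Supporters point to job creation; opponents cite pollution.",
]

# Theme dictionary: each theme is indicated by a set of keywords.
themes = {
    "economy": {"economic", "growth", "jobs", "job"},
    "environment": {"environmental", "pollution"},
}

counts = Counter()
for doc in documents:
    words = set(re.findall(r"[a-z]+", doc.lower()))
    for theme, keywords in themes.items():
        counts[theme] += len(words & keywords)

print(counts)  # Counter({'economy': 4, 'environment': 2})
```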
2.2.5: Use of Existing Sources
Studying existing sources collected by other researchers is an essential part of research in the social sciences.
Learning Objective
Explain how the use of existing sources can benefit researchers
Key Points
- Archival research is the study of existing sources. Without archival research, any research project is necessarily incomplete.
- The study of sources collected by someone other than the researcher is known as archival research or secondary data research.
- The importance of archival or secondary data research is two-fold. First, by studying texts related to their topics, researchers gain a strong foundation on which to base their work. Second, this kind of study is necessary in the development of their central research question.
Key Terms
- primary data
-
Data that has been compiled for a specific purpose, and has not been collated or merged with others.
- Archival research
-
An archive is a way of sorting and organizing older documents, whether it be digitally (photographs online, e-mails, etc.) or manually (putting it in folders, photo albums, etc.). Archiving is one part of the curating process which is typically carried out by a curator.
- secondary data
-
Secondary data is data collected by someone other than the user. Common sources of secondary data for social science include censuses, organizational records, and data collected through quantitative methodologies or qualitative research.
Example
- Harvard sociologist Theda Skocpol is well known for her work in comparative historical sociology, a sub-field that tends to emphasize the use of existing sources because of its often wide geographical and historical scope. For example, in her 1979 book States and Social Revolutions, Skocpol compared the history of revolution in three countries: France, Russia, and China. Direct research methods such as interviews would have been impossible, since many of the events she analyzed, such as the French Revolution, took place hundreds of years in the past. But even gathering primary historical documents for each of the three countries would have been a daunting task. Instead, Skocpol relied heavily on secondary accounts of the histories of each country, which allowed her to analyze and compare hundreds of years of history from countries thousands of miles apart.
Using Existing Sources
The study of sources collected by someone other than the researcher, also known as archival research or secondary data research, is an essential part of sociology. In archival research or secondary research, the focus is not on collecting new data but on studying existing texts.
Existing Sources
While some sociologists spend time in the field conducting surveys or observing participants, others spend most of their research time in libraries, using existing sources for their research.
By studying texts related to their topics, researchers gain a strong foundation on which to base their work. Furthermore, this kind of study is necessary for the development of their central research question. Without a thorough understanding of the research that has already been done, it is impossible to know what a meaningful and relevant research question is, much less how to position and frame research within the context of the field as a whole.
Types of Existing Sources
Common sources of secondary data for social science include censuses, organizational records, field notes, semi-structured and structured interviews, and other forms of data collected through quantitative methods or qualitative research. These methods are considered non-reactive, because the people involved do not know their data are being used in a new study. Secondary data differ from primary data, which are collected by the investigator conducting the research.
Researchers use secondary analysis for several reasons. The primary reason is that secondary data analysis saves time that would otherwise be spent collecting data. In the case of quantitative data, secondary analysis provides larger and higher-quality databases that would be unfeasible for any individual researcher to collect on his own. In addition, analysts of social and economic change consider secondary data essential, since it is impossible to conduct a new survey that can adequately capture past change and developments.
2.3: Ethics in Sociological Research
2.3.1: Confidentiality
Sociologists should take all necessary steps to protect the privacy and confidentiality of their subjects.
Learning Objective
Give examples of how the anonymity of a research subject can be protected
Key Points
- When a survey is used, the data should be coded to protect the anonymity of subjects.
- For field research, anonymity can be maintained by using aliases (fake names) on the observation reports.
- The types of information that should be kept confidential can range from relatively innocuous facts, such as a person’s name or income, to more significant details (depending on the participant’s social and political contexts), such as religious or political affiliation.
- Steps to ensure that the confidentiality of research participants is never breached include using pseudonyms for research subjects and keeping all notes in a secure location.
Key Terms
- confidentiality
-
Confidentiality is an ethical principle of discretion associated with the professions, such as medicine, law, and psychotherapy.
- research
-
Diligent inquiry or examination to seek or revise facts, principles, theories, and applications.
- pseudonym
-
A fictitious name, often used by writers and movie stars.
In any sociological research conducted on human subjects, sociologists should take all the steps necessary to protect the privacy and confidentiality of their subjects. For example, when a survey is used, the data should be coded to protect the anonymity of the subjects.
In addition, there should be no way for any answers to be connected with the respondent who gave them. These rules apply to field research as well. For field research, anonymity can be maintained by using aliases (fake names) on the observation reports.
Cuyahoga County U.S. Census Form from 1920
Following ethical guidelines, researchers keep individual details confidential for decades. This form, from 1920, has been released because the information contained is too old to have any likely consequences for people who are still alive.
The types of information that should be kept confidential can range from something as relatively mundane and innocuous as a person’s name (pseudonyms are often employed in both interview transcripts and published research) or income, to more significant details (depending on the participant’s social and political contexts), such as religious or political affiliation.
Even seemingly trivial information should be kept safe, because it is impossible to predict what the repercussions would be in the event that this information becomes public. Unless subjects specifically and explicitly give their consent to be associated with the published information, no real names or identifying information of any kind should be used. Any research notes that might identify subjects should be stored securely. It is the obligation of the researcher to protect the private information of the research subjects, particularly when studying sensitive and controversial topics like deviance, the results of which may harm the participants if they were to be personally identified. By ensuring the safety of sensitive information, researchers ensure the safety of their subjects.
2.3.2: Protecting Research Subjects
There are many guidelines in place to protect human subjects in sociological research.
Learning Objective
Identify the core tenet of research ethics, the importance of research ethics, and examples of ethical practice
Key Points
- Sociologists have a responsibility to protect their subjects by following ethical guidelines. Organizations like the American Sociological Association maintain, oversee, and enforce a code of ethics for sociologists to follow.
- In the context of sociological research, a code of ethics refers to formal guidelines for conducting sociological research, consisting of principles and ethical standards.
- The core tenet of research ethics is that the subjects not be harmed; principles such as confidentiality, anonymity, informed consent, and honesty follow from this premise.
- Institutional review boards are committees designated to approve, monitor, and review research involving people. They are intended to assess such factors as conflicts of interest (for instance, a funding source that has a vested interest in the outcome of a research project) and potential emotional distress caused to subjects.
Key Terms
- institutional review board
-
An institutional review board (IRB), also known as an independent ethics committee or ethical review board, is a committee that has been formally designated to approve, monitor, and review biomedical and behavioral research involving humans.
- confidentiality
-
Confidentiality is an ethical principle of discretion associated with the professions, such as medicine, law, and psychotherapy.
- informed consent
-
Informed consent is a phrase often used in law to indicate that the consent a person gives meets certain minimum standards. In order to give informed consent, the individual concerned must have adequate reasoning faculties and be in possession of all relevant facts at the time consent is given.
Examples
- Today’s ethical standards were developed in response to previous studies that had deleterious results for participants. One of the most infamous was the Stanford prison experiment. The Stanford prison experiment, conducted by researchers at Stanford in 1971, was funded by the military and meant to shed light on the sources of conflict between military guards and prisoners. For the experiment, researchers recruited 24 undergraduate students and randomly assigned each of them to be either a prisoner or a guard in a mock prison in the basement of the Stanford psychology department building. The participants adapted to their roles well beyond expectations, as the guards enforced authoritarian measures and ultimately subjected some of the prisoners to psychological torture. Many of the prisoners passively accepted psychological abuse and, at the request of the guards, readily harassed other prisoners who attempted to prevent it. Two of the prisoners quit the experiment early and the entire experiment was abruptly stopped after only six days. The experiment was criticized as being unethical and unscientific. Subsequently-adopted ethical standards would make it a breach of ethics to conduct such a study.
Ethical considerations are of particular importance to sociologists because sociologists study people. Thus, sociologists must adhere to a rigorous code of ethics. In the context of sociological research, a code of ethics refers to formal guidelines for conducting research, consisting of principles and ethical standards concerning the treatment of human individuals.
The most important ethical consideration in sociological research is that participants in a sociological investigation are not harmed in any way. Exactly what this entails can vary from study to study, but there are several universally recognized considerations. For instance, research on children and youth always requires parental consent. All sociological research requires informed consent, and participants are never coerced into participation. Informed consent involves ensuring that, prior to agreeing to participate, research subjects are aware of the details of the study, including the risks and benefits of participation and the ways in which the data collected will be used and kept secure. Participants are also told that they may stop their participation in the study at any time.
Ethical Guidelines for Research Involving Children
Sociologists must follow strict ethical guidelines, especially when working with children or other vulnerable populations.
Institutional review boards (IRBs) are committees that are appointed to approve, monitor, and review research involving human subjects in order to make sure that the well-being of research participants is never compromised. They are thus intended to assess such factors as conflicts of interest (for instance, a funding source that has a vested interest in the outcome of a research project) and potential emotional distress caused to subjects. While IRBs were originally oriented primarily toward biomedical research, their approval is now required for all studies dealing with human subjects.
2.3.3: Misleading Research Subjects
If a researcher deceives or conceals the purpose or procedure of a study, they are misleading their research subjects.
Learning Objective
Identify two problems with intentionally deceiving research subjects
Key Points
- Although deception introduces ethical concerns because it threatens the validity of the subjects’ informed consent, there are certain cases in which researchers are allowed to deceive their subjects.
- Some studies involve intentionally deceiving subjects about the nature of the research, especially in cases in which full disclosure to the research subject could either skew the results of the study or cause some sort of harm to the researcher.
- In most instances, researchers are required to debrief (reveal the deception and explain the true purpose of the study to) subjects after the data is gathered.
- Some possible ways to address concerns are collecting pre-consent from participants and minimizing deception.
Key Terms
- subject
-
A human research subject is a living individual about whom a research investigator (whether a professional or a student) obtains data.
- debrief
-
To question someone, or a group of people, after the implementation of a project in order to learn from mistakes.
Example
- Asch’s study of conformity is an example of research that required deception. Asch put a subject in a room with other participants who appeared to be normal subjects but who were actually confederates working with the experimenter. All participants were shown three lines of different length and asked to identify which was the same length as a fourth line. The confederates would answer correctly at first, but on later trials they would unanimously choose an obviously wrong answer. In most cases, the subject would conform and agree with the others, choosing a line that was clearly incorrect. If subjects knew beforehand that the study was investigating conformity, they would have reacted differently. In this case, deception was justified.
Some sociology studies involve intentionally deceiving subjects about the nature of the research. For instance, a researcher dealing with an organized crime syndicate might be concerned that if his subjects were aware of the researcher’s academic interests, his physical safety might be at risk. A more common case is a study in which researchers are concerned that if the subjects are aware of what is being measured, such as their reaction to a series of violent images, the results will be altered or tempered by that knowledge. In the latter case, researchers are required to debrief (reveal the deception and explain the true purpose of the study to) subjects after the data is gathered.
Dangerous Elements
Researchers working in dangerous environments may deceive participants in order to protect their own safety.
The ethical problems with conducting a trial involving an element of deception are legion. Valid consent means a participant is aware of all relevant context surrounding the research they are participating in, including both risks and benefits. Failure to ensure informed consent is likely to result in the harm of potential participants and others who may be affected indirectly. This harm could occur either in terms of the distress that subsequent knowledge of deception may cause participants and others, or in terms of the significant risks to which deception may expose participants and others. For example, a participant in a medical trial could misuse a drug substance, believing it to be a placebo.
Two approaches have been suggested to minimize such difficulties: pre-consent (including authorized deception and generic pre-consent) and minimized deception. Pre-consent involves informing potential participants that a given research study involves an element of deception without revealing its exact nature. This approach respects the autonomy of individuals because subjects consent to the deception. Minimizing deception involves taking steps such as introducing words like “probably” so that statements are formally accurate even if they may be misleading.
2.3.4: Research Funding
Research funding comes from grants from private groups or governments, and researchers must be careful to avoid conflicts of interest.
Learning Objective
Examine the process of receiving research funding, including avoiding conflicts of interest and the sources of research funding
Key Points
- Most research funding comes from two major sources: corporations (through research and development departments) and government (primarily carried out through universities and specialized government agencies).
- If the funding source for a research project has an interest in the outcome of the project, this represents a conflict of interest and a potential ethical breach.
- A conflict of interest can occur if a sociologist is granted funding to conduct research on a topic in which the source of funding is invested or to which it is related in some way.
Key Terms
- research
-
Diligent inquiry or examination to seek or revise facts, principles, theories, and applications.
- conflict of interest
-
A situation in which someone in a position of trust has competing professional or personal interests.
Example
- A conflict of interest can occur when a sociologist is given funding to conduct research on an issue that relates to the source of the funds. For example, if Microsoft were to fund a sociologist to investigate whether users of Microsoft’s product are happier than users of open source software (e.g., Linux, Openoffice.org), the sociologist would need to disclose the source of the funding as it presents a significant conflict of interest.
Money for sociological research doesn’t grow on trees. Many researchers fund their work by applying for grants from private groups or governments, but they must be careful to avoid a conflict of interest. Research funding is a term generally covering any funding for scientific research, in the areas of both “hard” science and technology and social sciences. The term often connotes funding obtained through a competitive process, in which potential research projects are evaluated and only the most promising receive funding. Such processes, which are run by government, corporations, or foundations, allocate scarce funds.
Funding and Conflicts of Interest
Money for sociological research doesn’t grow on trees. Many researchers fund their work by applying for grants from private groups or governments, but they must be careful to avoid conflicts of interest.
Most research funding comes from two major sources: corporations (through research and development departments) and government (primarily carried out through universities and specialized government agencies). Some small amounts of scientific research are also carried out (or funded) by charitable foundations. In the United States, government funding accounts for a larger share of research spending in certain fields, and it dominates research in the social sciences and humanities.
Government-funded research can either be carried out by the government itself, or through grants to academic and other researchers outside the government. An advantage to government sponsored research is that the results are publicly shared, whereas with privately funded research the ideas are controlled by a single group. Consequently, government sponsored research can result in mass collaborative projects that are beyond the scope of isolated private researchers.
Funding of research by private companies is mainly motivated by profit, and companies are much less likely than governments to fund research projects solely for the sake of knowledge. The profit incentive causes researchers to concentrate their energies on projects that are perceived as likely to generate profits.
Research funding is typically applied for by scientists and approved by a granting agency. Securing a grant requires a lengthy process, as the granting agency can inquire about the researcher’s background, the facilities used, the equipment needed, the time involved, and the overall potential of the scientific outcome. The process of grant writing and grant proposing is a somewhat delicate process for both the granter and the grantee. The granter wants to choose the research that best fits their scientific principles, and the grantee wants to apply for research in which he has the best chances but also in which he can build a body of work toward future scientific endeavors. This interplay can be a lengthy process. However, most universities have research administration offices to facilitate the interaction between the researcher and the granting agency.
If the funding source for a research project has an interest in the outcome of the project, this can represent a conflict of interest and a potential ethical breach. In other words, when research is funded by the same agency that can be expected to gain from a favorable outcome, there is a potential for biased results. The existence of a conflict of interest, or a potential one at that, can call into question the integrity of a sociologist’s research and findings.
2.3.5: Value Neutrality in Sociological Research
Value neutrality is the duty of sociologists to strive to be impartial and overcome their biases as they conduct their research.
Learning Objective
Reconstruct the tension surrounding the idea of value neutrality in sociological research
Key Points
- Assigning moral values to social phenomena is an inescapable result of being part of society, rendering truly value-free research inconceivable. Despite this fact, sociologists should still strive for value neutrality.
- Value neutrality, as described by Max Weber, is the duty of sociologists to identify and acknowledge their own values and overcome their personal biases when conducting sociological research.
- In order to be value-neutral, sociologists must be aware of their own moral judgments and values, and avoid incorporating them into their research, their conclusions, and their teaching.
- Many sociologists believe it is impossible to set aside personal values and retain complete objectivity. They caution readers, rather, to understand that sociological studies may, by necessity, contain a certain amount of value bias.
Key Term
- Max Weber
-
(1864–1920) A German sociologist, philosopher, and political economist who profoundly influenced social theory, social research, and the discipline of sociology itself.
Assigning moral values to social phenomena is an inescapable result of being part of society. This inevitably renders truly value-free research inconceivable; nevertheless, sociologists should still strive for value neutrality. According to Max Weber, a German sociologist and philosopher who profoundly influenced social theory, value neutrality is the duty of sociologists to strive to be impartial and overcome their biases as they conduct their research, analyze their data, and publish their findings. Weber understood that personal values could distort the framework for disclosing study results. While he accepted that some aspects of research design might be influenced by personal values, he declared that it was entirely inappropriate to allow them to shape the interpretation of the responses.
Max Weber
Max Weber was a German sociologist, philosopher, and political economist who profoundly influenced social theory, social research, and the discipline of sociology itself.
Sociologists, Weber stated, must establish value neutrality, a practice of remaining impartial, without bias or judgment, during the course of a study and in publishing results. To do this, they must be conscious of their own personal values. Sociologists are obligated to disclose research findings without omitting or distorting significant data, even if results contradict personal views, predicted outcomes, or widely accepted beliefs. Furthermore, and perhaps more importantly, it is the duty of sociologists to avoid bringing their ideology into their roles as instructors.
Is value neutrality possible? Many sociologists believe it is impossible to set aside personal values and retain complete objectivity. They caution readers, rather, to understand that sociological studies may, by necessity, contain a certain amount of value bias. This does not discredit the results, but allows readers to view them as one form of truth rather than as a singular fact. Some sociologists attempt to remain uncritical and as objective as possible when studying cultural institutions. However, complete objectivity is difficult to achieve: being a human and studying human subjects results in some degree of subjectivity, due to cultural influences. This is not necessarily negative, but it should be reported in any study being done so people can interpret the results as clearly as possible.
Value neutrality does not mean having no opinions, however. It just means that sociologists must strive to overcome personal biases, particularly subconscious biases, when analyzing data. It also means that sociologists must avoid skewing data in order to match a predetermined outcome that aligns with a particular agenda, such as a political or moral point of view. Although subjectivity is likely in almost any sociological study, with careful consideration, a good sociologist can limit its effect on any particular study.
Chapter 1: Sociology
1.1: The Sociological Perspective
1.1.1: Studying Sociology
Sociological studies range from the analysis of conversations and behaviors to the development of theories in order to understand how the world works.
Learning Objective
Identify ways in which sociology is applied in the real world
Key Points
- Sociology uses both quantitative and qualitative methods to study both face-to-face human social interactions and large scale social trends.
- Sociology uses empirical and critical analysis methods to study human social interaction.
- Sociology includes both macrosociology and microsociology; microsociology examines the study of people in face-to-face interactions, and macrosociology involves the study of widespread social processes.
- Sociology is a branch of the social sciences that uses systematic methods of empirical investigation and critical analysis to develop and refine a body of knowledge about human social structure and activity.
Key Terms
- quantitative
-
Of a measurement based on some quantity or number rather than on some quality.
- sociology
-
The study of society, human social interaction, and the rules and processes that bind and separate people, not only as individuals, but as members of associations, groups, and institutions
- qualitative
-
Of descriptions or distinctions based on some quality rather than on some quantity.
Sociology is the study of human social life. Sociology has many sub-sections of study, ranging from the analysis of conversations to the development of theories that try to understand how the entire world works. This chapter will introduce you to sociology, explain why it is important, show how it can change your perspective of the world around you, and give a brief history of the discipline.
Sociology is a branch of the social sciences that uses systematic methods of empirical investigation and critical analysis to develop and refine a body of knowledge about human social structure and activity. Sometimes the goal of sociology is to apply such knowledge to the pursuit of government policies designed to benefit the general social welfare. Its subject matter ranges from the micro level to the macro level. Microsociology involves the study of people in face-to-face interactions. Macrosociology involves the study of widespread social processes. Sociology is a broad discipline in terms of both methodology and subject matter. The traditional focuses of sociology have included social relations, social stratification, social interaction, culture, and deviance, and the approaches of sociology have included both qualitative and quantitative research techniques.
Much of human activity falls under the category of social structure or social activity; because of this, sociology has gradually expanded its focus to such far-flung subjects as the study of economic activity, health disparities, and even the role of social activity in the creation of scientific knowledge. The range of social scientific methods has also been broadly expanded. For example, the “cultural turn” of the 1970s and 1980s brought more humanistic interpretive approaches to the study of culture in sociology. Conversely, the same decades saw the rise of new mathematically rigorous approaches, such as social network analysis.
1.1.2: The Sociological Imagination
The sociological imagination is the ability to situate personal troubles within an informed framework of larger social processes.
Learning Objective
Discuss C. Wright Mills’ claim concerning the importance of the “sociological imagination” for individuals
Key Points
- Because they tried to understand the larger processes that were affecting their own personal experience of the world, it might be said that the founders of sociology, like Marx, Weber, and Durkheim, exercised what C. Wright Mills later called the sociological imagination.
- C. Wright Mills, a prominent mid-20th century American sociologist, described the sociological imagination as the ability to situate personal troubles and life trajectories within an informed framework of larger social processes.
- Other scholars after Mills have employed the phrase more generally, as the type of insight offered by sociology and its relevance in daily life. Another way of describing sociological imagination is the understanding that social outcomes are shaped by social context, actors, and social actions.
Key Term
- the sociological imagination
-
Coined by C. Wright Mills, the sociological imagination is the ability to situate personal troubles and life trajectories within an informed framework of larger social processes.
Example
- An analogy can help us better understand what Mills meant by the sociological imagination. Think of a fish swimming in the ocean. That fish is surrounded by water, but the water is so familiar and commonplace to the fish that, if asked to describe its situation, the fish could hardly be expected to describe the water as well. Similarly, we all live in a social milieu, but because we are so intimately familiar with it, we cannot easily study it objectively. The sociological imagination takes the metaphorical fish out of the water. It allows us to look on ourselves and our social surroundings in a reflective way and to question the things we have always taken for granted.
The Sociological Imagination
Early sociological theorists, like Marx, Weber, and Durkheim, were concerned with the phenomena they believed to be driving social change in their time. In pursuing answers to these large questions, they found intellectual stimulation. These founders of sociology were some of the earliest individuals to employ what C. Wright Mills (a prominent mid-20th century American sociologist) would later call the sociological imagination: the ability to situate personal troubles and life trajectories within an informed framework of larger social processes. The term sociological imagination describes the type of insight offered by the discipline of sociology. While scholars have quarreled over interpretations of the phrase, it is also sometimes used to emphasize sociology’s relevance in daily life.
Émile Durkheim
Durkheim formally established the academic discipline and, with Karl Marx and Max Weber, is commonly cited as the principal architect of modern social science and father of sociology.
Karl Marx
Karl Marx, another one of the founders of sociology, used his sociological imagination to understand and critique industrial society.
C. Wright Mills
In describing the sociological imagination, Mills asserted the following: “What people need… is a quality of mind that will help them to use information and to develop reason in order to achieve lucid summations of what is going on in the world and of what may be happening within themselves. The sociological imagination enables its possessor to understand the larger historical scene in terms of its meaning for the inner life and the external career of a variety of individuals.” Mills believed in the power of the sociological imagination to connect “personal troubles to public issues.”
As Mills saw it, the sociological imagination helped individuals cope with the social world by enabling them to step outside their own, personal, self-centered view of the world. By employing the sociological imagination, individual people are forced to perceive, from an objective position, events and social structures that influence behavior, attitudes, and culture.
In the decades after Mills, other scholars have employed the term to describe the sociological approach in a more general way. Another way of defining the sociological imagination is the understanding that social outcomes are shaped by social context, actors, and actions.
1.1.3: Sociology and Science
Early sociological studies were thought to be similar to the natural sciences due to their use of empiricism and the scientific method.
Learning Objective
Contrast positivist sociology with “verstehen”-oriented sociological approaches
Key Points
- Early sociological approaches were primarily positivist—they treated sensory data as the sole source of authentic knowledge, and they tried to predict human behavior.
- Max Weber and Wilhelm Dilthey introduced the idea of verstehen, which is an attempt to understand and interpret meanings behind social behavior.
- The difference between positivism and verstehen has often been understood as the difference between quantitative and qualitative sociology.
- Quantitative sociology seeks to answer a question using numerical analysis of patterns, while qualitative sociology seeks to arrive at a deeper understanding based on how people talk about and interpret their actions.
Key Terms
- empirical
-
Pertaining to, derived from, or testable by observations made using the physical senses or using instruments which extend the senses.
- Verstehen
-
A systematic interpretive process of understanding the meaning of action from the actor’s point of view; in the context of German philosophy and social sciences in general, the special sense of “interpretive or participatory examination” of social phenomena.
- positivism
-
A doctrine that states that the only authentic knowledge is scientific knowledge, and that such knowledge can only come from positive affirmation of theories through strict scientific method, refusing every form of metaphysics.
Early sociological studies considered the field of sociology to be similar to the natural sciences, like physics or biology. As a result, many researchers argued that the methodology used in the natural sciences was perfectly suited for use in the social sciences. Employing the scientific method and stressing empiricism distinguished sociology from theology, philosophy, and metaphysics, and led to its recognition as an empirical science.
Positivism and Verstehen
This early sociological approach, supported by Auguste Comte, led to positivism: the idea that data derived from sensory experience, together with logical and mathematical treatments of such data, are the exclusive source of all authentic knowledge. The goal of positivism, like that of the natural sciences, is prediction. But in the case of sociology, positivism’s goal is prediction of human behavior, which is a complicated proposition.
The goal of predicting human behavior was quickly realized to be a bit lofty. Scientists like Wilhelm Dilthey and Heinrich Rickert argued that the natural world differs from the social world; human society has culture, unlike the societies of most other animals. The behavior of ants and wolves, for example, is primarily based on genetic instructions and is not passed from generation to generation through socialization. As a result, an additional goal was proposed for sociology. Max Weber and Wilhelm Dilthey introduced the concept of verstehen. The goal of verstehen is less to predict behavior than it is to understand behavior. Weber said that he was after meaningful social action, not simply statistical or mathematical knowledge about society. Arriving at a verstehen-like understanding of society thus involves not only quantitative approaches, but more interpretive, qualitative approaches.
The inability of sociology and other social sciences to perfectly predict the behavior of humans or to fully comprehend a different culture has led to the social sciences being labeled “soft sciences.” While some might consider this label derogatory, in a sense it can be seen as an admission of the remarkable complexity of humans as social animals. Any animal as complex as humans is bound to be difficult to fully comprehend. Humans, human society, and human culture are all constantly changing, which means the social sciences will constantly be works in progress.
Quantitative and Qualitative Sociology
The contrast between positivist sociology and the verstehen approach has been reformulated in modern sociology as a distinction between quantitative and qualitative methodological approaches, respectively. Quantitative sociology is generally a numerical approach to understanding human behavior. Surveys with large numbers of participants are aggregated into data sets and analyzed using statistics, allowing researchers to discern patterns in human behavior. Qualitative sociology generally opts for depth over breadth. The qualitative approach uses in-depth interviews, focus groups, or the analysis of content sources (books, magazines, journals, TV shows, etc.) as data sources. These sources are then analyzed systematically to discern patterns and to arrive at a better understanding of human behavior.
Drawing a hard and fast distinction between quantitative and qualitative sociology is a bit misleading, however. Both share a similar approach in that the first step in all sciences is the development of a theory and the generation of testable hypotheses. While there are some individuals who begin analyzing data without a theoretical orientation to guide their analysis, most begin with a theoretical idea or question and gather data to test that theory. The second step is the collection of data, and this is really where the two approaches differ. Quantitative sociology focuses on numerical representations of the research subjects, while qualitative sociology focuses on the ideas found within the discourse and rhetoric of the research subjects.
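To make the quantitative half of this distinction concrete, here is a minimal sketch in Python (not from the original text; the survey variables and data are invented for illustration). It aggregates individual survey responses into groups and summarizes each group, which is the basic move of quantitative analysis described above.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey data: each record represents one respondent.
# "education" and "civic_hours" are invented variables for illustration only.
responses = [
    {"education": "high school", "civic_hours": 1.0},
    {"education": "high school", "civic_hours": 2.5},
    {"education": "college", "civic_hours": 3.0},
    {"education": "college", "civic_hours": 4.5},
    {"education": "graduate", "civic_hours": 5.0},
]

# Aggregate individual responses into groups (building the data set).
by_education = defaultdict(list)
for r in responses:
    by_education[r["education"]].append(r["civic_hours"])

# Summarize each group to look for a pattern across respondents.
for level, hours in sorted(by_education.items()):
    print(f"{level}: n={len(hours)}, mean civic hours={mean(hours):.1f}")
```

A qualitative study of the same question would instead interview respondents about what civic participation means to them, trading this kind of breadth for depth.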
Max Weber
Max Weber and Wilhelm Dilthey introduced verstehen—understanding behaviors—as a goal of sociology.
1.1.4: Sociology and the Social Sciences
As a social science, sociology explores the application of scientific methods to the study of the human aspects of the world.
Learning Objective
Analyze the similarities and differences between the social sciences
Key Points
- In the 17th century, scholars began to define the natural world as a reality separate from human or spiritual reality. As such, they thought the natural world should be studied using scientific and empirical methods.
- The pressure to discover mathematical relationships between objects of study carried into the study of human behavior, thus distinguishing social sciences from the humanities.
- By the 19th century, scholars began studying human behavior from a scientific perspective in an attempt to discover law-like properties of human interaction.
- In the attempt to study human behavior using scientific and empirical principles, sociologists always encounter dilemmas, as humans do not always operate predictably according to natural laws.
- Even as Durkheim and Marx formulated law-like models of the transition from pre-industrial to industrial societies, Weber was interested in the seemingly “irrational” ideas and values, which, in his view, also contributed to the transition.
Key Terms
- social science
-
A branch of science that studies society and human behavior within it, including anthropology, communication studies, criminology, economics, geography, history, political science, psychology, social studies, and sociology.
- science
-
A particular discipline or branch of learning, especially one dealing with measurable or systematic principles rather than intuition or natural ability.
- humanities
-
The humanities are academic disciplines that study the human condition, using methods that are primarily analytical, critical, or speculative, as distinguished from the mainly empirical approaches of the natural sciences.
Example
- Sociologists occasionally posit the existence of unchanging, abstract social laws. For example, Thomas Malthus believed human populations were subject to the law of exponential growth: as populations grew, more people would be available to reproduce, and thus the rate of population growth would increase, resulting in exponential growth. But even this law has proved to have exceptions. Around the world, population growth rates have declined as new types of contraception have been introduced and as policies or economic circumstances discourage reproduction.
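In modern notation, Malthus’s “law” amounts to assuming a constant per-capita growth rate; the following is a reconstruction, not Malthus’s own formula, but it captures the reasoning:

$$\frac{dP}{dt} = rP \quad\Longrightarrow\quad P(t) = P_0 e^{rt},$$

where \(P(t)\) is the population at time \(t\), \(P_0\) is the initial population, and \(r\) is the assumed constant per-capita growth rate. The exceptions noted above correspond to \(r\) falling over time (with contraception, policy, or economic change), so growth is no longer exponential.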
As a social science, sociology involves the application of scientific methods to the study of the human aspects of the world. The social science disciplines also include psychology, political science, and economics, among other fields. As a generalization, psychology studies the human mind and micro-level (or individual) behavior, focusing on internal mental and thought processes, whereas sociology examines human society and focuses on external behavior. Political science studies the governing of groups and countries, and economics concerns itself with the production and allocation of wealth in society. The use of scientific methods differentiates the social sciences from the humanities.
The Development of Social Science
In ancient philosophy, there was no difference between science and humanities. Only with the development of mathematical proof did there gradually arise a perceived difference between scientific disciplines and the humanities or liberal arts. Thus, Aristotle studied planetary motion and poetry with the same methods; Plato mixed geometrical proofs with his demonstration on the state of intrinsic knowledge.
During the 17th century, a revolution took place in what constituted science, particularly with the work of Isaac Newton in physics. Newton made a sharp distinction between the natural world, which he asserted was an independent reality that operated by its own laws, and the human or spiritual world. Newton’s ideas differed from those of other philosophers of the same period (such as Blaise Pascal, Gottfried Leibniz, and Johannes Kepler), for whom mathematical expressions of philosophical ideals were taken to be symbolic of natural human relationships as well; the same laws moved physical and spiritual reality. Newton, along with others, changed the basic framework by which individuals understood what was scientific.
Isaac Newton, 1689
Isaac Newton was a key figure in the process which split the natural sciences from the humanities.
Natural laws
Kepler’s laws, which describe planetary orbits, are an example of the sort of laws Newton believed science should seek. But social life is rarely predictable enough to be described by such laws.
In the realm of other disciplines, this reformulation of the scientific method created a pressure to express ideas in the form of mathematical relationships, that is, unchanging and abstract laws. In the late 19th century, attempts to discover laws regarding human behavior became increasingly common. The rise of statistics and probability theory in the 20th century also contributed to the attempt to mathematically model human behavior in the social sciences.
In the attempt to study human behavior using scientific and empirical principles, sociologists always encounter dilemmas, as humans do not always operate predictably according to natural laws. Hence, even as Durkheim and Marx formulated law-like models of the transition from pre-industrial to industrial societies, Weber was interested in the seemingly “irrational” ideas and values, which, in his view, also contributed to the transition. The social sciences occupy a middle position between the “hard” natural sciences and the interpretive bent of the humanities.
1.1.5: The Sociological Approach
The sociological approach goes beyond everyday common sense by using systematic methods of empirical observation and theorization.
Learning Objective
Explain how the sociological approach differs from a “common sense” understanding of the social world
Key Points
- Sociology is more rigorous than common sense because sociologists test and modify their understanding of how the world works through scientific analysis.
- Sociologists gather data on the ground and formulate theories about what they find. These theories are then tested by using the scientific method to assess the theory’s validity.
- Sociology, unlike common sense, utilizes methods of induction and deduction.
Key Terms
- deduction
-
The process of reasoning in which a conclusion follows necessarily from the stated premises; inference by reasoning from the general to the specific.
- scientific method
-
A method of discovering knowledge about the natural world based in making falsifiable predictions (hypotheses), testing them empirically, and developing peer-reviewed theories that best explain the known data.
- induction
-
The derivation of general principles from specific instances.
The sociological approach goes beyond everyday common sense. Many people believe they understand the world and the events taking place within it, often justifying their understandings by calling it “common sense.” However, they have not actually engaged in a systematic attempt to understand the social world.
Sociology is an attempt to understand the social world by situating social events in their corresponding environment (i.e., social structure, culture, history) and trying to understand social phenomena by collecting and analyzing empirical data. This scientific approach is what differentiates sociological knowledge from common sense.
For example, Peter Berger, a well-known sociologist, argued that what distinguishes sociology from common sense is that sociologists:
“[try] to see what is there. [They] may have hopes or fears concerning what [they] may find. But [they] will try to see, regardless of [their] hopes or fears. It is thus an act of pure perception…”
Thus, to obtain sociological knowledge, sociologists must study their world methodically and systematically. They do this through induction and deduction. With induction, sociologists gather data on the ground and formulate theories about what they find. These theories are then tested using the scientific method to assess their validity; this testing relies on deduction, the act of evaluating a theory in light of new data. Thus, sociological knowledge is produced through a constant back and forth between empirical observation and theorization. In this way, sociology is more rigorous than common sense, because sociologists test and modify their understanding of how the world works through scientific analysis.
Light Bulb
Obtaining sociological knowledge is not just a process of a light-bulb going off in someone’s head; it requires thorough empirical research and analysis.
1.2: The History of Sociology
1.2.1: Tradition vs. Science
Social scientists began to adopt the scientific method to make sense of the rapid changes accompanying modernization and industrialization.
Learning Objective
Distinguish positivist from interpretive sociological approaches
Key Points
- Beginning in the 17th century, observation-based natural philosophy was replaced by natural science, which attempted to define and test scientific laws.
- Social science continued this trend, attempting to find laws to explain social behavior, which had become problematic with the decline of tradition and the rise of modernity and industrialization.
- Sociology is not a homogeneous field; it involves tensions between quantitative and qualitative sociology, positivist and interpretive sociology, and objective and critical sociology.
- The first thinkers to attempt to combine scientific inquiry with the exploration of human relationships were Emile Durkheim in France and William James in the United States.
- Social science adopted quantitative measurement and statistical methods from natural science to find laws of social behavior, as demonstrated in Emile Durkheim’s book Suicide. But sociology may also use qualitative methods.
- Positivist sociology (also known as empiricist) attempts to predict outcomes based on observed variables. Interpretive sociology (which Max Weber called verstehen, German for “understanding”) attempts to understand a culture or phenomenon on its own terms.
- Objective sociology tries to explain the world; critical sociology tries to change it.
Key Terms
- Positivist sociology
-
The overarching methodological principle of positivism is to conduct sociology in broadly the same manner as natural science. Positivism emphasizes empiricism and the scientific method in order to provide a tested foundation for sociological research, based on the assumption that the only authentic knowledge is scientific knowledge and that such knowledge can only arrive by positive affirmation through scientific methodology.
- scientific method
-
A method of discovering knowledge about the natural world based in making falsifiable predictions (hypotheses), testing them empirically, and developing peer-reviewed theories that best explain the known data.
- Critical sociology
-
Critical theory is a school of thought that stresses the examination and critique of society and culture, drawing from knowledge across the social sciences and humanities.
Examples
- Following the quantitative approach, an individual’s social class can be understood by measuring certain variables and fitting the individual into a defined category. That is, social class can be divided into different groups (upper-, middle-, and lower-class) and can be measured using any of a number of variables: income, educational attainment, prestige, power, etc.
- Quantitative and qualitative methods can be complementary: often, quantitative methods are used to describe large or general patterns in society while qualitative approaches are used to help explain how individuals understand those patterns. For example, a sociologist might use quantitative survey methods to find that, on average, single mothers are more likely to receive welfare even if they could earn more working. To find out why, the sociologist may need to employ qualitative methods, such as interviews. During interviews, the sociologist can ask women why they choose not to work, and may find the answer is surprising. A common sense explanation of the quantitative findings might be that welfare recipients are lazy and prefer not to work, but using qualitative methods and the sociological imagination, the investigator could find that women strategically choose not to work because the cost of childcare would mean less net income.
In ancient philosophy, there was no difference between the liberal arts of mathematics and the study of history, poetry, or politics; only with the development of mathematical proofs did there gradually arise a perceived difference between scientific disciplines and the humanities or liberal arts. Thus, Aristotle studied planetary motion and poetry with the same methods, and Plato mixed geometrical proofs with his demonstration on the state of intrinsic knowledge.
However, by the end of the 17th century, a new scientific paradigm was emerging, particularly with the work of Isaac Newton in physics. Newton, by revolutionizing what was then called natural philosophy, changed the basic framework by which individuals understood what was scientific. While Newton was merely the archetype of an accelerating trend, his work highlights an important distinction. For Newton, mathematical truth was objective and absolute: it flowed from a reality independent of the observer and it worked by its own rules. Mathematics was the gold standard of knowledge. In the realm of other disciplines, this created a pressure to express ideas in the form of mathematical relationships, or laws. Such laws became the model that other disciplines would emulate.
In the late 19th century, scholars increasingly tried to apply mathematical laws to explain human behavior. Among the first efforts were the laws of philology, which attempted to map the change over time of sounds in a language. At first, scientists sought mathematical truth through logical proofs. But in the early 20th century, statistics and probability theory offered a new way to divine mathematical laws underlying all sorts of phenomena. As statistics and probability theory developed, they were applied to empirical sciences, such as biology, and to the social sciences. The first thinkers to attempt to combine scientific inquiry with the exploration of human relationships were Emile Durkheim in France and William James in the United States. Durkheim’s sociological theories and James’s work on experimental psychology had an enormous impact on those who followed.
William James
William James was one of the first Americans to explore human relations scientifically.
Quantitative and Qualitative Methods
Sociology embodies several tensions, such as those between quantitative and qualitative methods, between positivist and interpretive orientations, and between objective and critical approaches. Positivist sociology (also known as empiricist) attempts to predict outcomes based on observed variables. Interpretive sociology attempts to understand a culture or phenomenon on its own terms.
Early sociological studies considered the field to be analogous to the natural sciences, like physics or biology. Many researchers argued that the methodology used in the natural sciences was perfectly suited for use in the social sciences. By employing the scientific method and emphasizing empiricism, sociology established itself as an empirical science and distinguished itself from other disciplines that tried to explain the human condition, such as theology, philosophy, or metaphysics.
Early sociologists hoped to use the scientific method to explain and predict human behavior, just as natural scientists used it to explain and predict natural phenomena. Still today, sociologists often are interested in predicting outcomes given knowledge of the variables and relationships involved. This approach to doing science is often termed positivism or empiricism. The positivist approach to social science seeks to explain and predict social phenomena, often employing a quantitative approach.
Understanding Culture and Behavior Instead of Predicting
But human society soon showed itself to be less predictable than the natural world. Scientists like Wilhelm Dilthey and Heinrich Rickert began to catalog ways in which the social world differs from the natural world. For example, human society has culture, unlike the societies of most other animals, which are based on instincts and genetic instructions that are passed between generations biologically, not through social processes. As a result, some sociologists proposed a new goal for sociology: not predicting human behavior, but understanding it. Max Weber and Wilhelm Dilthey introduced the concept of verstehen, or understanding. The goal of verstehen is less to predict behavior than it is to understand behavior. It aims to understand a culture or phenomenon on its own terms rather than trying to develop a theory that allows for prediction.
Sociology’s inability to perfectly predict the behavior of humans has led some to label it a “soft science.” While some might consider this label derogatory, in a sense it can be seen as an admission of the remarkable complexity of humans as social animals. And, while arriving at a verstehen-like understanding of a culture adopts a more subjective approach, it nevertheless employs systematic methodologies like the scientific method. Both positivist and verstehen approaches employ a scientific method as they make observations and gather data, propose hypotheses, and test their hypotheses in the formulation of theories.
1.2.2: Early Thinkers and Comte
One of the most influential early figures in sociology was Auguste Comte, who proposed a positivist sociology with a scientific base.
Learning Objective
Recall Auguste Comte’s most important accomplishments
Key Points
- Auguste Comte was one of the founders of sociology and coined the term sociology.
- Comte believed sociology could unite all sciences and improve society.
- Comte was a positivist who argued that sociology must have a scientific base and be objective.
- Comte theorized a three-stage development of society.
- In sociology, scientific methods may include quantitative surveys or qualitative cultural and historical analysis.
- One common scientific method in sociology is the survey.
Key Terms
- positivism
-
A doctrine that states that the only authentic knowledge is scientific knowledge, and that such knowledge can only come from positive affirmation of theories through strict scientific method, refusing every form of metaphysics.
- Law of Three Stages
-
The Law of Three Stages is an idea developed by Auguste Comte. It states that society as a whole, and each particular science, develops through three mentally conceived stages: (1) the theological stage, (2) the metaphysical stage, and (3) the positive stage.
- Auguste Comte
-
Isidore Auguste Marie François Xavier Comte (19 January 1798 – 5 September 1857), better known as Auguste Comte, was a French philosopher. He was a founder of the discipline of sociology and of the doctrine of positivism.
Auguste Comte is considered one of the founders of sociology. He coined the term “sociology” in 1838 by combining the Latin term socius (companion, associate) and the Greek term logia (study of, speech). Comte hoped to unify all the sciences under sociology. He believed sociology held the potential to improve society and direct human activity, including the other sciences.
His ambition to unify the sciences was not unique. Other thinkers of the nineteenth century (for example, Herbert Spencer) held similar goals. This period was a key turning point in defining disciplinary boundaries. In sociology’s early days, disciplinary boundaries were less well defined than today. Many classical theorists of sociology (including Karl Marx, Ferdinand Toennies, Emile Durkheim, Vilfredo Pareto, and Max Weber) were trained in other academic disciplines, including history, philosophy, and economics. The diversity of their training is reflected in the topics they researched and in the occasional impulse to unify the sciences in a universal explanation of human life.
One of Comte’s central questions was how societies evolve and change, which is known as social dynamics. He also studied the aspects of society that do not change, which is known as social statics. Sociology today draws on these categories, though few sociologists have continued Comte’s theoretical work along these lines.
While his theory is no longer employed in sociology, Comte, like other Enlightenment thinkers, believed society developed in stages. He argued for an understanding of society he labeled “The Law of Three Stages.” The first was the theological stage, where people took a religious view of society. The second was the metaphysical stage, where people understood society as natural rather than supernatural. Comte’s final stage was the scientific or positivist stage, which he believed to be the pinnacle of social development. In the scientific stage, society would be governed by reliable knowledge and would be understood in light of the knowledge produced by science, primarily sociology. While Comte’s approach is today considered a highly simplified and ill-founded way to understand social development, it nevertheless reveals important insights into his thinking about the way in which sociology, as part of the third stage, would unite the sciences and improve society.
Neither his vision of a unified science nor his three-stage model has stood the test of time. Instead, today, Comte is remembered for imparting to sociology a positivist orientation and a demand for scientific rigor. As explained in the previous section, early sociological studies drew an analogy from sociology to the natural sciences, such as physics or biology. Many researchers argued that sociology should adopt the scientific methodology used in the natural sciences. This scientific approach, supported by Auguste Comte, is at the heart of positivism, a methodological orientation whose goal is rigorous, objective scientific investigation and prediction.
Since the nineteenth century, the idea of positivism has been extensively elaborated. Though positivism now has a wider range of meanings than Comte intended, the belief in a scientifically rigorous sociology has, in its essence, been carried on. The scientific method has been applied to sociological research across all facets of society, including government, education, and the economy.
Today, sociologists following Comte’s positivist orientation employ a variety of scientific research methods. Unlike natural scientists, sociologists rarely conduct experiments, since limited research resources and ethical guidelines prevent large-scale experimental manipulation of social groups. Still, sometimes sociologists are able to conduct field experiments. Though quantitative methods, such as surveys, are most commonly associated with positivism, any method, quantitative or qualitative, may be employed scientifically.
Auguste Comte
Auguste Comte was one of the founding figures of sociology.
1.2.3: Early Social Research and Martineau
Harriet Martineau was an English social theorist and Whig writer, often cited as the first female sociologist.
Learning Objective
Recall Harriet Martineau’s most important accomplishments
Key Points
- Although today Martineau is rarely mentioned, she was critical to the early growth of sociology.
- Martineau is notable for her progressive politics. She introduced feminist sociological perspectives in her writing and addressed overlooked issues such as marriage, children, domestic life, religious life, and race relations.
- In 1852, Martineau translated the works of Auguste Comte, who had coined the term sociology. Through this process, she both clarified his work and made it accessible to English readers.
- Martineau’s reflections on Society in America, published in 1837, are prime examples of her approach to what would later be known as sociological methods.
Key Terms
- Whig
-
a member of a 19th-century US political party opposed to the Democratic Party
- laissez-faire
-
a policy of governmental non-interference in economic or competitive affairs; pertaining to free-market capitalism
- Harriet Martineau
-
Harriet Martineau (12 June 1802 – 27 June 1876) was an English social theorist and Whig writer, often cited as the first female sociologist.
Harriet Martineau
Harriet Martineau (12 June 1802 – 27 June 1876) was an English social theorist and Whig writer, often cited as the first female sociologist. Although today Martineau is rarely mentioned, she was critical to the early growth of the sociological discipline. Martineau wrote 35 books and a multitude of essays from a sociological, holistic, religious, domestic, and, perhaps most significantly, feminine perspective. She earned enough to be supported entirely by her writing, a challenging feat for a woman in the Victorian era. As a theorist, she believed that a thorough societal analysis was necessary to understand the status of women. She is notable for her progressive politics. Martineau introduced feminist sociological perspectives in her writing and addressed overlooked issues such as marriage, children, domestic life, religious life, and race relations.
Harriet Martineau, 1802-1876
Harriet Martineau introduced Comte to the English-speaking world by translating his works.
Translating Comte
Although Auguste Comte is credited with launching the science of sociology, he might have been forgotten were it not for Martineau, who translated Comte’s 1839 text, Cours de Philosophie Positive, from French into English. As she translated this piece, she also condensed Comte’s work into clearer, more accessible terms. In 1853, her translation was published in two volumes as The Positive Philosophy of Auguste Comte. Her translation so dramatically improved the work that Comte himself suggested his students read her translations rather than his original work. Most significantly, her translation brought Comte’s works to the English-speaking world.
Martineau’s Writing
As early as 1831, Martineau wrote on the subject of “Political Economy” (as the field of economics was then known). Her goal was to popularize and illustrate the principles of laissez-faire capitalism, though she made no claim to original theorizing.
Martineau’s reflective writings, published in Society in America in 1837, are prime examples of her approach to what would eventually be known as sociological methods. Her ideas in this field were set out in her 1838 book, How to Observe Morals and Manners. She believed that some very general social laws influenced the life of any society, including the principle of progress, the emergence of science as the most advanced product of human intellectual endeavors, and the significance of population dynamics and the natural physical environment.
1.2.4: Spencer and Social Darwinism
Herbert Spencer created what he called a “synthetic philosophy,” which tried to find a set of rules explaining social behavior.
Learning Objective
Analyze the concept of “progress” in Herbert Spencer’s synthetic philosophy
Key Points
- According to Spencer’s synthetic philosophy, the laws of nature applied without exception to the organic realm as much as the inorganic, and to the human mind as much as the rest of creation.
- Spencer conceptualized society as a “social organism” that evolved from a simpler state to a more complex one, according to the universal law of evolution.
- Spencer is perhaps best known for coining the term “survival of the fittest,” later commonly termed “social Darwinism.”
Key Terms
- positivism
-
A doctrine that states that the only authentic knowledge is scientific knowledge, and that such knowledge can only come from positive affirmation of theories through strict scientific method, refusing every form of metaphysics.
- Social Darwinism
-
a theory that the laws of evolution by natural selection also apply to social structures.
- survival of the fittest
-
Natural selection.
Example
- Social Darwinism explains individuals’ success by attributing it to their greater fitness. For example, a social Darwinist might argue that students who receive higher grades are more academically fit than others. Thus, their success in school is the result of their own qualities.
Though Auguste Comte coined the term “sociology,” the first book with the term sociology in its title was written in the mid-19th century by the English philosopher Herbert Spencer. Following Comte, Spencer created a synthetic philosophy that attempted to find a set of rules to explain everything in the universe, including social behavior.
Spencer’s Synthetic Philosophy
Like Comte, Spencer saw in sociology the potential to unify the sciences, or to develop what he called a “synthetic philosophy.” He believed that the natural laws discovered by natural scientists were not limited to natural phenomena; these laws revealed an underlying order to the universe that could explain natural and social phenomena alike. According to Spencer’s synthetic philosophy, the laws of nature applied to the organic realm as much as to the inorganic, and to the human mind as much as to the rest of creation. Even in his writings on ethics, he held that it was possible to discover laws of morality that had the same authority as laws of nature. This assumption led Spencer, like Comte, to adopt positivism as an approach to sociological investigation; the scientific method was best suited to uncover the laws he believed explained social life.
Spencer and Progress
But Spencer went beyond Comte, claiming that not only the scientific method, but scientific knowledge itself was universal. He believed that all natural laws could be reduced to one fundamental law, the law of evolution. Spencer posited that all structures in the universe developed from a simple, undifferentiated homogeneity to a complex, differentiated heterogeneity, while being accompanied by a process of greater integration of the differentiated parts. This evolutionary process could be found at work, Spencer believed, throughout the cosmos. It was a universal law, applying to the stars and the galaxies as much as to biological organisms, and to human social organization as much as to the human mind. Thus, Spencer’s synthetic philosophy aimed to show that natural laws led inexorably to progress. He claimed all things—the physical world, the biological realm, and human society—underwent progressive development.
In a sense, Spencer’s belief in progressive development echoed Comte’s own theory of the three-stage development of society. However, writing after important developments in the field of biology, Spencer rejected the ideological assumptions of Comte’s three-stage model and attempted to reformulate the theory of social progress in terms of evolutionary biology. Following this evolutionary logic, Spencer conceptualized society as a “social organism” that evolved from a simpler state to a more complex one, according to the universal law of evolution. This social evolution, he argued, exemplified the universal evolutionary process from simple, undifferentiated homogeneity to complex, differentiated heterogeneity.
As he elaborated the theory, he proposed two types of society: militant and industrial. Militant society, structured around relationships of hierarchy and obedience, was simple and undifferentiated. Industrial society, based on voluntary behavior and contractually assumed social obligations, was complex and differentiated. Spencer questioned whether the evolution of society would result in peaceful anarchism (as he had first believed) or whether it pointed to a continued role for the state, albeit one reduced to minimal functions—the enforcement of contracts and external defense. Spencer believed that, as society evolved, the hierarchical and authoritarian institutions of militant society would become obsolete.
Social Darwinism
Spencer is perhaps best known for coining the term “survival of the fittest,” later commonly termed “social Darwinism.” But, popular belief to the contrary, Spencer did not merely appropriate and generalize Darwin’s work on natural selection; Spencer only grudgingly incorporated Darwin’s theory of natural selection into his preexisting synthetic philosophical system. Spencer’s evolutionary ideas were based more directly on the evolutionary theory of Lamarck, who posited that organs are developed or diminished by use or disuse and that the resulting changes may be transmitted to future generations. Spencer believed that this evolutionary mechanism was necessary to explain ‘higher’ evolution, especially the social development of humanity. Moreover, in contrast to Darwin, Spencer held that evolution had a direction and an endpoint—the attainment of a final state of equilibrium. Evolution meant progress, improvement, and eventually perfection of the social organism.
Criticism
Though Spencer is rightly credited with making a significant contribution to early sociology, his attempt to introduce evolutionary ideas into the realm of social science was ultimately unsuccessful. It was considered by many to be actively dangerous. Critics of Spencer’s positivist synthetic philosophy argued that the social sciences were essentially different from the natural sciences, and that the methods of the natural sciences—the search for universal laws—were inappropriate for the study of human society.
Herbert Spencer
Herbert Spencer built on Darwin’s framework of evolution, extrapolating it to the spheres of ethics and society. This is why Spencer’s theories are often called “social Darwinism.”
1.2.5: Class Conflict and Marx
Marx focuses on explaining class conflict over control of the means of production, which he posited was the driving force behind social evolution.
Learning Objective
Relate Marx’s concept of class to his view of historical change
Key Points
- Marx sees society evolving through stages. He focuses on dialectical class conflict over control of the means of production as the driving force behind social evolution.
- According to Marx, society evolves through different modes of production in which the upper class controls the means of production and the lower class is forced to provide labor.
- In Marx’s dialectic, the class conflict in each stage necessarily leads to the development of the next stage (for example, feudalism leads to capitalism).
- Marx was especially critical of capitalism and foresaw a communist revolution.
- Marx predicted that class conflict between the bourgeoisie and the proletariat would lead to capitalism’s downfall.
- According to Marx, under capitalism, workers (the proletariat) must alienate their labor.
- The bourgeoisie try to preserve capitalism by promoting ideologies and false consciousness that keep workers from revolting.
- Marx’s understanding of history is called historical materialism because it focuses on history and material conditions (versus ideas).
Key Terms
- false consciousness
-
A faulty understanding of the true character of social processes due to ideology.
- bourgeoisie
-
The capitalist class.
- proletariat
-
the working class or lower class
- dialectical
-
Of, relating to, or of the nature of logical argumentation.
Examples
- For Marx, society was characterized by class conflict. In the United States, class conflict periodically comes to the fore of public awareness. For instance, the Occupy Wall Street movement has emphasized class conflict by highlighting wealth disparities between the richest 1% of the population and the remaining 99%, much of which is currently encumbered by debt. The movement faces the significant hurdle of uniting the so-called 99%.
- Marx argued that establishing class solidarity was difficult because most people were blind to their true class position. Instead, they embraced a false consciousness composed of ideology disseminated by the ruling class. In his book, What’s the Matter with Kansas, Thomas Frank describes the modern political situation in the United States by referring to this concept. According to Frank, rural voters in the United States (like in Kansas) tend to vote against their economic interests. Although many of these voters are poor and in debt and would benefit from more liberal economic policy, they vote for fiscally conservative Republicans because Republican ideology has duped them into prioritizing cultural issues over their economic interests.
Marx, one of the principal architects of modern social science, believed that history was made up of stages driven by class conflict. Famously, Marx wrote in The Communist Manifesto, “The history of all hitherto existing society is the history of class struggles.” Class struggle pushed society from one stage to the next, in a dialectical process. In each stage, an ownership class controls the means of production while a lower class provides labor for production. The two classes come into conflict and that conflict leads to social change. For example, in the feudal stage, feudal lords owned the land used to produce agricultural goods, while serfs provided the labor to plant, raise, and harvest crops. When the serfs rose up and overthrew the feudal lords, the feudal stage ended and ushered in a new stage: capitalism.
Means of Production, Relations of Production
According to Marx, the way society is organized depends on the current means of production and who owns them. The means of production include things that are necessary to produce material goods, such as land and natural resources. They also include technology, such as tools or machines, that people use to produce things. The means of production in any given society may change as technology advances. In feudal society, means of production might have included simple tools like a shovel and hoe. Today, the means of production include advanced technology, such as microchips and robots.
At different stages in history, different groups have controlled the means of production. In feudal times, feudal lords owned the land and tools used for production. Today, large corporations own many of the means of production. Different stages also have different relations of production, or different forms of social relationships that people must enter into as they acquire and use the means of production. Throughout history, the relations of production have taken a variety of forms—slavery, feudalism, and, under capitalism, the wage relation, in which employees enter into a contract with an employer to provide labor in exchange for a wage.
Modes of Production
Together, the means of production and the relations of production compose a particular period’s mode of production. Marx distinguished different historical eras in terms of their different modes of production. He believed that the mode of production was the defining element of any period in history, and he called this economic structure the base of that society. By contrast, he believed that the ideas and culture of a given stage were derived from the mode of production. He referred to ideas and culture as the “superstructure,” which grew up from the more fundamental economic “base.” Because of his focus on the economic base over culture and ideas, Marx is often referred to as an economic determinist.
In Marx’s dialectic, the class conflict in each stage necessarily leads to the development of the next stage.
Marx was less interested in explaining the stable organization of any given historical stage than in explaining how society changed from one stage to the next. Marx believed that the class conflict present in any stage would necessarily lead to class struggle and, eventually, to the end of that stage and the beginning of the next. Feudalism ended with class struggle between serfs and lords, and gave rise to a new stage, capitalism.
Instabilities in Capitalism
Marx’s work focused largely on explaining the inherent instabilities present in capitalism and predicting its eventual fall and transition to socialism. Marx argued that capitalism was unstable and prone to periodic crises. Marx believed that economic growth would be punctuated by increasingly severe crises as capitalism went through cycles of growth, collapse, and more growth. Moreover, he believed that in the long-term this process would necessarily enrich and empower the capitalist class, while at the same time it would impoverish the poorer laboring class, which he referred to as the proletariat.
Eventually, the proletariat would become class conscious—aware that their seemingly individual problems were created by an economic system that disadvantaged all those who did not own the means of production. Once the proletariat developed a class consciousness, Marx believed, they would rise up and seize the means of production, overthrowing the capitalist mode of production and bringing about a socialist society. Marx believed that the socialist system established after the proletarian revolution would encourage social relations that benefit everyone equally, abolish the exploitative capitalist class and end its exclusive ownership of the means of production, and introduce a system of production less vulnerable to cyclical crises. For Marx, this eventual uprising was inevitable, given the inherent structural contradictions in capitalism and the inevitability of class conflict.
1.2.6: Durkheim and Social Integration
Emile Durkheim studied how societies maintained social integration after traditional bonds were replaced by modern economic relations.
Learning Objective
Contrast the different modes of social integration according to Durkheim
Key Points
- Durkheim believed that society exerted a powerful force on individuals. According to Durkheim, people’s norms, beliefs, and values make up a collective consciousness, or a shared way of understanding and behaving in the world.
- The collective consciousness binds individuals together and creates social integration.
- Durkheim saw increasing population density as a key factor in the advent of modernity. As the number of people in a given area increases, so does the number of interactions, and the society becomes more complex.
- As people engage in more economic activity with neighbors or distant traders, they begin to loosen the traditional bonds of family, religion, and moral solidarity that had previously ensured social integration. Durkheim worried that modernity might herald the disintegration of society.
- Simpler societies are based on mechanical solidarity, in which self-sufficient people are connected to others by close personal ties and traditions. Modern societies are based on organic solidarity, in which people are connected by their reliance on others in the division of labor.
- Although modern society may undermine the traditional bonds of mechanical solidarity, it replaces them with the bonds of organic solidarity.
- In the Elementary Forms of Religious Life, Durkheim presented a theory of the function of religion in aboriginal and modern societies and described the phenomenon of collective effervescence and collective consciousness.
- Durkheim has been called a structural functionalist because his theories focus on the function certain institutions (e.g., religion) play in maintaining social solidarity or social structure.
Key Terms
- organic solidarity
-
Social cohesion based upon the dependence individuals have on each other in more advanced societies.
- mechanical solidarity
-
Social cohesion that normally operates in “traditional” and small-scale societies. In simpler societies (e.g., tribal), solidarity is usually based on kinship ties of familial networks.
Examples
- In his book The Lexus and the Olive Tree, Thomas L. Friedman discussed the Golden Arches Theory of Conflict Prevention, also known as “the McDonald’s Doctrine.” In short, Friedman asserted: “No two countries that both had McDonald’s had fought a war against each other since each got its McDonald’s.” This logic is primarily based on the idea that McDonald’s countries have established such strong economic interdependence among themselves that they would have too much to lose to ever wage a war against one another. Though the theory turns out not to be true, the logic follows that of Durkheim’s explanation of organic solidarity.
- In modern society, collective effervescence continues to play a role in cementing social solidarity. It is not only experienced among the religious. Think about the last time you were at a football game or a rock concert. As a member of the crowd, cheering along with others, you may yourself have experienced a feeling of excitement or felt a special energy that seemed to infect the crowd. That feeling was collective effervescence!
Along with Marx and Weber, French sociologist Emile Durkheim is considered one of the founders of sociology. One of Durkheim’s primary goals was to analyze how modern societies could maintain social integration after the traditional bonds of family and church were replaced by modern economic relations.
Durkheim believed that society exerted a powerful force on individuals. People’s norms, beliefs, and values make up a collective consciousness, or a shared way of understanding and behaving in the world. The collective consciousness binds individuals together and creates social integration. For Durkheim, the collective consciousness was crucial in explaining the existence of society: it produces society and holds it together. At the same time, the collective consciousness is produced by individuals through their actions and interactions. Society is a social product created by the actions of individuals that then exerts a coercive social force back on those individuals. Through their collective consciousness, Durkheim argued, human beings become aware of one another as social beings, not just animals.
Formation of Collective Consciousness
According to Durkheim, the collective consciousness is formed through social interactions. In particular, Durkheim thought of the close-knit interactions between families and small communities, groups of people who share a common religion, who may eat together, work together, and spend leisure time together. Yet all around him, Durkheim observed evidence of rapid social change and the withering away of these groups. He saw increasing population density and population growth as key factors in the evolution of society and the advent of modernity. As the number of people in a given area increases, he posited, so does the number of interactions, and the society becomes more complex. Population growth creates competition and incentives to trade and further the division of labor. But as people engage in more economic activity with neighbors or distant traders, they begin to loosen the traditional bonds of family, religion, and moral solidarity that had previously ensured social integration. Durkheim worried that modernity might herald the disintegration of society.
Durkheim and Modernity
Following a socioevolutionary approach reminiscent of Comte, Durkheim described the evolution of society from mechanical solidarity to organic solidarity. Simpler societies, he argued, are based on mechanical solidarity, in which self-sufficient people are connected to others by close personal ties and traditions (e.g., family and religion). Also, in such societies, people have far fewer options in life. Modern societies, on the other hand, are based on organic solidarity, in which people are connected by their reliance on others in the division of labor. Modernization, Durkheim argued, is based first on population growth and increasing population density, second on increasing “moral density” (that is, the development of more complex social interactions), and third, on the increasing specialization in work (i.e., the division of labor). Because modern society is complex, and because the work that individuals do is so specialized, individuals can no longer be self-sufficient and must rely on others to survive. Thus, although modern society may undermine the traditional bonds of mechanical solidarity, it replaces them with the bonds of organic solidarity.
Organic versus Mechanical Solidarity
Further, Durkheim argued, the organic solidarity of modern societies might have advantages over traditional mechanical solidarity. In traditional societies, people are self-sufficient, and therefore society has little need for cooperation and interdependence. Institutions that require cooperation and agreement must often resort to force and repression to keep society together. Traditional mechanical solidarity may tend, therefore, to be authoritarian and coercive. In modern societies, under organic solidarity, people are necessarily much more interdependent. Specialization and the division of labor require cooperation. Thus, solidarity and social integration are necessary for survival and do not require the same sort of coercion as under mechanical solidarity.
In organic solidarity, the individual is considered vitally important, even sacred. In organic solidarity, the individual, rather than the collective, becomes the focus of rights and responsibilities, the center of public and private rituals holding the society together—a function once performed by religion. To stress the importance of this concept, Durkheim talked of the “cult of the individual.” However, he made clear that the cult of the individual is itself a social fact, socially produced; reverence for the individual is not an inherent human trait, but a social fact that arises in certain societies at certain times.
1.2.7: Protestant Work Ethic and Weber
Weber departed from positivist sociology, instead emphasizing Verstehen, or understanding, as the goal of sociology.
Learning Objective
Summarize Weber’s view on the relationship between Protestantism and capitalism
Key Points
- Max Weber was a German sociologist and political economist who profoundly influenced social theory, social research, and the discipline of sociology itself.
- In The Protestant Ethic and the Spirit of Capitalism, his most enduring text, Weber proposed that ascetic Protestantism was one of the major “elective affinities” associated with the rise of capitalism, bureaucracy, and the rational-legal nation-state in the Western world.
- Weber argued that Protestantism, and especially the ascetic Protestant or Calvinist denominations, had redefined the connection between work and piety.
- Weber tried to explain social action in modern society by focusing on rationalization and secularization.
- Weber also developed a theory of political authority and the modern state, defining three types of authority: traditional, charismatic, and rational-legal.
Key Terms
- predestination
-
The doctrine that everything has been foreordained by a God, especially that certain people have been elected for salvation, and sometimes also that others are destined for reprobation.
- Protestant Ethic and the Spirit of Capitalism
-
A book written by Max Weber, arguing that the rise in ascetic Protestantism, particularly denominations like Calvinism, was associated with the rise of modern capitalism in the West.
- secularization
-
The transformation of a society from close identification with religious values and institutions toward non-religious (or “irreligious”) values and secular institutions.
- rationalization
-
The process or result of rationalizing; in Weber’s usage, the organization of social life and thought around calculation, efficiency, and formal rules rather than tradition or custom.
Max Weber
Max Weber was a German sociologist and political economist who profoundly influenced social theory, social research, and the discipline of sociology itself. In 1919, he established a sociology department at the Ludwig Maximilian University of Munich.
Along with Marx and Durkheim, Weber is considered one of the three principal forefathers of modern social science. That being said, Weber developed a unique methodological position that set him apart from these other sociologists. As opposed to positivists like Comte and Durkheim, Weber was a key proponent of methodological antipositivism. He presented sociology as a non-positivist field whose goal was not simply to gather data and predict outcomes, but instead to understand the meanings and purposes that individuals attach to their own actions.
The Protestant Ethic and the Spirit of Capitalism
In The Protestant Ethic and the Spirit of Capitalism, his most famous text, Weber proposed that ascetic Protestantism was one of the major “elective affinities” associated with the rise of capitalism, bureaucracy, and the rational-legal nation-state in the Western world. Although some consider Weber’s argument to be a study of religion, it can also be interpreted as an introduction to his later works, especially his studies of the interaction between various religious ideas and economic behavior. In contrast to Marx’s “historical materialism,” Weber emphasized how the cultural influences embedded in religion could be a means for understanding the genesis of capitalism. Weber viewed religion as one of the core forces in society.
Weber proposed that ascetic Protestantism had an elective affinity with capitalism, bureaucracy, and the rational-legal nation-state in the Western world. By elective affinity, Weber meant something less direct than causality, but something more direct than correlation. In other words, although he did not argue that religion caused economic change, Weber did find that ascetic Protestantism and modern capitalism often appeared alongside one another in societies. Additionally, Weber observed that both ascetic Protestantism and capitalism encouraged cultural practices that reinforced one another. He never claimed that religion was the complete, simple, isolated cause of the rise of capitalism in the West. Instead, he viewed it as part of a cultural complex that included the following:
- rationalism of scientific pursuit
- the merging of observation with mathematics
- an increasingly scientific method of scholarship and jurisprudence
- the rational systemization of government administration and economic enterprise
- increasing bureaucratization
In the end, the study of the sociology of religion, according to Weber, focused on one distinguishing fact about Western culture, the decline of beliefs in magic. He referred to this phenomenon as the “disenchantment of the world.”
Weber’s Evidence and Argument
As evidence for his study, Weber noted that ascetic Protestantism and advanced capitalism tended to coincide with one another. Weber observed that, after the Reformation, Protestant countries such as the Netherlands, England, Scotland, and Germany gained economic prominence over Catholic countries such as France, Spain, and Italy. Furthermore, in societies with different religions, the most successful business leaders tended to be Protestant.
To explain these observations, Weber argued that Protestantism, and especially the ascetic Protestant or Calvinist denominations, had redefined the connection between work and piety. Historically, Christian religious devotion had been accompanied by a rejection of mundane affairs, including economic pursuits. In contrast, Weber showed that certain types of Protestantism, notably Calvinism, supported worldly activities and the rational pursuit of economic gain. Because of the particularly Calvinist view of the world, these activities became endowed with moral and spiritual significance. In these religions, believers expressed their piety towards God through hard work and achievement in a secular vocation, or calling. Because of this religious orientation, human effort was shifted away from the contemplation of the divine and towards rational efforts aimed at achieving economic gain. Furthermore, the Protestant ethic, while promoting the pursuit of economic gain, eschewed hedonistic pleasure. Thus, believers were encouraged to make money, but not to spend it. This motivated believers to work hard, to be successful in business, and to reinvest their profits rather than spend them on frivolous pleasures. The Calvinist notion of predestination also meant that material wealth could be taken as a sign of salvation in the afterlife. Predestination is the belief that God has chosen who will be saved and who will not.
John Calvin, the first capitalist?
Weber saw an elective affinity between capitalism and Protestantism, especially Calvinism.
Protestant believers thus reconciled the pursuit of profit with religion, and even encouraged it. Instead of being viewed as morally suspect, greedy, or ambitious, financially successful believers were viewed as being motivated by a highly moral and respectable philosophy, the “spirit of capitalism.” Eventually, the rational roots of this doctrine outgrew their religious origins and became autonomous cultural traits of capitalist society. Thus, Weber explained the rise of capitalism by looking at systems of culture and ideas. This theory is often viewed as a reversal of Marx’s thesis that the economic “base” of society determines all other aspects of it.
1.2.8: The Development of Sociology in the U.S.
Lester Ward, the first president of the American Sociological Association, is generally thought of as the founder of American sociological study.
Learning Objective
Discuss Lester Ward’s views on sociology’s role in society
Key Points
- Ward was a positivist who saw sociology as a scientific tool to improve life.
- He criticized laissez-faire theories and Spencer’s survival of the fittest theory and developed his own theory of social liberalism.
- Ward believed that in large, complex, and rapidly growing societies, human freedom could only be achieved with the assistance of a strong, democratic government acting in the interest of the individual.
- Ward had a strong influence on a rising generation of progressive political leaders, including on the administrations of Presidents Theodore Roosevelt, Woodrow Wilson, and Franklin D. Roosevelt and on the modern Democratic Party.
Key Terms
- Social liberalism
-
The belief that the legitimate role of the state includes addressing economic and social issues, such as unemployment, health care, and education while simultaneously expanding civil rights; this belief supports capitalism but rejects unchecked laissez-faire economics.
- laissez-faire
-
a policy of governmental non-interference in economic or competitive affairs; pertaining to free-market capitalism
- American Sociological Association
-
The American Sociological Association (ASA), founded in 1905 as the American Sociological Society, is a non-profit organization dedicated to advancing the discipline and profession of sociology.
Lester Ward is generally thought of as the founder of American sociological study. He served as the first president of the American Sociological Society, which was founded in 1905 (and which later changed its name to its current form, the American Sociological Association), and was appointed Chair of Sociology at Brown University in 1906.
Works and ideas
Like Comte and the positivist founders of sociology, Ward embraced the scientific ethos. In 1883, Ward published his two-volume, 1,200-page Dynamic Sociology, Or Applied Social Science as Based Upon Statistical Sociology and the Less Complex Sciences, with which he hoped to establish the central importance of experimentation and the scientific method to the field of sociology.
But for Ward, science was not objective and removed, but human-centered and results-oriented. As he put it in the preface to Dynamic Sociology:
“The real object of science is to benefit man. A science which fails to do this, however agreeable its study, is lifeless. Sociology, which of all sciences should benefit man most, is in danger of falling into the class of polite amusements, or dead sciences. It is the object of this work to point out a method by which the breath of life may be breathed into its nostrils.”
Thus, Ward embodied what would become a distinctive characteristic of American sociology. Though devoted to developing sociology as a rigorous science, he also believed sociology had unique potential as a tool to better society. He believed that the scientific methodology of sociology should be deployed in the interest of resolving practical, real-world problems, such as poverty, which he theorized could be minimized or eliminated by systematic intervention in society.
Criticism of laissez-faire
Ward is most often remembered for his criticism of the laissez-faire theories advanced by Herbert Spencer and popular among his contemporaries. Spencer had argued that society would naturally evolve and progress while allowing the survival of the fittest and weeding out the socially unfit. Thus, social ills such as poverty would be naturally alleviated as the unfit poor were selected against; no intervention was necessary. Though originated by Spencer, these ideas were advanced in the United States by William Graham Sumner, an economist and sociologist at Yale. Ward disagreed with Spencer and Sumner and, in contrast to their laissez-faire approach, promoted active intervention.
As a political approach, Ward’s system became known as “social liberalism,” as distinguished from the classical liberalism of the 18th and 19th centuries. While classical liberalism (featuring such thinkers as Adam Smith and John Stuart Mill) had sought prosperity and progress through laissez-faire policies, Ward’s “American social liberalism” sought to enhance social progress through direct government intervention. Ward believed that in large, complex, and rapidly growing societies, human freedom could only be achieved with the assistance of a strong democratic government acting in the interest of the individual. The characteristic element of Ward’s thinking was his faith that government, acting on the empirical and scientifically based findings of the science of sociology, could be harnessed to create a near Utopian social order.
Ward had a strong influence on a rising generation of progressive political leaders, including on the administrations of Presidents Theodore Roosevelt, Woodrow Wilson, and Franklin D. Roosevelt and on the modern Democratic Party. He has, in fact, been called “the father of the modern welfare state. ” The liberalism of the Democrats today is not that of Smith and Mill, which stressed non-interference from the government in economic issues, but of Ward, which stressed the unique position of government to effect positive change. While Roosevelt’s experiments in social engineering were popular and effective, the full effect of the forces Ward set in motion came to bear half a century after his death, in the Great Society programs of President Lyndon B. Johnson and the Vietnam war.
Influence on academic sociology
Despite Ward’s impressive political legacy, he has been largely written out of the history of sociology. The thing that made Ward most attractive in the 19th century, his criticism of laissez-faire, made him seem dangerously radical to the ever-cautious academic community in early 20th century America. This perception was strengthened by the growing socialist movement in the United States, spurred by the Marxist Russian Revolution and the rise of Nazism in Europe. Ward was essentially replaced by Durkheim in the history books, which was easily accomplished because Durkheim’s views were similar to Ward’s but without the relentless criticism of laissez-faire and without Ward’s calls for a strong, central government and “social engineering.” In 1937, Talcott Parsons, the Harvard sociologist and functionalist who almost single-handedly set American sociology’s academic curriculum in the mid-20th century, wrote that “Spencer is dead,” thereby dismissing not only Spencer but also Spencer’s most powerful critic.
Lester Ward
Lester Ward, the first president of the American Sociological Association, is generally thought of as the founder of American sociological study.
1.3: Theoretical Perspectives in Sociology
1.3.1: Theoretical Perspectives in Sociology
Social theories draw the connections between seemingly disparate concepts in order to help us understand the world around us.
Learning Objective
Analyze why theory is important for sociological research
Key Points
- Theories have two components: data, and the explanation of relationships between concepts that are measured by the data.
- A theory is a proposed relationship between two or more concepts, often cause and effect.
- Sociologists develop theories to explain social phenomena.
- Sociological theory is developed at multiple levels, ranging from grand theory to highly contextualized and specific micro-range theories.
Key Terms
- sociological theory
-
A theory is a statement as to how and why particular facts are related. In sociology, sociological perspectives, theories, or paradigms are complex theoretical and methodological frameworks, used to analyze and explain objects of social study, and facilitate organizing sociological knowledge.
- cause and effect
-
Cause and effect (also written as cause-effect or cause/effect) refers to the philosophical concept of causality, in which an action or event will produce a certain response to the action in the form of another event.
- anomie
-
Alienation or social instability caused by erosion of standards and values.
Sociologists develop theories to explain social phenomena. A theory is a proposed relationship between two or more concepts. In other words, a theory is an explanation for why a phenomenon occurs.
Sociological theory is developed at multiple levels, ranging from grand theory to highly contextualized and specific micro-range theories. There are many middle-range and micro-range theories in sociology. Because such theories are dependent on context and specific to certain situations, it is beyond the scope of this text to explore each of those theories.
Sociological Theories at Work
An example of a sociological theory comes from the work of Robert Putnam. Putnam’s work focused on the decline of civic engagement. Putnam found that Americans’ involvement in civic life (e.g., community organizations, clubs, voting, religious participation, etc.) has declined over the last 40 to 60 years. While a number of factors contribute to this decline, one of the most prominent is the increased consumption of television as a form of entertainment. Putnam’s theory proposes:
The more television people watch, the lower their involvement in civic life will be.
This element of Putnam’s theory clearly illustrates the basic purpose of sociological theory. Putnam’s theory proposes a relationship between two or more concepts. In this case, the concepts are civic engagement and television watching. This is an inverse relationship – as one goes up, the other goes down; it is also an explanation of one phenomenon with another: part of the reason for the decline in civic engagement over the last several decades is that people are watching more television. In short, Putnam’s theory clearly encapsulates the key ideas of a sociological theory.
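To make the structure of such a theory concrete, here is a minimal sketch in Python that uses entirely invented survey figures; the respondents, numbers, and variable names are hypothetical illustrations of how a researcher might check for an inverse relationship, not Putnam’s actual data or analysis.

```python
# Illustrative sketch only: hypothetical survey data, not Putnam's figures.
# Each tuple is (hours of TV watched per day, civic activities joined per year)
# for one made-up respondent.
from statistics import mean

respondents = [(1, 9), (2, 7), (3, 6), (4, 4), (5, 3), (6, 1)]

tv_hours = [tv for tv, _ in respondents]
civic_acts = [civic for _, civic in respondents]

# Pearson correlation coefficient, computed by hand:
# a value near -1 indicates the inverse relationship the theory predicts.
mean_tv, mean_civic = mean(tv_hours), mean(civic_acts)
cov = sum((t - mean_tv) * (c - mean_civic) for t, c in respondents)
var_tv = sum((t - mean_tv) ** 2 for t in tv_hours)
var_civic = sum((c - mean_civic) ** 2 for c in civic_acts)
r = cov / (var_tv ** 0.5 * var_civic ** 0.5)

print(f"correlation between TV hours and civic engagement: {r:.2f}")
```

With these made-up figures the coefficient comes out strongly negative, which is the pattern the theory predicts; in real research, of course, a correlation alone would not establish the causal claim.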
Importance of Theory
Theory is the connective tissue that bridges raw data and critical thought. In the theory above, the data showed that civic engagement has declined and TV watching has increased. Data alone are not particularly informative. If Putnam had not proposed a relationship between the two elements of social life, we may not have realized that television viewing does, in fact, reduce both people’s desire to participate in civic life and the time they have available for it. In order to understand the social world around us, it is necessary to employ theory to draw the connections between seemingly disparate concepts.
Another example of sociological theorizing illustrates this point. In his now classic work, Suicide, Emile Durkheim was interested in explaining a social phenomenon, suicide, and employed both data and theory to offer an explanation. By aggregating data for large groups of people in Europe, Durkheim was able to discern patterns in suicide rates and connect those patterns with another concept (or variable), religious affiliation. Durkheim found that Protestants were more likely than Catholics to commit suicide. At this point, Durkheim’s analysis was still in the data stage; he had not proposed an explanation for the different suicide rates of the two groups. When Durkheim introduced the ideas of anomie and social solidarity, he began to explain the difference in suicide rates. Durkheim argued that the looser social ties found in Protestant religions lead to weaker social cohesion and reduced social solidarity. The higher suicide rates were the result of weakening social bonds among Protestants.
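As a rough illustration of the aggregation step in a study like Durkheim’s, the short Python sketch below groups invented regional records by religious affiliation and compares average rates; the regions and figures are fabricated for demonstration only and are not Durkheim’s data.

```python
# Illustrative sketch only: the regions and rates below are invented.
# The point is the method: aggregate records into group-level rates,
# then compare the groups.
from collections import defaultdict

# (region, majority religious affiliation, suicides per 100,000 inhabitants)
regions = [
    ("Region A", "Protestant", 19.0),
    ("Region B", "Protestant", 22.5),
    ("Region C", "Catholic", 9.5),
    ("Region D", "Catholic", 11.0),
    ("Region E", "Protestant", 20.0),
    ("Region F", "Catholic", 8.5),
]

rates_by_group = defaultdict(list)
for _, affiliation, rate in regions:
    rates_by_group[affiliation].append(rate)

for affiliation, rates in rates_by_group.items():
    print(f"{affiliation}: average rate {sum(rates) / len(rates):.1f} per 100,000")
# A higher average among Protestant regions is the kind of pattern Durkheim
# then explained with the concepts of social solidarity and anomie.
```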
While Durkheim’s findings have since been criticized, his study is a classic example of the use of theory to explain the relationship between two concepts. Durkheim’s work also illustrates the importance of theory: without theories to explain the relationship between concepts, we would not be able to understand cause and effect relationships in social life. Identifying cause-and-effect relationships is a major component of sociological theory.
Theories: Are Some Better than Others?
There are many theories in sociology, but there are several broad theoretical perspectives that are prominent in the field. These theories are prominent because they are quite good at explaining social life. They are not without their problems, but these theories remain widely used and cited precisely because they have withstood a great deal of criticism.
You might be inclined to ask, “Which theories are the best?” As is often the case in sociology, just because things are different doesn’t mean one is better than another. In fact, it is probably more useful and informative to view theories as complementary. One theory may explain one element of society better than another. Or, both may be useful for explaining social life. In short, all of the theories are correct in the sense that they offer compelling explanations for social phenomena.
Ritzer’s Integrative Micro-Macro Theory of Social Analysis
The theoretical perspectives in sociology use both micro- and macro-perspectives to understand sociological and cultural phenomena.
1.3.2: The Functionalist Perspective
The functionalist perspective attempts to explain social institutions as collective means to meet individual and social needs.
Learning Objective
Apply the functionalist perspective to issues in the contemporary world
Key Points
- In the functionalist perspective, societies are thought to function like organisms, with various social institutions working together like organs to maintain and reproduce societies.
- According to functionalist theories, institutions come about and persist because they play a function in society, promoting stability and integration.
- Functionalism has been criticized for its failure to account for social change and individual agency; some consider it conservatively biased.
- Functionalism has been criticized for attributing human-like needs to society.
- Emile Durkheim’s work is considered the foundation of functionalist theory in sociology.
- Merton observed that institutions could have both manifest and latent functions.
Key Terms
- latent function
-
the element of a behavior that is not explicitly stated, recognized, or intended, and is thereby hidden
- social institutions
-
In the social sciences, institutions are the structures and mechanisms of social order and cooperation governing the behavior of a set of individuals within a given human collectivity. Institutions include the family, religion, peer group, economic systems, legal systems, penal systems, language, and the media.
- functionalism
-
Structural functionalism, or simply functionalism, is a framework for building theory that sees society as a complex system whose parts work together to promote solidarity and stability.
- manifest function
-
the element of a behavior that is conscious and deliberate
Examples
- Before the 19th century, higher education was primarily training for clergy and the elite. But in the late 19th century, higher education transitioned to become a center for science and the general education of the masses. In other words, education began to serve a new function; it had not always served the function of preparing individuals for the labor force (with the exception of the ministry and the elite). Functionalists might respond that this transition can be explained by looking for other institutional changes that precipitated this change. For example, the 19th century also saw the blossoming of the industrial revolution. The industrial revolution and the rise of capitalism increasingly demanded technological innovation to increase profit. Technological innovation and advanced industry both required a more educated workforce. As the industrial revolution changed one aspect of society (the economy and production), it required a parallel change in other institutions (e.g., the educational system), thus bringing social life back into equilibrium. Yet critics might reply that this explanation only raises more questions: in particular, what change sparked the industrial revolution?
- Education also provides an example of Merton’s theory of manifest and latent functions. The manifest purpose of public education is to increase the knowledge and abilities of the citizenry to prepare them to contribute in the workforce. A latent function of the public education system is the development and maintenance of a class hierarchy. The most educated are often also the most affluent and enjoy privileged access to the best jobs, the best schools, the best housing, and so on. Thus, while education’s manifest function is to empower all individuals to contribute to the workforce and society, its latent function is to create and maintain inequality.
Functionalism
The functionalist perspective attempts to explain social institutions as collective means to meet individual and social needs. It is sometimes called structural-functionalism because it often focuses on the ways social structures (e.g., social institutions) meet social needs.
Functionalism draws its inspiration from the ideas of Emile Durkheim. Durkheim was concerned with the question of how societies maintain internal stability and survive over time. He sought to explain social stability through the concept of solidarity, and differentiated between the mechanical solidarity of primitive societies and the organic solidarity of complex modern societies. According to Durkheim, more primitive or traditional societies were held together by mechanical solidarity; members of society lived in relatively small and undifferentiated groups, where they shared strong family ties and performed similar daily tasks. Such societies were held together by shared values and common symbols. By contrast, he observed that, in modern societies, traditional family bonds are weaker; modern societies also exhibit a complex division of labor, where members perform very different daily tasks. Durkheim argued that modern industrial society would destroy the traditional mechanical solidarity that held primitive societies together. Modern societies, however, do not fall apart. Instead, modern societies rely on organic solidarity; because of the extensive division of labor, members of society are forced to interact and exchange with one another to provide the things they need.
The functionalist perspective continues to attempt to explain how societies maintain the stability and internal cohesion necessary to ensure their continued existence over time. In the functionalist perspective, societies are thought to function like organisms, with various social institutions working together like organs to maintain and reproduce them. The various parts of society are assumed to work together naturally and automatically to maintain overall social equilibrium. Because social institutions are functionally integrated to form a stable system, a change in one institution will precipitate a change in other institutions. Dysfunctional institutions, which do not contribute to the overall maintenance of a society, will cease to exist.
In the 1950s, Robert Merton elaborated the functionalist perspective by proposing a distinction between manifest and latent functions. Manifest functions are the intended functions of an institution or a phenomenon in a social system. Latent functions are its unintended functions. Latent functions may be undesirable, unintended consequences; conversely, manifestly dysfunctional institutions may have latent functions that explain their persistence. For example, crime seems difficult to explain from the functionalist perspective; it seems to play little role in maintaining social stability. Crime, however, may have the latent function of providing examples that demonstrate the boundaries of acceptable behavior and the function of these boundaries to maintain social norms.
Social Institutions
Functionalists analyze social institutions in terms of the function they play. In other words, to understand a component of society, one must ask, “What is the function of this institution? How does it contribute to social stability?” Thus, one can ask of education, “What is the function of education for society?” A complete answer would be quite complex and require a detailed analysis of the history of education, but one obvious answer is that education prepares individuals to enter the workforce and, therefore, maintains a functioning economy. By delineating the functions of elements of society, of the social structure, we can better understand social life.
Criticism of Functionalism
Functionalism has been criticized for downplaying the role of individual action, and for being unable to account for social change. In the functionalist perspective, society and its institutions are the primary units of analysis. Individuals are significant only in terms of their places within social systems (i.e., social status and position in patterns of social relations). Some critics also take issue with functionalism’s tendency to attribute needs to society. They point out that, unlike human beings, society does not have needs; society is only alive in the sense that it is made up of living individuals. By downplaying the role of individuals, functionalism is less likely to recognize how individual actions may alter social institutions.
Critics also argue that functionalism is unable to explain social change because it focuses so intently on social order and equilibrium in society. Following functionalist logic, if a social institution exists, it must serve a function. Institutions, however, change over time; some disappear and others come into being. The focus of functionalism on elements of social life in relation to their present function, and not their past functions, makes it difficult to use functionalism to explain why a function of some element of society might change, or how such change occurs.
1.3.3: The Conflict Perspective
Conflict theory sees society as a dynamic entity constantly undergoing change as a result of competition over scarce resources.
Learning Objective
Identify the tenets of and contributors to conflict theory, as well as the criticisms made against it
Key Points
- Conflict theory sees social life as a competition, and focuses on the distribution of resources, power, and inequality.
- Unlike functionalist theory, conflict theory is better at explaining social change, and weaker at explaining social stability.
- Conflict theory has been critiqued for its inability to explain social stability and incremental change.
- Conflict theory derives from the ideas of Karl Marx.
Key Terms
- functionalism
-
Structural functionalism, or simply functionalism, is a framework for building theory that sees society as a complex system whose parts work together to promote solidarity and stability.
- conflict theory
-
A social science perspective that holds that stratification is dysfunctional and harmful in society, with inequality perpetuated because it benefits the rich and powerful at the expense of the poor.
Examples
- A conflict theorist might ask, “Who benefits from the current higher educational system in the U.S.?” The answer, for a conflict theorist attuned to unequal distributions of wealth, is the wealthy. After all, higher education in the U.S. is not free. The educational system often screens out poorer individuals, not because they are unable to compete academically, but because they cannot afford to pay for their education. Because the poor are unable to obtain higher education, they are generally also unable to get higher paying jobs, and, thus, they remain poor. Such an arrangement translates into a vicious cycle of poverty. While a functionalist might say that the function of education is to educate the workforce, a conflict theorist might point out that it also has an element of conflict and inequality, favoring one group (the wealthy) over other groups (the poor). Thinking about education in this way helps illustrate why both functionalist and conflict theories are helpful in understanding how society works.
- In his 1982 book Power and Powerlessness: Quiescence and Rebellion in an Appalachian Valley, John Gaventa used conflict theory to explain why coal miners in Appalachia accepted such low pay and poor working conditions. Although miners belonged to unions, and although unions occasionally called strikes and even used violence, the status quo prevailed. Gaventa theorized that power does not only operate through overt force and oppression. Rather, power also operates in hidden ways. The absentee mine owners manipulated complaints and debates to downplay concerns and refocus attention on other issues. In that way, they were able to avoid any real challenges to their power.
The Conflict Perspective
The conflict perspective, or conflict theory, derives from the ideas of Karl Marx, who believed society is a dynamic entity constantly undergoing change driven by class conflict. Whereas functionalism understands society as a complex system striving for equilibrium, the conflict perspective views social life as competition. According to the conflict perspective, society is made up of individuals competing for limited resources (e.g., money, leisure, sexual partners, etc.). Competition over scarce resources is at the heart of all social relationships. Competition, rather than consensus, is characteristic of human relationships. Broader social structures and organizations (e.g., religions, government, etc.) reflect the competition for resources and the inherent inequality competition entails; some people and organizations have more resources (i.e., power and influence), and use those resources to maintain their positions of power in society.
C. Wright Mills is known as the founder of modern conflict theory. In his work, he argued that social structures are created because of conflict between differing interests. People are then impacted by the creation of social structures, and the usual result is a differential of power between the “elite” and the “others.” Examples of the “elite” would be government and large corporations. G. William Domhoff shares a similar philosophy with Mills and has written about the “power elite of America.”
Sociologists who work from the conflict perspective study the distribution of resources, power, and inequality. When studying a social institution or phenomenon, they ask, “Who benefits from this element of society?”
Conflict Theory and Change
While functionalism emphasizes stability, conflict theory emphasizes change. According to the conflict perspective, society is constantly in conflict over resources, and that conflict drives social change. For example, conflict theorists might explain the civil rights movements of the 1960s by studying how activists challenged the racially unequal distribution of political power and economic resources. As in this example, conflict theorists generally see social change as abrupt, even revolutionary, rather than incremental. In the conflict perspective, change comes about through conflict between competing interests, not consensus or adaptation. Conflict theory, therefore, gives sociologists a framework for explaining social change, thereby addressing one of the problems with the functionalist perspective.
Criticism of Conflict Theory
Predictably, conflict theory has been criticized for its focus on change and neglect of social stability. Some critics acknowledge that societies are in a constant state of change, but point out that much of the change is minor or incremental, not revolutionary. For example, many modern capitalist states have avoided a communist revolution, and have instead instituted elaborate social service programs. Although conflict theorists often focus on social change, they have, in fact, also developed a theory to explain social stability. According to the conflict perspective, inequalities in power and reward are built into all social structures. Individuals and groups who benefit from any particular structure strive to see it maintained. For example, the wealthy may fight to maintain their privileged access to higher education by opposing measures that would broaden access, such as affirmative action or public funding.
1.3.4: The Symbolic Interactionist Perspective
Symbolic interactionism looks at individual and group meaning-making, focusing on human action instead of large-scale social structures.
Learning Objective
Examine the differences between symbolic interactionism and other sociological perspectives
Key Points
- Symbolic interactionism has roots in phenomenology, which emphasizes the subjective meaning of reality.
- Symbolic interactionism proposes a social theory of the self, or a looking glass self.
- Symbolic interactionists study meaning and communication; they tend to use qualitative methods.
- Symbolic interactionism has been criticized for failing to take into account large-scale macro social structures and forces.
Key Terms
- role theory
-
assumes that people are primarily conformists who try to achieve the norms that accompany their roles; group members check each individual’s performance to determine whether it conforms with that individual’s assigned norms, and apply sanctions for misbehavior in an attempt to ensure role performance.
- behaviorism
-
an approach to psychology focusing on behavior, denying any independent significance for mind, and assuming that behavior is determined by the environment
- phenomenology
-
A philosophy based on the intuitive experience of phenomena, and on the premise that reality consists of objects and events as consciously perceived by conscious beings.
Example
- A good example of the looking glass self is a person trying on clothes before going out with friends. Some people may not think much about how others will think about their clothing choices, but others can spend quite a bit of time considering what they are going to wear. While they are deciding, the dialogue taking place inside their mind is usually a dialogue between their “self” (that portion of their identity that calls itself “I”) and that person’s internalized understanding of their friends and society (a “generalized other”). An indicator of mature socialization is when an individual quite accurately predicts how other people think about him or her. Such an individual has incorporated the “social” into the “self.”
Symbolic interactionism is a theoretical approach to understanding the relationship between humans and society. The basic notion of symbolic interactionism is that human action and interaction are understandable only through the exchange of meaningful communication or symbols. In this approach, humans are portrayed as acting, as opposed to being acted upon. The main principles of symbolic interactionism are:
- Human beings act toward things on the basis of the meanings that things have for them
- These meanings arise out of social interaction
- Social action results from a fitting together of individual lines of action
This approach stands in contrast to the strict behaviorism of psychological theories prevalent at the time it was first formulated (the 1920s and 1930s). According to symbolic interactionism, humans are distinct from infrahumans (lower animals) because infrahumans simply respond to their environment (i.e., a stimulus evokes a response or stimulus ⇒ response), whereas humans have the ability to interrupt that process (i.e., stimulus ⇒ cognition ⇒ response). Additionally, infrahumans are unable to conceive of alternative responses to gestures. Humans, however, can. This understanding should not be taken to indicate that humans never behave in a strict stimulus ⇒ response fashion, but rather that humans have the capability of responding in a different way, and do so much of the time.
This perspective is also rooted in phenomenological thought. According to symbolic interactionism, the objective world has no reality for humans; only subjectively defined objects have meaning. There is no single objective “reality”; there are only (possibly multiple, possibly conflicting) interpretations of a situation. Meanings are not entities that are bestowed on humans and learned by habituation; instead, meanings can be altered through the creative capabilities of humans, and individuals may influence the many meanings that form their society. Human society, therefore, is a social product.
The Looking Glass Self
Neurological evidence, based on EEGs, supports the idea that humans have a “social brain,” meaning that there are components of the human brain that govern social interaction. These parts of the brain begin developing in early childhood (the preschool years) and aid humans in understanding how other people think. In symbolic interactionism, this is known as “reflected appraisals” or “the looking glass self,” and refers to our ability to think about how other people will think about us. In 1902, Charles Horton Cooley developed the social psychological concept of the looking glass self. The term was first used in his work, Human Nature and the Social Order. There are three main components of the looking glass self:
- We imagine how we must appear to others
- We imagine the judgment of that appearance
- We develop our self through the judgments of others
Charles Cooley
Cooley developed the idea of the looking glass self.
Cooley clarified this concept in his writings, stating that society is an interweaving and interworking of mental selves.
In hypothesizing the framework for the looking glass self, Cooley said, “the mind is mental” because “the human mind is social.” As children, humans begin to define themselves within the context of their socialization. The child learns that the symbol of his/her crying will elicit a response from his/her parents, not only when he or she is in need of necessities, such as food, but also as a symbol to receive their attention.
George Herbert Mead described the self as “taking the role of the other,” the premise upon which the self is actualized. Through interaction with others, we begin to develop an identity about who we are, as well as empathy for others. This is the notion of “Do unto others as you would have them do unto you.” In respect to this, Cooley said, “The thing that moves us to pride or shame is not the mere mechanical reflection of ourselves, but an imputed sentiment, the imagined effect of this reflection upon another’s mind.”
It should be noted that symbolic interactionists advocate a particular methodology. Because they see meaning as the fundamental component of the interaction of human and society, studying human and social interaction requires an understanding of that meaning. Symbolic interactionists tend to employ more qualitative, rather than quantitative, methods in their research.
The most significant limitation of the symbolic interactionist perspective relates to its primary contribution: it overlooks macro-social structures (e.g., norms, culture) as a result of focusing on micro-level interactions. Some symbolic interactionists, however, would counter that the incorporation of role theory into symbolic interactionism addresses this criticism.
The Looking Glass Self
This drawing depicts the looking-glass self. The person at the front of the image is looking into four mirrors, each of which reflects someone else’s image of himself.
1.3.5: The Feminist Perspective
Feminist theory is a conflict theory that studies gender, patriarchy, and the oppression of women.
Learning Objective
Identify the main tenets of the feminist perspective and its research focus, distinguishing the three waves of feminist theory
Key Points
- Feminist theory has developed in three waves. The first wave focused on suffrage and political rights. The second focused on social inequality between the genders. The current, third wave emphasizes the concepts of globalization, postcolonialism, post-structuralism, and postmodernism.
- Third wave feminist theory critiques generalizations about sex and gender.
- Feminist theory critiques heterosexism and is closely allied with queer theory and the work of Michel Foucault.
- Feminist theory also studies the intersections of sex, gender, sexuality, race, nationality, and economic status.
- Feminism may conflict with multiculturalism. While multiculturalism necessitates the tolerance of foreign cultural practices, some of those practices may perpetuate an oppression of women that feminists find intolerable and unacceptable.
Key Terms
- multiculturalism
-
A characteristic of a society that has many different ethnic or national cultures mingling freely. It can also refer to political or social policies which support or encourage such a coexistence. Important in this is the idea that cultural practices, no matter how unusual, should be tolerated as a measure of respect.
- poststructuralism
-
an extension of structuralism influenced by the effort to deconstruct or challenge traditional categories
- postmodernism
-
any style in art, architecture, literature, philosophy, etc., that reacts against an earlier modernist movement
Example
- When was the last time you walked into a toy store? Next time you do, pause to take a look at the shelves. Most toy stores use gendered displays to market different toys to boys and girls. The section targeting girls will often be bathed in pink, with dolls, model kitchens, fake makeup sets, and other toys focused on child rearing, domestic chores, or personal hygiene and beauty. The section targeting boys will often be filled with violent toys, guns, action figures, toy monsters and aliens. It may also have toy sets for building structures, models, and robots. No formal rules keep girls from shopping in the boys section or vice versa, but the gendered marketing nevertheless reinforces gender stereotypes.
Feminism
The feminist perspective has much in common with the conflict perspective. However, instead of focusing broadly on the unequal distribution of power and resources, feminist sociology studies power in its relation to gender. This topic is studied both within social structures at large and at the micro level of face-to-face interaction, the latter of which incorporates the methodology of symbolic interactionism (popularized by Erving Goffman). Feminist scholars study a range of topics, including sexual orientation, race, economic status, and nationality. However, at the core of feminist sociology is the idea that, in most societies, women have been systematically oppressed and that men have been historically dominant. This is referred to as patriarchy.
Three Waves of Feminism
Feminist thought has a rich history, which is categorized into three waves. At the turn of the century, the first wave of feminism focused on official, political inequalities and fought for women’s suffrage. In the 1960s, second wave feminism, also known as the women’s liberation movement, turned its attention to a broader range of inequalities, including those in the workplace, the family, and reproductive rights. Currently, a third wave of feminism is criticizing the fact that the first two waves of feminism were dominated by white women from advanced capitalist societies. This movement emphasizes diversity and change, and focuses on concepts such as globalization, postcolonialism, poststructuralism, and postmodernism. Contemporary feminist thought tends to dismiss essentializing generalizations about sex and gender (e.g., women are naturally more nurturing) and to emphasize the importance of intersections within identity (e.g., race and gender). The feminist perspective also recognizes that women who suffer from oppression due to race, in addition to the oppression they suffer for being women, may find themselves in a double bind. The relationship between feminism and race was largely overlooked until the second wave of feminists produced literature on the topic of black feminism. This topic has received much more attention from third wave scholars and activists.
Feminism and Heterosexism
The feminist perspective also criticizes exclusive understandings of sexuality, such as heterosexism. Heterosexism is a system of attitudes, bias, and discrimination that favor male-female sexuality and relationships. At one point, heterosexual marriage was the only lawful union between two people that was recognized and given full benefits in the United States. This situated homosexual couples at a disadvantage, and made them ineligible for many of the government or employer-provided benefits afforded heterosexual married couples. However, heterosexism can extend far beyond government validation, as it describes a set of paradigms and institutionalized beliefs that systematically disadvantage anyone who does not fit into a normative mold. Like racism, heterosexism can operate on an institutional level (e.g., through government) and at an individual level (i.e., in face-to-face interactions). Feminist critiques of heterosexism thus align with queer theory and the ideas of Michel Foucault, who studied the relationship between power and sexuality.
Feminism and Multiculturalism
Though the feminist perspective focuses on diversity and liberation, it has been accused of being incompatible with multiculturalist policy. Multiculturalism aims to allow distinct cultures to reside together, either as distinct enclaves within ostensibly Western societies, or as separate societies with national borders. One possible consequence of multiculturalism is that certain religious or traditional practices that might disadvantage or oppress women might be tolerated on the grounds of cultural sensitivity. From the feminist perspective, such practices violate human rights and ought to be criminalized on those grounds. However, from a multiculturalist perspective, such traditions must be respected even if they seem to directly violate ideas about freedom or liberty. Controversies about this have arisen over both arranged marriages and female genital mutilation.
First-wave feminists fight for women’s suffrage
Over the years, feminist demands have changed. First-wave feminists fought for basic citizenship rights, such as the right to vote, while third wave feminists are concerned with more complex social movements, like post-structuralism.
1.3.6: Theory and Practice
Sociologists use both theory and practice to understand what is going on in the social world and how it happens.
Learning Objective
Recognize the relationship between theory and practice in sociological research
Key Points
- There is a reciprocal relationship between theory and practice in sociology.
- In practice, sociologists use an empirical approach that seeks to understand what is going on in the social world and how it happens.
- Practice, or empirical analysis, cannot stand on its own without underlying theoretical questions (the why) that guide the research.
- A theory is a proposed relationship between two or more observed phenomena.
- Grounded theory is an inductive research method that involves working upward from the data to generate a theory; it hinges on the relationship between practice and theory.
- Starting from theory runs the risk of interpreting data strictly according to the perspective of that theory, which can produce misleading results.
Key Terms
- theory
-
A coherent statement or set of ideas that explains observed facts or phenomena, or which sets out the laws and principles of something known or observed; a hypothesis confirmed by observation, experiment, etc.
- scientific method
-
A method of discovering knowledge about the natural world based in making falsifiable predictions (hypotheses), testing them empirically, and developing peer-reviewed theories that best explain the known data.
- practice
-
Actual operation or experiment, in contrast to theory.
Examples
- An example of a sociological theory is the work of Robert Putnam on the decline of civic engagement. Putnam found that Americans’ involvement in civic life (e.g., community organizations, clubs, voting, religious participation, etc.) has declined over the last 40 to 60 years. While there are a number of factors that contribute to this decline (Putnam’s theory is quite complex), one of the prominent factors is the increased consumption of television as a form of entertainment. Putnam’s theory proposes:
- The more television people watch, the lower their involvement in civic life will be.
- This element of Putnam’s theory clearly illustrates the basic purpose of sociological theory: it proposes a relationship between two or more concepts. In this case, the concepts are civic engagement and television watching. The relationship is an inverse one – as one goes up, the other goes down. What’s more, it is an explanation of one phenomenon with another: part of the reason why civic engagement has declined over the last several decades is that people are watching more television. In short, Putnam’s theory clearly encapsulates the key ideas of a sociological theory.
- Another sociologist might choose to test this theory. For example, someone might seek to explore if the same correlation could be observed in China, where in the past couple decades watching television has become an integral part of urban life.
There is a reciprocal relationship between theory and practice in sociology. In practice, sociologists use an empirical approach that seeks to understand what is going on in the social world and how it happens. These practices, however, cannot stand on their own without underlying theoretical questions (the why) that guide the research. Without theory, interesting data may be gathered without any way to explain the relationships between different observed phenomena. Sociologists go back and forth between theory and practice as advances in one require modification of the other.
Theory and Practice Explained
Practice refers to the actual observation, operation, or experiment. Practice is the observation of disparate concepts (or of a phenomenon) in need of explanation. A theory is a proposed explanation of the relationship between two or more concepts, or an explanation for how or why a phenomenon occurs.
Grounded Theory Method
Sociologists often work from an already existing theory, and seek to test that theory in new situations. In these cases, theory influences the practice of empirical research – it shapes what kinds of data will be gathered and how this data will be interpreted. This data may confirm the theory, lead to modifications of it, or disprove the theory altogether in that particular context. These changes to the theory then lead to further research.
When working from theory, sociological observation runs the risk of being directed by that theory. For example, if one is working from the perspective of a Marxist conflict theory, one might tend to interpret everything as a manifestation of bourgeoisie domination, from the patterns of seating at a school cafeteria to presidential election results.
A response to this problem was developed by two sociologists, Barney Glaser and Anselm Strauss, called grounded theory method; it is a systematic methodology in the social sciences involving the discovery of theory through the analysis of data. Grounded theory method is mainly used in qualitative research, but is also applicable to quantitative data.
Grounded theory method operates almost in reverse compared with traditional research, and at first sight may appear to be in contradiction to the scientific method. Rather than beginning with a hypothesis, the first step is data collection through a variety of methods. Using the collected data, the key points are marked with a series of codes, which are extracted from the text. The codes are grouped into similar concepts in order to make them more workable. From these concepts, categories are formed, which are the basis for the creation of a theory, or a reverse-engineered hypothesis. This contradicts the traditional model of research, where the researcher chooses a theoretical framework and only then applies this model to the phenomenon to be studied.
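As a purely schematic illustration of the workflow just described (data, then codes, then concepts, then categories), the sketch below hard-codes each step with invented interview excerpts and labels. Real grounded-theory coding is an interpretive task carried out by the researcher; nothing here is automated or drawn from an actual study.

```python
# Purely illustrative sketch of the grounded-theory workflow: raw data -> codes
# -> concepts -> categories. All excerpts and labels are invented.
excerpts = [
    "I stopped going to the neighborhood association when my work hours got longer.",
    "Most evenings we just watch TV instead of visiting friends.",
    "I still vote, but I don't really know anyone on my street anymore.",
]

# Step 1: key points in the raw data are marked with codes.
codes = {
    excerpts[0]: ["time pressure", "withdrawal from associations"],
    excerpts[1]: ["passive leisure", "reduced socializing"],
    excerpts[2]: ["formal participation", "weak neighborhood ties"],
}

# Step 2: similar codes are grouped into broader concepts to make them workable.
concepts = {
    "disengagement": ["withdrawal from associations", "reduced socializing",
                      "weak neighborhood ties"],
    "competing demands": ["time pressure", "passive leisure"],
    "residual participation": ["formal participation"],
}

# Step 3: concepts are grouped into categories, the basis for a theory that is
# generated from the data rather than imposed on it in advance.
categories = {
    "declining civic engagement": ["disengagement", "residual participation"],
    "everyday constraints": ["competing demands"],
}
print(categories)
```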
Scientific Method: Practice and Theory
Social scientists begin with an observation (a practice), then develop a hypothesis (or theory), and then devise an empirical study to test it.
1.4: The Sociological Approach
1.4.1: Sociology Today
Contemporary sociology does not have a single overarching foundation—it has varying methods, both qualitative and quantitative.
Learning Objective
Describe how the discipline of sociology has expanded since its foundation
Key Points
- The traditional focuses of sociology have included social stratification, social class, culture, social mobility, religion, secularization, law, and deviance.
- Sociology has gradually expanded its focus to include more diverse subjects, such as health, medical, military, and penal institutions, the Internet, and the role of social activity in the development of scientific knowledge.
- The linguistic and cultural turns of the mid-twentieth century led to increasingly interpretative, hermeneutic, and philosophic approaches to the analysis of society.
Key Terms
- secularization
-
The transformation of a society from close identification with religious values and institutions toward non-religious (or “irreligious”) values and secular institutions.
- hermeneutic
-
Something that explains, interprets, illustrates or elucidates.
- paradigm
-
A system of assumptions, concepts, values, and practices that constitutes a way of viewing reality.
Although sociology emerged from Comte’s vision of a discipline that would subsume all other areas of scientific inquiry, that was not to be its future. Far from replacing the other sciences, contemporary sociology has taken its place as a particular perspective for investigating human social life.
The traditional focuses of sociology have included social stratification, social class, culture, social mobility, religion, secularization, law, and deviance. As all spheres of human activity are affected by the interplay between social structure and individual agency, sociology has gradually expanded to focus on more diverse subjects such as health, medical, military and penal institutions, the Internet, and the role of social activity in the development of scientific knowledge.
The range of social scientific methodology has also expanded. Social researchers draw upon a variety of qualitative and quantitative techniques. The linguistic and cultural turns of the mid-twentieth century led to increasingly interpretative, hermeneutic, and philosophic approaches to the analysis of society. Conversely, recent decades have seen the rise of new analytically, mathematically, and computationally rigorous techniques such as agent-based modelling and social network analysis.
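To give a flavor of the computationally rigorous end of the discipline, the sketch below runs a toy social network analysis on invented friendship ties. It assumes the third-party networkx package is installed and uses degree centrality, one standard measure; it is an illustration of the technique named above, not a description of any particular study.

```python
# Minimal sketch of social network analysis on invented friendship ties.
# Requires the third-party networkx package.
import networkx as nx

friendships = [("Ana", "Ben"), ("Ana", "Cruz"), ("Ben", "Cruz"),
               ("Cruz", "Dee"), ("Dee", "Eli")]
G = nx.Graph(friendships)

# Degree centrality: the share of other actors each person is directly tied to.
for person, centrality in sorted(nx.degree_centrality(G).items()):
    print(f"{person}: {centrality:.2f}")
```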
Presently, sociological theories lack a single overarching foundation, and there is little consensus about what such a framework should consist of. However, a number of broad paradigms cover much modern sociological theorizing. In the humanistic parts of the discipline, these paradigms are referred to as social theory, often shared with the humanities. The discipline’s dominant scientifically-oriented areas generally focus on a different set of theoretical perspectives, generally referred to as sociological theory. These include new institutionalism, social networks, social identity, social and cultural capital, toolkit and cognitive theories of culture, and resource mobilization. Analytical sociology is an ongoing effort to systematize many of these middle-range theories.
American Sociological Association
The American Sociological Association annual meetings are a way for contemporary sociologists to share their work and discuss the future of the discipline.
1.4.2: Levels of Analysis: Micro and Macro
Sociological study may be conducted at both macro (large-scale social processes) and micro (small group, face-to-face interactions) levels.
Learning Objective
Analyze how symbolic interactionism plays a role in both macro and micro sociology
Key Points
- Macro-level sociology looks at large-scale social processes, such as social stability and change.
- Micro-level sociology looks at small-scale interactions between individuals, such as conversation or group dynamics.
- Micro- and macro-level studies each have their own benefits and drawbacks.
- Macrosociology allows observation of large-scale patterns and trends, but runs the risk of seeing these trends as abstract entities that exist outside of the individuals who enact them on the ground.
- Microsociology allows for this on-the-ground analysis, but can fail to consider the larger forces that influence individual behavior.
Key Terms
- microsociology
-
Microsociology involves the study of people in face-to-face interactions.
- symbolic interactionism
-
Symbolic interactionism is the study of the patterns of communication, interpretation, and adjustment between individuals.
- macrosociology
-
Macrosociology involves the study of widespread social processes.
Examples
- There are many examples of both micro- and macrosociological studies. One of the most famous early micro-sociological studies was “The Cab Driver and His Fare,” published in 1959 by Fred Davis. Davis spent six months working as a taxi driver in Chicago, and observed the interactions between himself and his fares (the people who hired his taxi services). He found that the relationship between taxi driver and fare was unique because it was so short, random, and unlikely to be repeated. Given these characteristics of the relationship, how could riders trust that drivers would take them where they wanted without cheating them, and how could drivers trust that riders would pay and tip them fairly at the end of the trip? Davis suggested that much of the interaction between driver and rider boiled down to resolving these issues and attempting to ensure trust.
- Dramaturgical analysis can be used to explain many types of social interactions. Consider, for example, how front and back stage spaces are managed during a visit to the doctor. When you arrive at the doctor’s office, you are on stage as you present yourself to the receptionist. As you are shown to an exam room, you are briefly ushered into a back stage space. The attendant leaves and you briefly relax as you change into an exam gown and prepare yourself for your next performance, which begins when the doctor enters the room or pushes back the curtain. Once again, you are on stage.
Sociological approaches are differentiated by the level of analysis. Macrosociology involves the study of widespread social processes. Microsociology involves the study of people at a more interpersonal level, as in face-to-face interactions.
The macro-level study of widespread social processes has been the more dominant approach, and has been practiced since sociology’s origins in the founding work of figures like Emile Durkheim. Durkheim, for example, studied the large-scale shift from homogenous traditional societies to industrialized societies, where each individual played a highly specialized role. The tendency toward macrosociology is evident in the kinds of questions that early sociologists asked: What holds societies together? How are norms (and deviance) established and handled by societies? What factors lead to social change, and what are the results of this change? Macrosociologists focus on society as a whole, as something that is prior to, and greater than, the sum of individual people.
Studying social life on the micro-level is a more recent development (in the early and mid-twentieth century) in the history of the field, and was pioneered by proponents of the symbolic interactionism perspective, namely George Herbert Mead, Herbert Blumer, and Erving Goffman. Mead was a pragmatist and behaviorist, which means several things.
- To pragmatists, true reality does not exist “out there” in the real world. It “is actively created as we act in and toward the world.”
- People remember and base their knowledge of the world on what has been useful to them, and are likely to alter what no longer “works.”
- People define the social and physical “objects” they encounter in the world according to their use for them.
- If we want to understand actors, we must base that understanding on what people actually do.
Blumer built on Mead’s work. He believed that individuals create their own social reality through collective and individual action, and that the creation of social reality is a continuous process. Goffman elaborated on both Mead and Blumer by formulating the dramaturgical approach. He saw a connection between the acts people put on in their daily life and theatrical performances. In social interaction, as in theatrical performance, there is a front region where the “actors” (individuals) are on stage in front of the audience. This is where the positive aspect of the idea of self and desired impressions is highlighted. There is a back region, or stage, that can also be considered a hidden or private place where individuals can be themselves and step out of their role or identity in society. Face-to-face interactions are, thus, a stage where people perform roles and practice impression management (i.e., “saving face”). Other scholars have since developed new research questions and methods for studying micro-level social processes.
Micro- and macro-level studies each have their own benefits and drawbacks. Macrosociology allows observation of large-scale patterns and trends, but runs the risk of seeing these trends as abstract entities that exist outside of the individuals who enact them on the ground. Microsociology allows for this on-the-ground analysis, but can fail to consider the larger forces that influence individual behavior.
A Taxonomy of Sociological Analysis
Sociological analysis can take place at the macro or micro level, and can be subjective or objective.
1.4.3: Applied and Clinical Sociology
Applied or clinical sociology uses sociological insights or methods to guide practice, research, or social reform.
Learning Objective
Identify ways sociology is applied in the real world
Key Points
- Sociological research can be divided into pure research and applied research. Pure research has no motive other than to further sociological knowledge, while applied research has a direct practical end.
- Applied research may be put into the service of the corporate world, governmental and international agencies, NGOs, or clinical work. In all these instances, applied sociologists use sociological theories and methods to further the goals of the organizations they work for.
- One budding area in modern retail firms is site selection, or the determination of the best locations for new stores. Site selection requires understanding human ecology and consumer spending patterns, both of which are addressed using the sociological imagination.
- Clinical sociology involves the study of groups of people, using learned information in case and care management towards holistic life enrichment or the improvement of social and life conditions. Clinical sociologists usually focus on vulnerable population groups, such as children, youth, or the elderly.
Key Terms
- Site Selection
-
Site selection indicates the practice of new facility location, both for business and government. Site selection involves measuring the needs of a new project against the merits of potential locations.
- clinical sociology
-
Clinical sociology courses give students the skills to work effectively with clients, teach basic counseling skills, provide knowledge that is useful for careers such as victim assistance and drug rehabilitation, and teach students how to integrate sociological knowledge with other fields. Students may go into such areas as marriage and family therapy and clinical social work.
- Sociotherapist
-
A sociotherapist practices sociotherapy, a form of social work and sociology that involves the study of groups of people, their constituent individuals, and their behavior, using learned information in case and care management towards holistic life enrichment or the improvement of social and life conditions.
Example
- Applied sociologists work in all kinds of fields. One example is applied demography and population control. During the mid-twentieth century, the population of developing nations was growing rapidly and policymakers the world over were concerned about the effects of overpopulation. To address this problem, governments and international organizations, such as the UN, worked with sociologists and demographers (sociologists who study population) to devise strategies to reduce population growth. Sociologists led campaigns to distribute contraception, modernize countries, and encourage education and equal rights for women. Together, these efforts began to slow population growth. The strategies were based on sociological findings that fertility rates are lower in modern industrial or post-industrial economies, where people put off having children in order to pursue education and economic opportunities. They were also based on sociological findings that fertility rates are lower when women have opportunities to pursue education and work outside the home. Thus, applied sociologists took findings from pure research and applied them to solving real-world problems.
Researchers often differentiate between “pure” and “applied” research. Presumably, pure research has no direct ends other than adding to the knowledge pool, whereas applied research is put toward some practical end, such as working for a marketing firm to understand the relationship between race and consumption patterns or working for a government agency to study the reasons why poverty continues to exist. Of course, the line between pure and applied research is often blurred. For example, “pure” researchers in a university might get government funding to do their research projects, which somewhat complicates their commitment to do pure research. Outside the academic world, sociologists apply their skills in a variety of settings. Here, we will discuss the possibilities of applied sociology and one subfield, clinical sociology.
Sociologists can be found working in a wide range of fields, including organizational planning, development, and training; human resource management; industrial relations; marketing; public relations; organizational research; and international business. In all these instances, they apply sociological theories and methods toward understanding social relations and human behavior to further the goals of the organization they work for, whether it is a business, a governmental agency, or a non-profit organization.
The Corporate World
Some sociologists find that adapting their sociological training and insights to the business world is relatively easy. Corporations want and need to understand their customers’ habits and preferences in order to anticipate changes in their markets. This drive to understand consumers is called consumer research and is a growing interest of corporations. Sociologists are particularly well suited to apply their quantitative and qualitative understanding of human behavior to this field.
Another budding area in modern retail firms is site selection, or the determination of the best locations for new stores. Site selection requires understanding human ecology and consumer spending patterns, both of which are addressed using the sociological imagination. Some additional direct applications of sociology include concept and product testing (which puts training in research methods to good use), the evaluation of global market opportunities (which draws upon understandings of various cultures), long-range planning and forecasting (which draws on both statistics and futurist perspectives), marketing and advertising (which applies consumer studies directly), and human resource management (which relies on studies of organizational behavior).
Governmental and International Agencies
Outside of the corporate world, sociology is often applied in governmental and international agencies such as the World Bank or United Nations. For example, a sociologist might work compiling and analyzing quantitative demographic data from the U.S. Census Bureau to understand patterns of population change. Or a sociologist might work for the United Nations to research global health trends and the efficacy of current public health initiatives.
Non-Governmental Organizations
Non-Governmental Organizations (or NGOs) are legally constituted organizations created by private persons or organizations with no participation or representation of any government. Examples of NGOs include Oxfam, Catholic Relief Services, CARE International, and Lutheran World Relief. Many NGOs are concerned with the very social problems and social issues that sociologists study, from poverty to gender stratification to world population growth. Sociologists play important roles in the work of NGOs, from community organizing to direct relief to lobbying, as they are able to apply sociological approaches (for example, the conflict approach) to understand the structural patterns that have led to current social problems.
Clinical Sociology
Clinical sociology involves the study of groups of people, using learned information in case and care management towards holistic life enrichment or the improvement of social and life conditions. A clinical sociologist, who might also be called a sociotherapist or life enrichment therapist, is usually concurrently a member of another relevant profession: medical doctor, psychiatrist, psychologist, nurse, social worker, criminologist, or activity and recreation professional, among others. Clinical sociologists usually focus on vulnerable population groups, such as children, youth, or the elderly, and are employed in various settings such as treatment facilities or life care communities like nursing homes. They are directly involved in case management and care planning.
Jane Addams, Applied Sociologist
Jane Addams is considered by many to be one of the earliest sociologists, though her contributions were mostly to the application of sociology to social work.
1.4.4: The Significance of Social Inequality
Sociologists study many types of inequality, including economic inequality, racial/ethnic inequality, and gender inequality.
Learning Objective
Describe different types of social inequality
Key Points
- People experience inequality throughout the life course, beginning in early childhood.
- Inequality early in life can affect life chances for the rest of one’s life.
- Inequality means people have unequal access to scarce and valued resources in society. These resources might be economic or political, such as health care, education, jobs, property and land ownership, housing, and ability to influence government policy.
Key Terms
- social stratification
-
The hierarchical arrangement of social classes, or castes, within a society.
- inequality
-
An unfair, unequal state of affairs.
When we are growing up, we might hear our parents talk about others as being from the “wrong side of the tracks” or not being “our kind.” We also become aware of what kinds of toys other children have (or don’t have), how they dress, what kinds of houses they live in, and what jobs their parents hold, and because of these differences, some people are treated differently and have better opportunities than others. We see differences in elementary schools and high schools in our city. If our parents belong to the upper class, we have a good chance of graduating high school and entering higher education. The more education we have, the more active we will be in political life, the more traditional and conservative our religious affiliation, the more likely we are to marry into a family with both economic and social capital, and the more likely we are to eat better food, be less exposed to unhygienic conditions, and be able to pay for good health care. Social stratification and inequality are everywhere and impact us throughout our lives.
Sociology has a long history of studying stratification and teaching about various kinds of inequality, including economic inequality, racial/ethnic inequality, gender inequality, and other types of inequality. Inequality means people have unequal access to scarce and valued resources in society. These resources might be economic or political, such as health care, education, jobs, property and land ownership, housing, and ability to influence government policy.
Statistics on United States and global inequality are alarming. Consider this:
- Just 400 Americans have the same wealth as half of all Americans combined.
- Just 25 Americans have a combined income almost as great as the combined income of 2 billion of the world’s poor.
- In 2007, more than 37 million U.S. citizens, or 12.5% of the population, were classified as poor by the Census Bureau.
- In 2007, CEOs in the Fortune 500 received an average of $10.5 million, 344 times the pay of the average worker.
- Four of the wealthiest people in the world come from one family, the Waltons. They are the four children who inherited Sam Walton’s company, Wal-Mart. Together, they are worth $83.6 billion.
- Half of American children will reside in a household that uses food stamps at some point during childhood.
- Life expectancy in Harlem is shorter than in Bangladesh.
Although inequality is everywhere, there are many controversies and questions about inequality that interest sociologists: Where did inequality come from? Why does it continue? Do we justify inequality? Can we eliminate inequality? Can we make a society in which people are equal? The sociological approach gives us the methodological and theoretical tools to begin to answer these questions.
Cape Verde Water
The water situation in Cape Verde, an island country in the central Atlantic, is a poignant illustration of global social inequality. Most of the population in Cape Verde collects water at public water channels.
Income inequality
This chart shows the proportion of total income that goes to the richest 1% of Americans. After the Great Depression, this proportion fell as New Deal policies helped distribute income more evenly. But since the 1980s, the proportion has risen rapidly, so that by 2007, the richest 1% of Americans earned almost a quarter of total income in the United States.
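The statistic plotted in the chart is straightforward to compute once income data are available. The sketch below shows the calculation on synthetic, skewed incomes (assuming NumPy is installed); the numbers are invented and will not match the historical series described in the caption.

```python
# Illustrative computation of the income share of the richest 1%.
# The incomes are synthetic, drawn from a skewed distribution, not real data.
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.5, sigma=1.0, size=100_000)  # skewed, income-like

cutoff = np.quantile(incomes, 0.99)                # threshold for the top 1%
top_share = incomes[incomes >= cutoff].sum() / incomes.sum()
print(f"Share of total income going to the top 1%: {top_share:.1%}")
```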
1.4.5: Thinking Globally
Increasingly, sociologists are turning their attention to the world at large and developing theories of global processes.
Learning Objective
Discuss different sociological approaches to the study of global processes
Key Points
- World systems theory refers to the international division of labor among core countries, semi-periphery countries, and periphery countries and argues that inequality among countries comes not from inherent differences but from relationships of domination.
- Sociologists also study the globalization of norms and culture through dynamics such as world society.
- Sociologists may also study globalization from below, or grassroots mobilization, including glocalization and hybridization.
Key Terms
- globalization
-
A common term for processes of international integration arising from increasing human connectivity and interchange of worldviews, products, ideas, and other cultural phenomena. In particular, advances in transportation and telecommunications infrastructure, including the rise of the Internet, represent major driving factors in globalization and precipitate the further interdependence of economic and cultural activities.
- glocalization
-
The global distribution of a product or service that is tailored to local markets.
- international division of labor
-
The international division of labor is an outcome of globalization. It is the spatial division of labor which occurs when the process of production is no longer confined to national economies.
Examples
- Immanuel Wallerstein intended world systems theory to explain the entirety of world history, but it can also be applied to specific examples. Consider China’s current investments in Africa, which many observers have characterized as neocolonial. Chinese companies have invested in Africa, building infrastructure, hiring workers, and obtaining rights to extract oil and minerals. On the one hand, these investments could be considered free market trade: after all, China has paid for labor and mining rights. On the other hand, these investments can be seen as a relationship of domination. China has more wealth and more power than many African countries. In that sense, African countries are dependent on outside investment. They lack the capital to adequately exploit and market their own resources or to adequately provide for citizens, so they turn to outside investors to develop and extract resources. But the profit from this enterprise flows to China, not Africa. And although Africans may gain some capital from the exchange, they lack the domestic industries to build many of the consumer goods people desire, so much of the capital is returned to China as payment for goods like clothing or electronics. In this example, China is the core country, which gathers resources from and sells goods back to Africa, the periphery.
- John Meyer argued that, in world society, norms and values spread across the globe. Thus, we find similar institutions in far-flung places. But, when norms are cross-applied to new contexts, they may be implemented in strange ways. For example, the value of democracy and the norm of elections have spread widely, and elections are now held across the globe. These elections appear similar on the surface: people go to polling places, cast votes, and elect leaders. But on closer inspection, superficially similar global practices may be at odds with local culture. Thus, in traditionally authoritarian countries, leaders may be elected with upwards of 90% of the vote, preserving, in effect, the local tradition of authoritarianism while adopting the global norm of elections.
Thinking Globally
Historically, sociologists have tended to focus their work on individual countries, studying the social processes and structures within a single country. Some current scholars criticize that approach as “methodological nationalism” because it fails to consider the global connections and patterns that shape local and national situations. In addition, sociology has traditionally focused on Western societies, but has recently expanded its focus to non-Western societies. These shifts illustrate the fact that it is no longer possible to study social life without thinking globally. Contemporary societies have become so porous and interconnected (a process that scholars have termed globalization) that to ignore the global patterns would be to present an incomplete picture of any social situation.
Globalization
Global processes touch all corners of the world, including this mall in Jakarta, Indonesia, where the fast-food business model originating in the United States is now a part of everyday life.
World Systems Theory
Thinking globally in sociology could entail a variety of different approaches. Some scholars use world systems theory. World systems theory stresses that the world system (not nation states) should be the basic unit of social analysis. The world-system refers to the international division of labor, which divides the world into core countries, semi-periphery countries, and periphery countries. Core countries focus on higher-skill, capital-intensive production, while the rest of the world focuses on low-skill, labor-intensive production and the extraction of raw materials. This constantly reinforces the dominance of the core countries. Nonetheless, the system is dynamic, and individual states can gain or lose their core (semi-periphery, periphery) status over time. At any given time, a single country may serve as the world hegemon; over the last few centuries, this status has passed from the Netherlands to the United Kingdom and, most recently, to the United States.
The most well-known version of the world system approach was developed by Immanuel Wallerstein in the 1970s and 1980s. Wallerstein traces the rise of the world system from the 15th century, when the European feudal economy suffered a crisis and was transformed into a capitalist one. Europe (the West) utilized its advantages and gained control over most of the world economy, presiding over the development and spread of industrialization and the capitalist economy, indirectly resulting in unequal development.
Other approaches that fall under world systems theory include dependency theory and neocolonialism. Dependency theory takes the idea of the international division of labor and states that peripheral countries are not poor because they have failed to develop, but rather are poor because of the very nature of their relationship with core countries. This relationship is exploitative, as the resources peripheral countries need in order to develop are funneled to core countries. Poor countries are thus in a continual state of dependency on rich countries.
Dependency Theory
According to dependency theory, unequal exchange results in the unequal status of countries. Core countries accumulate wealth by gathering resources from and selling goods back to the periphery and semi-periphery.
Neocolonialism (also known as neoimperialism) also argues that poor countries are poor not because of any inherent inadequacy. Neocolonialism emphasizes the unequal relationships between former colonizing countries and colonized regions. Domination (not just economic, but also cultural and linguistic) still continues to occur even though poor countries are no longer colonies.
Global Institutions
The top-down approach is not only used to study the global economy, but also social norms. Sociologists who are interested in global social norms focus their attention on global institutions, such as the United Nations, the World Health Organization, the International Monetary Fund, or various other international organizations, such as human rights groups.
John Meyer, a Stanford sociologist, is one such scholar. Meyer coined the term “world society” (or “world polity”) to describe scripts, models, and norms for behavior that originate from global institutions and that transcend the nation state. These norms form a global civil society that operates independently of individual nations and to which individual nations often strive to conform in order to be recognized by the international community.
Globalization from Below
Another approach to studying globalization sociologically is to examine on-the-ground processes. Some sociologists study grassroots social movements, such as non-governmental organizations which mobilize on behalf of equality, justice, and human rights. Others study global patterns of consumption, migration, and travel. Still others study local responses to globalization.
Two ideas that have emerged from these studies are glocalization and hybridization. Glocalization was a term coined by a Japanese businessman in the 1980s and is a popular phrase in the transnational business world. It refers to the ability to make a global product fit a local market. Hybridization is a similar idea, emerging from the field of biology, which refers to the way that various sociocultural forms can mix and create a third form which draws from its sources, but is something entirely new.
The possibilities for thinking globally in sociology are as varied as the world we live in: global finance, global technology, global cities, global medicine, global food. The list is endless. If we examine any social situation closely, the global patterns and linkages behind it will undoubtedly emerge.
Chapter 18: Foreign Policy
18.1: Foreign Policy
18.1.1: Foreign Policy
A country’s foreign policy includes all of the policies it develops to pursue its national interests as it interacts with other countries.
Learning Objective
Compare and contrast the elements of U.S. foreign policy and how they have changed over time
Key Points
- A state’s national interests are its primary goals and ambitions (economic, military, or cultural). Foreign policies are implemented to ensure that these national interests are met.
- In the past, foreign policy was primarily military-related. Now, in a globalized world, foreign policies involve other areas as well, such as trade, finance, human rights, and environmental issues.
- In the U.S., the executive branch is in charge of foreign policy, and the Secretary of State deals with the day-to-day diplomacy involved in formulating foreign policy. Congress also oversees some areas of foreign policy.
- Two primary visions of foreign policy in the U.S. have been isolationism, and more recently, internationalism.
- The U.S. has to deal with numerous foreign policy issues such as dependence on oil, the AIDS epidemic, Middle East instability, terrorism, a growing trade deficit, tense relations with Russia, drug violence with Mexico, and numerous other issues.
Key Terms
- national interest
-
A country’s goals and ambitions, whether economic, military, or cultural. Foremost are the state’s survival, welfare, and security. Also important is the pursuit of wealth, economic growth, and power.
- globalization
-
The process of international integration arising from the interchange of world views, products, ideas, and other aspects of culture; advances in transportation and telecommunications infrastructure, including the rise of the Internet, are major factors that precipitate interdependence of economic and cultural activities.
- foreign policy
-
A government’s policy relating to matters beyond its own jurisdiction: usually relations with other nations and international organisations.
What is Foreign Policy?
A country’s foreign policy consists of self-interest strategies chosen by the state to safeguard its national interests and to achieve its own goals through relations with other countries. These approaches are employed strategically in the state’s interactions with other countries.
In recent times, due to the deepening level of globalization and transnational activities, states also have to interact with non-state actors. This interaction is evaluated and monitored in an attempt to maximize the benefits of multilateral international cooperation. Since national interests are paramount, foreign policies are designed by the government through high-level decision-making processes. National interests can be advanced through peaceful cooperation with other nations or through exploitation.
Elements of Foreign Policy
Foreign policy is designed to protect the national interests of the state. Modern foreign policy has become quite complex. In the past, foreign policy may have concerned itself primarily with policies solely related to national interest–for example, military power or treaties. Currently, foreign policy encompasses trade, finance, human rights, environmental, and cultural issues. All of these issues, in some way, impact how countries interact with one another and how they pursue their national interests worldwide.
Who Is in Charge of Foreign Policy?
Usually, creating foreign policy is designated to the head of government and the foreign minister (or equivalent). In some countries the legislature also has considerable oversight.
In the United States, foreign policy is made and carried out by the executive branch, particularly the president, with the national security adviser, the State Department, the Defense Department, the Department of Homeland Security, and the intelligence agencies. The National Security Act of 1947 and recent bureaucratic reorganization after 9/11 reshaped the structure of foreign policy making.
The U.S. Secretary of State is analogous to the foreign minister of other nations and is the official charged with state-to-state diplomacy, although the president has ultimate authority over foreign policy. The current U.S. Secretary of State is John Kerry.
Secretary of State
Former U.S. Secretary of State Hillary Rodham Clinton discusses agriculture and environmental issues in Kenya. The Secretary of State is a primary leader in determining U.S. foreign policy.
Congress is involved in foreign policy through its amending, oversight, and budgetary powers and through the constitutional power related to appointments, treaties, and war that it shares with the president. While Congress has sometimes worked to limit the president’s autonomy in foreign policy, the use of executive orders and the ability to enter military engagements without formal declarations of war have ensured the president’s continued primacy in international affairs. Forces that sometimes influence foreign and military policies from outside government are think tanks, interest groups, and public opinion.
U.S. Foreign Policy
The foreign policy of the United States is the way in which it interacts with foreign nations. Foreign policy sets standards of interaction for its organizations, corporations, and individual citizens. Two visions of foreign policy in the U.S. are isolationism and internationalism; the latter has been dominant since World War II. The main foreign policies during the Cold War were containment, deterrence, détente, arms control, and the use of military force, as in Vietnam.
U.S. foreign policy is far-reaching because the United States is the global superpower and world leader. It operates in a world beset by famine, poverty, disease, and catastrophes both natural (tsunamis, earthquakes) and man-made (climate change, pollution of the seas and skies, and release of radioactive materials from nuclear plants). Parts of the world are plagued by genocide, regional and ethnic strife, and refugees. Terrorism, conflicts in Iraq and Afghanistan, the nuclear weapons programs of Iran and North Korea, the proliferation of weapons of mass destruction, the Arab-Israeli conflict, and instability and challenges to autocratic rulers in the Middle East are only the most obvious of the foreign policy issues that affect the United States. Other issues include economic upheavals, the rise of China to world economic and political power, relations with Russia, AIDS in Africa, dependence on oil from non-democratic states, the importation of illegal drugs, and the annual U.S. trade deficit of around $800 billion.
Obama and Putin
U.S. President Obama and Russian President Putin meet. Relations with other countries, such as the U.S.-Russia relationship, are a primary concern and focal point for U.S. foreign policy.
To prepare for these foreign policy issues, U.S. military expenditures are enormous. The annual defense budget is around $1.3 trillion. The United States has formal or informal agreements to defend 37 countries, and it has more than 700 military installations abroad in approximately 130 countries. The United States is extraordinarily active, often militarily, in international affairs. Since 1989, it has intervened in Panama, Kuwait, Somalia, Bosnia, Haiti, Kosovo, Afghanistan, and Iraq.
U.S. Military Strength
U.S. soldiers patrolling streets in Iraq. The United States’ huge military budget and extensive military is intended to further U.S. foreign policy interests.
18.1.2: National Security Policy
National security policies, designed to protect the state, include military security as well as non-military security.
Learning Objective
Explain the tension that exists between national security and civil and political rights
Key Points
- To ensure national security, a state must possess military power. However, a state must also be economically, politically, and environmentally secure.
- National security policy became a prominent policy in the US after World War II, when President Truman signed legislation that established many of the current national security agencies, like the CIA.
- Pursuing national security can lead to a tension between the state’s need to protect itself and individual freedoms. For example, after September 11, the USA PATRIOT Act was passed, significantly expanding the government’s powers to fight terrorism but also decreasing individuals’ right to privacy.
- Current national security problems facing the United States include the Drug War in Mexico, domestic terrorism, instability in the Middle East, the national debt, and the recent economic recession, among many others.
Key Terms
- USA PATRIOT Act
-
Signed by President Bush in 2001, the PATRIOT Act was a response to the terrorist attacks of September 11th. It significantly reduced restrictions on the power of law enforcement agencies to gather intelligence, deport immigrants, and monitor financial transactions.
- diplomacy
-
The art and practice of conducting international relations by negotiating alliances, treaties, agreements, etc., bilaterally or multilaterally, between states and sometimes international organizations or even between polities with varying statuses, such as those of monarchs and their princely vassals.
- national security
-
The safety of a country as managed through the exercise of economic and political power, intelligence agencies and diplomacy.
National Security Policies
National security policies are policies related to the survival of the state. This security is guaranteed through the use of economic coercion, diplomacy, political power, and the projection of power. This concept developed primarily in the United States after World War II.
Initially focused on military might, national security now encompasses a broad range of concerns. In order to possess national security, a nation needs to possess economic security, energy security, and environmental security, in addition to a strong military. Security threats involve not only conventional foes, such as other nation-states, but also non-state actors, such as violent groups (al-Qaeda, for example), narcotics cartels, multinational corporations, and non-governmental organizations. Some authorities include natural disasters and other environmentally detrimental events in this category.
Policies and measures taken to ensure national security include:
- Using diplomacy to rally allies and isolate threats
- Marshaling economic power to facilitate or compel cooperation
- Maintaining effective armed forces
- Implementing civil defense and emergency preparedness measures (this includes anti-terrorism legislation)
- Ensuring the resilience of a critical national infrastructure
- Using intelligence services to defeat threats, and,
- Using counterintelligence services to protect the nation from internal threats.
History of National Security Policy
The concept of national security became an official guiding principle of US foreign policy when the National Security Act of 1947 was signed on July 26, 1947, by President Harry S. Truman. Together with its 1949 amendment, this act instantiated important organizations dedicated to American national security, such as the precursor to the Department of Defense. It also subordinated all military branches to the new cabinet level position of the Secretary of Defense, established the National Security Council, and established the Central Intelligence Agency.
CIA Headquarters
In 1949, the Central Intelligence Agency (headquarters depicted here) was established to further the United States’ national security.
Current National Security Policies
In 2010, Barack Obama included an all-encompassing world-view in his definition of America’s national security interests. His statement prioritized the following.
- The security of the United States, its citizens, and U.S. allies and partners
- A strong, innovative U.S. economy in an open international economic system that promotes opportunity and prosperity
- Respect for universal values at home and around the world
- An international order advanced by U.S. leadership that promotes peace, security, and opportunity through a stronger cooperation to meet global challenges, and,
- Prevention of polarization between Republicans and Democrats
Current national security concerns in the U.S. include the Drug War in Mexico, terrorism, instability in the Middle East, the national debt, and global warming, among others.
Elements of National Security
Military security was the earliest recognized form of national security. Military security implies the capability of a nation to defend itself and/or deter military aggression. Military security also implies the ability of a nation to enforce its policy choices through the use of military force.
US Military Security
Traditionally, military strength has been considered the most important component of national security policies.
The political aspect of security is another important facet of national security. Political security concerns the stability of the social order, and refers to policies related to diplomacy, negotiation, and other interactions.
Economic security is also a part of national security. In today’s complex system of international trade, characterized by multi-national agreements, mutual inter-dependence, and limited natural resources, economic security refers to whether or not a nation is free to develop its own economy in the manner desired. Economic security today is, arguably, as important a part of national security as military security.
Environmental security deals with environmental issues. While all environmental events are not considered significant enough to be categorized as threats, many transnational issues, both global and regional, stand to affect national security. These include global warming, deforestation, or conflicts over limited resources.
Energy security, as it relates to natural resources, is a final important component of national security. For a nation to be able to develop its industry and maintain economic competitiveness, it must have available and affordable natural resources.
Tension: Rights Versus Security
Measures adopted to maintain national security have led to an ongoing tension between the preservation of the state and the rights and freedoms of individual citizens within that state. Although national security measures are imposed to protect society as a whole, many such measures restrict the rights and freedoms of individuals in society. Many are concerned that if national security policies are not subject to good governance, the rule of law, and strict checks and balances, “national security policy” may simply serve as a pretext for suppressing unfavorable political and social views.
This Phone Is Tapped
The caption on this pay phone reads, “Your conversation is being monitored by the U.S. Government courtesy of the US Patriot Act of 2001.” The PATRIOT Act is an example of the tension between protecting national security and promoting citizens’ rights.
In the United States, the controversial USA PATRIOT Act, as well as other recent government actions, has brought some of these issues to public attention. These issues have raised two main questions. First, to what extent, for the sake of national security, should individual rights and freedoms be restricted? Second, can the restriction of civil rights for the sake of national security be justified?
18.1.3: Diplomacy
Diplomacy refers to the art and practice of conducting negotiations and developing relationships between states.
Learning Objective
Explain how diplomatic recognition and informal diplomacy are tools of foreign policy
Key Points
- In diplomacy, representatives of states communicate on topics such as human rights, trade conditions, or war and peace. Diplomacy usually involves the negotiation of treaties, alliances, and organizations pertaining to these topics.
- Diplomatic recognition is an important aspect of diplomacy. Being formally recognized as a sovereign state is important for peaceful relationships and for participation in the world, as the situation in Taiwan demonstrates.
- In informal diplomacy, states communicate with each other through non-governmental means. For example, academics, members of think tanks, or former politicians may serve as channels of informal diplomacy.
- Diplomacy is often considered to be important in creating “soft power,” which is political influence based on non-military power.
Key Terms
- soft power
-
Political influence that is extended by means of diplomacy, international assistance, cultural exchanges, etc., rather than by such “hard” means as military intervention or punitive economic measures.
- diplomacy
-
The art and practice of conducting international relations by negotiating alliances, treaties, agreements, etc., bilaterally or multilaterally, between states and sometimes international organizations or even between polities with varying statuses, such as those of monarchs and their princely vassals.
- diplomat
-
A person who is accredited, such as an ambassador, to officially represent a government in its relations with other governments or international organizations.
What is Diplomacy?
Diplomacy is the art and practice of conducting negotiations between representatives of groups or states. It usually refers to the conduct of international relations through the intercession of professional diplomats with regard to issues of peace-making, trade, war, economics, culture, environment, and human rights. International treaties are usually negotiated by diplomats prior to endorsement by national politicians.
Signing Treaties
Obama and Afghanistan President Hamid Karzai sign a strategic partnership agreement. One of the main objectives of diplomacy and diplomatic negotiations is signing and negotiating treaties with other countries. If negotiation by national diplomats is successful, the national leaders (as depicted here) sign the treaties.
To some extent, the use of all other tools of international relations can be considered a failure of diplomacy. Keep in mind, however, that the use of these other tools is part of the communication and negotiation inherent in diplomacy. Sanctions, force, and adjustments to trade regulations, while not typically considered part of diplomacy, are actually valuable tools for gaining leverage and position in negotiations.
Diplomatic Recognition
Diplomatic recognition is an important element of diplomacy because recognition often determines whether a nation is an independent state. Receiving recognition is usually difficult, even for countries which are fully sovereign.
Today there are a number of independent entities without widespread diplomatic recognition, most notably the Republic of China (ROC)/Taiwan on Taiwan Island. Since the 1970s, most nations have stopped officially recognizing the ROC’s existence on Taiwan, at the insistence of the People’s Republic of China (PRC). Currently, the United States maintains informal relations through de facto embassies, with names such as the American Institute in Taiwan. Similarly, Taiwan’s de facto embassies abroad are known by names like the Taipei Economic and Cultural Representative Office. This was not always the case, with the U.S. maintaining official diplomatic ties with the ROC. The U.S. recognized it as the sole and legitimate government of “all of China” until 1979, when these relations were broken off as a condition for establishing official relations with the PRC.
Taiwan and U.S. Diplomatic Recognition
In recent years, Taiwan, an island located off the east coast of China, has not been diplomatically recognized by the United States. The U.S. has adopted this policy in order to maintain more advantageous diplomatic relations with China.
The Palestinian National Authority has its own diplomatic service. However, Palestinian representatives in most Western countries are not accorded diplomatic immunity. Their missions are referred to as Delegations General.
Other unrecognized regions that claim independence include Abkhazia, Transnistria, Somaliland, South Ossetia, Nagorno Karabakh, and the Turkish Republic of Northern Cyprus. Lacking the economic and political importance of Taiwan, these nations tend to be much more diplomatically isolated.
Informal Diplomacy
Informal diplomacy is also a key component of diplomacy. Sometimes called “track II diplomacy,” the U.S. has used informal diplomacy for decades to communicate between powers. Most diplomats work to recruit figures in other nations who might be able to give informal access to a country’s leadership. In some situations, like between the United States and China, a large amount of diplomacy is done through semi-formal channels using interlocutors such as academic members of think tanks. This occurs in situations where governments wish to express intentions or to suggest methods of resolving a diplomatic situation, but do not wish to express a formal position.
On some occasions, a former holder of an official position might continue to carry out informal diplomatic activity after retirement. At times, governments welcome such activity, for example as a means of establishing an initial contact with a hostile state or group without being formally committed. However, in other cases such informal diplomats seek to promote a political agenda that is different from the agenda of the government currently in power. Such informal diplomacy is practiced by former U.S. Presidents Jimmy Carter and (to a lesser extent) Bill Clinton.
Jimmy Carter and Informal Diplomacy
Former U.S. President Jimmy Carter visits a referendum polling center in Sudan 2011. Former President Carter is leading the Carter Center’s international observation of the referendum. Even after leaving office, political leaders can remain active in informal diplomacy.
Diplomacy as Soft Power
The concept of power in international relations can be described as the degree of resources, capabilities, and influence in international affairs. It is often divided up into the concepts of hard power and soft power. Hard power relates primarily to coercive power, such as the use of force. Soft power commonly covers economics, diplomacy, and cultural influence. There is no clear dividing line between the two forms of power. However, diplomacy is usually regarded as being important in the creation of “soft” power, while military power is important for “hard” power.
18.1.4: International Humanitarian Policies and Foreign Aid
Humanitarian policies are ostensibly intended to help other countries, and include human rights policies, aid, and interventions.
Learning Objective
Analyze the emergence and justification for humanitarian intervention in world politics
Key Points
- Humanitarian interventions use military forces from one or more state to halt violence or human rights violations occurring in another state.
- Humanitarian interventions are often controversial. Some argue that countries like the United States may only use humanitarian reasons to justify intervening, when the true motivations involve non-altruistic, political concerns.
- Economic foreign aid is assistance given by one country to another country. Foreign aid can consist of humanitarian aid (which is designed to help in an emergency), development aid (which is designed to improve society long-term), and food aid.
- Giving foreign aid is one of the core components of U.S. foreign policy and a large part of the foreign policy budget. The U.S. is the largest foreign aid donor in the world in terms of dollar amounts, but it does not give as much foreign aid as a percentage of its GDP as some other countries.
- The U.S. record on supporting human rights is mixed. Oftentimes, national interest or foreign policy concerns trump U.S. support for human rights; for example, when the U.S. supported nondemocratic regimes within the context of the Cold War.
Key Terms
- human rights
-
The basic rights and freedoms that all humans should be guaranteed, such as the right to life and liberty, freedom of thought and expression, and equality before the law.
- USAID
-
The United States Agency for International Development (USAID) is the United States federal government agency primarily responsible for administering civilian foreign aid.
- humanitarian intervention
-
The deployment of military personnel or force in pursuit of humanitarian goals.
Humanitarian Policies
In its most general form, humanitarianism is an ethic of kindness, benevolence, and sympathy extended universally and impartially to all human beings. International humanitarian policies, then, are policies presumably enacted to reduce suffering of human beings around the world. International humanitarian policies can take a number of different forms. For example, human rights and human rights laws seek to protect essential rights and fight for justice if these rights are violated. International humanitarian interventions are military or non-military interventions into another country to halt widespread violence or war. Foreign aid seeks to provide countries with resources (economic or otherwise) that they can use to ease the suffering of their people.
Humanitarian Intervention
Humanitarian intervention is a state’s use of “military force against another state when the chief publicly declared aim of that military action is ending human-rights violations being perpetrated by the state against which it is directed.”
The subject of humanitarian intervention has remained a compelling foreign policy issue, since it highlights the tension between the principle of state sovereignty – a defining pillar of the UN system and international law – and evolving international norms related to human rights and the use of force. Moreover, it has sparked debates over its legality, the ethics of using military force to respond to human rights violations, when it should occur, who should intervene, and whether it is effective.
Some argue that the United States uses humanitarian pretexts to pursue otherwise unacceptable goals. They argue that the United States has continued to act with its own interests in mind, with the only change being that humanitarianism has become a legitimizing ideology for the projection of U.S. power. In particular, some argue that the 1999 NATO intervention in Kosovo was conducted largely to boost NATO’s credibility.
NATO Intervention
In this humanitarian intervention, NATO forces intervened in Kosovo. Humanitarian interventions are frequently controversial, and the motives of the intervening force are often called into question.
Types of Economic Aid
There are three main types of economic foreign aid: humanitarian aid, development aid, and food aid. Humanitarian aid, or emergency aid, is rapid assistance given to people in immediate distress to relieve suffering during and after man-made emergencies (like wars) and natural disasters. Development aid is aid given by developed countries to support development in general. It is distinguished from humanitarian aid in that it aims to alleviate poverty in the long term rather than suffering in the short term. Food aid can benefit people suffering from a shortage of food, and it can be used to raise the standard of living to the point that food aid is no longer required. Conversely, badly managed food aid can create problems by disrupting local markets, depressing crop prices, and discouraging food production.
United States Food Aid
Aid workers from USAID (the United States Agency for International Development) distribute food to Kenya during a food crisis.
The United States and Foreign Aid
Foreign assistance is a core component of the State Department’s international affairs budget and is considered an essential instrument of U.S. foreign policy. Foreign aid has been given to a variety of recipients, including developing countries, countries of strategic importance to the United States, and countries recovering from war. The government channels about half of its economic assistance through a specialized agency, the United States Agency for International Development (USAID).
The 2010 United States federal budget spent $37.7 billion on economic aid (of which USAID received $14.1 billion) out of a total budget of $3.55 trillion. Aid from private sources within the United States in 2007 was probably somewhere in the $10 to $30 billion range. In absolute dollar terms, the United States is the largest international aid donor, but as a percentage of gross national income, its contribution to economic aid is only about 0.2%, proportionally much smaller than the contributions of countries such as Sweden (1.04%) and the United Kingdom (0.52%).
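To see concretely how the two measures diverge, consider a rough worked example (the national income figures here are illustrative assumptions, not data from this section). A donor’s aid share is its aid divided by its gross national income: with a U.S. GNI of roughly $15,000 billion, 0.2% corresponds to about $30 billion in aid, whereas a donor with a GNI of roughly $500 billion reaches 1.04% by giving only about $5.2 billion. The smaller donor gives far fewer absolute dollars while devoting roughly five times the share of its national income to aid.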
The United States and Human Rights Policies
The United States’ record on human rights is mixed. The United States has backed unpopular leaders (the Shah of Iran, 1941-1979, for example), mired itself in losing battles (consider the Vietnam War, 1950-1975), ignored ethnic cleansing (as was the case in Rwanda, 1994), and given foreign aid to corrupt regimes (as it did to Egypt, 1952-2011). Too often, the United States has had to support the lesser of two evils when it comes to relations with developing nations. And too often, the blowback from these awkward relationships has resulted in resentment from both United States citizens and the oppressed citizens of the developing nations (Guatemala, 1950’s, and Nicaragua, 1912-1933). However, the United States remains the largest contributor of foreign aid, and is currently backing what some refer to as the awakening of the Arab world (Libya, 2011), supporting “the people” even though the outcome is not yet clear.
18.1.5: Economic Prosperity
Economic prosperity is necessary to achieve foreign policy goals, and despite the 2008 recession, the U.S. economy is still powerful.
Learning Objective
Identify the sources of the United States’ economic prosperity
Key Points
- The United States is economically one of the most prosperous countries in the world, composing nearly one-quarter of the world’s GDP.
- Reasons for the United States’ economic prosperity include a large unified market, plentiful natural resources, a strong and vibrant political regime, immigration, technological and industrial innovation, and a spirit of entrepreneurship and capitalism.
- With such a large economy, the United States has been able to maintain and develop a substantial military force, which is necessary for the pursuit of most foreign policies and national interests.
- The 2008 recession, despite government attempts to halt or diminish its effects, has affected and will undoubtedly continue to affect the United States’ economic prosperity, and consequently, its ability to carry out its foreign policy goals.
Key Terms
- American Recovery and Reinvestment Act of 2009
-
An economic stimulus package enacted in 2009 to save and create jobs, provide relief for those industries most impacted by the recession, and to invest in infrastructure.
- human capital
-
The stock of competencies, knowledge, social and personality attributes, including creativity, embodied in the ability to perform labor so as to produce economic value. It is an aggregate economic view of the human being acting within economies.
- gross domestic product
-
Abbreviated GDP. A measure of the economic production of a particular territory in financial capital terms over a specific time period.
Sources of the United States’ Economic Prosperity
In the two hundred and thirty years since the independence of the United States, the country has grown into a huge, integrated, industrialized economy that makes up nearly a quarter of the world economy. The main factors that contributed to this economic prosperity were a large unified market, a supportive political-legal system, vast areas of highly productive farmland, vast natural resources (especially timber, coal, iron, and oil), and an entrepreneurial spirit and commitment to investing in material and human capital. The United States’ economy has maintained high wages, attracting immigrants by the millions from all over the world. Technological and industrial factors have also played a major role in the United States’ economic prosperity.
Advanced Technology
The United States has been able to grow into a world economic power in part due to the rapid advances of technology and industry.
Trans-Alaska Pipeline
One of the reasons for the United States’ economic prosperity is its abundance of natural resources, such as oil. This picture shows the trans-Alaska oil pipeline, which carries oil from northern Alaska to the rest of the United States.
Economic Prosperity and Foreign Policy
The United States is highly influential in the world, primarily because its foreign policy is backed by a $15 trillion economy, approximately a quarter of global gross domestic product (GDP). Economic prosperity is a central component of any state’s foreign policy. Without substantial economic means, a state cannot expect to have influence on the world stage. Similarly, economic prosperity is tied to the maintenance of a global military presence. Without a strong military, the pursuit of national interests becomes more difficult.
Continued Economic Prosperity?
In 2008, a perfect storm of economic disasters hit the United States and indeed the entire world. The most serious began with the collapse of housing bubbles in California and Florida, along with the collapse of housing prices and the construction industry. Some of the largest banks in the United States and Europe also collapsed; some went bankrupt, others were bailed out by the government. The United States government voted $700 billion in bailout money and committed trillions of dollars to shoring up the financial system, but the measures did not reverse the declines. Banks drastically tightened their lending policies, despite infusions of federal money. The stock market plunged 40%, wiping out tens of trillions of dollars in wealth; housing prices fell 20% nationwide, wiping out trillions more. By late 2008, distress was spreading beyond the financial and housing sectors. President Barack Obama signed the American Recovery and Reinvestment Act of 2009 in February 2009; the bill provided $787 billion in stimulus through a combination of spending and tax cuts.
Due to the close relationship between economic prosperity and foreign policy, the recession has impacted all elements of the United States’ foreign policy. Cuts to military and defense spending have been threatened, and this economic crisis will undoubtedly take a toll on the United States’ position as a global superpower. However, despite the economic recession, the sheer size of the United States’ economy ensures that it will remain an important actor in the world economy.
The United States’ Share of World GDP
The United States’ share of world GDP (nominal) peaked in 1985 at 32.74%. The second highest share was 32.24% in 2001. Note that it has been declining since then.
18.2: Who Makes U.S. Foreign Policy?
18.2.1: The President
The president is very influential in US foreign policy, and directs the nation’s war-waging, treaties, and diplomatic relations.
Learning Objective
Explain the President’s authority as Commander-in-Chief of the United States
Key Points
- Throughout the course of their time in office, most presidents gravitate towards foreign policy. It is often argued that the president has more autonomy in foreign policy as compared to domestic policy.
- The president is commander-in-chief of the armed forces, but only Congress has authority to declare war and provide funding. The War Powers Act attempted to limit the president’s war-waging powers.
- The president has the power to make treaties, with a two-thirds vote of the Senate, and has the power to make international agreements.
- The president is the chief diplomat as head of state. The president can also influence foreign policy by appointing US diplomats and foreign aid workers.
Key Terms
- congressional-executive agreements
-
An accord made by joint authority of the Congress and the President covering areas of International Law that are not within the ambit of treaties.
- War Powers Act
-
A federal law intended to check the President’s power to commit the United States to an armed conflict without the consent of Congress.
- treaty
-
A binding agreement under international law concluded by subjects of international law, namely states and international organizations.
Example
- Sometimes, presidents amass several different ways of authorizing the use of force. In his last press conference before the start of the invasion of Iraq in 2003, President Bush invoked the congressional authorization of force, UN resolutions, and the inherent power of the president to protect the United States derived from his oath of office.
The President’s Influence on US Foreign Policy
Presidents have more power and responsibility in foreign and defense policy than in domestic affairs. They are the commanders in chief of the armed forces; they decide how and when to wage war. As America’s chief diplomat, the president has the power to make treaties, which must be approved by the Senate. And as head of state, the president speaks for the nation to other world leaders and receives ambassadors.
Presidents almost always point to foreign policy as evidence of their term’s success. Domestic policy wonk Bill Clinton metamorphosed into a foreign policy enthusiast from 1993 to 2001. Even prior to 9/11, the notoriously untraveled George W. Bush underwent the same transformation. President Obama has been just as involved, if not more, in foreign policy than his predecessors. Congress—as long as it is consulted—is less inclined to challenge presidential initiatives in foreign policy than in domestic policy. The idea that the president has greater autonomy in foreign than domestic policy is known as the “Two Presidencies Thesis.”
The President and Waging War
The President is the Commander-in-Chief of the United States Armed Forces and as such has broad authority over the armed forces. However, only Congress has authority to declare war and decide the civilian and military budget.
War powers provide a key avenue for presidents to act in foreign policy. After the 9/11 attacks, President Bush’s Office of Legal Counsel argued that as commander in chief President Bush could do what was necessary to protect the American people. Since World War II, presidents have never asked Congress for (or received) a declaration of war. Instead, they have relied on open-ended congressional authorizations to use force, United Nations resolutions, North Atlantic Treaty Organization (NATO) actions, and orchestrated requests from small international organizations like the Organization of Eastern Caribbean States.
Congress can react against undeclared wars by cutting funds for military interventions, but such efforts are time consuming and are not in place until long after the initial incursion. Congress’s most concerted effort to restrict presidential war powers, the War Powers Act, passed despite President Nixon’s veto in 1973. It was established to limit presidential war powers, but it gave presidents the right to commit troops for sixty days, with the only conditions being that they consult with and report to Congress, conditions presidents often feel free to ignore. Since Vietnam, the act has done little to prevent presidents from unilaterally launching invasions.
President Obama did not seek congressional authorization before ordering the US military to join attacks on the Libyan air defenses and government forces in March 2011. After the bombing campaign started, Obama sent Congress a letter contending that as Commander-in-Chief he had constitutional authority for the attacks. White House lawyers used the distinction between “limited military operation” and “war” to justify this.
The President, Treaties, and Agreements
Article II, Section 2 of the United States Constitution grants the president power to make treaties with the “advice and consent” of two-thirds of the Senate. This is different from normal legislation, which requires approval by simple majorities in both the Senate and the House of Representatives.
President Wilson
Wilson had disagreements with Congress over how the peace treaty ending World War I should be handled. Presidents often have a wide range of influence on US foreign policy.
Throughout U.S. history, the President has also made international “agreements” through congressional-executive agreements (CEAs) that are ratified with only a majority from both houses of Congress, or sole-executive agreements made by the President alone. The Supreme Court of the United States has considered congressional-executive and sole-executive agreements to be valid, and they have been common throughout American history.
The President and Diplomacy
Another section of the Constitution that gives the president power over foreign affairs is Article II, Section 2, Clause 2 of the United States Constitution, known as the Appointments Clause. This clause empowers the President to appoint certain public officials with the “advice and consent” of the Senate. This clause also allows lower-level officials to be appointed without the advice and consent process. Thus, the President is responsible for the appointment of both upper- and lower-level diplomats and foreign-aid workers.
For example, the United States Secretary of State is the Foreign Minister of the United States and the primary conductor of state-to-state diplomacy. Both the Secretary of State and ambassadors are appointed by the President, with the advice and consent of the Senate.
Hillary Rodham Clinton
Hillary Clinton served as Secretary of State, the U.S. equivalent of a foreign minister. The President has the power to appoint diplomats (such as the Secretary of State), giving him or her substantial influence in US foreign policy.
As head of state, the President serves as the nation’s top diplomat. Presidents are often depicted as speaking for and symbolically embodying the nation: giving a State of the Union address, welcoming foreign leaders, traveling abroad, or representing the United States at an international conference. All of these duties serve an important function in US foreign policy.
18.2.2: The Cabinet
The secretary of state and secretary of defense play key roles in assisting the president with foreign policy.
Learning Objective
Compare and contrast the roles of the Secretary of State and Secretary of Defense in U.S. foreign policy
Key Points
- The secretary of state assists the president in foreign affairs and advises him on representatives and international relations.
- The secretary of defense, among other things, advises the president on military affairs and hot spots throughout the world.
- Since 9/11, many functions of the secretary of state have been shifted to other departments so the secretary can focus on pressing foreign matters.
Key Term
- commander-in-chief
-
A commander-in-chief is the person exercising supreme command authority over a nation’s military forces, or over a significant element of those forces.
The presidential cabinet has several secretaries who aid the president in foreign affairs. This includes the secretary of state and the secretary of defense.
The United States Secretary of State is the head of the United States Department of State, which is concerned with foreign affairs. The Secretary is a member of the cabinet and the highest-ranking cabinet secretary both in line of succession and order of precedence. The current Secretary of State is John Kerry, the 68th person to hold the post. The specific duties of the Secretary of State include:
- Organizes and supervises the entire United States Department of State and the United States Foreign Service.
- Advises the President on matters relating to U.S. foreign policy, including the appointment of diplomatic representatives to other nations, and on the acceptance or dismissal of representatives from other nations.
- Participates in high-level negotiations with other countries, either bilaterally or as part of an international conference or organization, or appoints representatives to do so. This includes the negotiation of international treaties and other agreements.
- Responsible for overall direction, coordination, and supervision of interdepartmental activities of the U.S. Government overseas.
- Provides information and services to U.S. citizens living or traveling abroad. Also provides credentials in the form of passports and visas.
- Supervises the United States immigration policy at home and abroad.
- Communicates issues relating to United States foreign policy to Congress and U.S. citizens.
Most of the domestic functions of the Department of State have been transferred to other agencies. Those that remain include storage and use of the Great Seal of the United States, performance of protocol functions for the White House, and the drafting of certain proclamations. The Secretary also negotiates with the individual states over the extradition of fugitives to foreign countries. Under federal law, the resignation of a President or of a Vice President is only valid if declared in writing in an instrument delivered to the office of the Secretary of State. Accordingly, the resignations of President Nixon and of Vice President Spiro Agnew, both domestic matters, were formalized in instruments delivered to the Secretary of State.
As the highest-ranking member of the cabinet, the Secretary of State is the third-highest official of the executive branch of the Federal Government of the United States, after the President and Vice President. The Secretary of State is fourth in line to succeed the Presidency, coming after the Vice President, the Speaker of the House of Representatives, and the President Pro Tempore of the Senate. Six Secretaries of State have gone on to be elected President.
As the head of the United States Foreign Service, the Secretary of State is responsible for managing the diplomatic service of the United States. The foreign service employs about 12,000 people domestically and internationally. It supports 265 United States Diplomatic missions around the world, including ambassadors to various nations.
The Secretary of Defense is the head and chief executive officer of the Department of Defense, an executive department of the United States government. The position corresponds to what is generally known as a defense minister in many other countries. The Secretary of Defense is appointed by the President with the advice and consent of the Senate, and is by custom a member of the cabinet and by law a member of the National Security Council.
Flag of the Secretary of Defense
The flag of the secretary of defense.
Secretary of Defense is a statutory office, and the general provision in administrative law provides that the Secretary of Defense has “authority, direction and control over the Department of Defense.” The Secretary of Defense is further designated by the same statute as “the principal assistant to the President in all matters relating to the Department of Defense.” To ensure civilian control of the military, an individual may not be appointed as Secretary of Defense within seven years after relief from active duty as a commissioned officer of a regular (i.e., non-reserve) component of an armed force.
The Secretary of Defense is in the chain of command and exercises command and control, subject only to the orders of the President, over all Department of Defense forces (Army, Navy, Air Force, and Marine Corps) for both operational and administrative purposes. Only the Secretary of Defense (or the President) can authorize the transfer of operational control of forces between the three Military Departments and between the combatant commands. Because the Office of Secretary of Defense is vested with legal powers which exceed those of any commissioned officer, and is second only to the Office of President in the military hierarchy, it has sometimes unofficially been referred to as a de facto “deputy commander-in-chief.” The Chairman of the Joint Chiefs of Staff is the principal military adviser to the Secretary of Defense and the President. While the Chairman may assist the Secretary and President in their command functions, the Chairman is not in the chain of command.
18.2.3: The Bureaucracy
Prominent bureaucratic organizations shaping U.S. foreign policy include the State Department, the Defense Department, and the CIA.
Learning Objective
Compare and contrast the roles of the State Department, the Defense Department, and the Central Intelligence Agency in shaping U.S. foreign policy
Key Points
- The State Department’s responsibilities include protecting and assisting U.S. citizens living or traveling abroad; assisting U.S. businesses in the international marketplace; and coordinating and providing support for international activities of other U.S. agencies.
- The Department of Defense is the executive department of the U.S. government concerned directly with national security and the U.S. armed forces.
- The Central Intelligence Agency (CIA) is an independent civilian intelligence agency of the U.S. government that provides national security intelligence assessments to senior U.S. policymakers.
Key Terms
- tactical
-
Of, or relating to, military operations that are smaller or more local than strategic ones.
- diplomatic immunity
-
A diplomat’s immunity to prosecution and/or litigation under local law.
There are several bureaucratic organizations that are actively involved in shaping U.S. foreign policy. Prominent among them are the State Department, the Defense Department, and the Central Intelligence Agency.
The United States Department of State (DoS), often referred to as the State Department, is the U.S. federal executive department responsible for the international relations of the United States, equivalent to the foreign ministries of other countries. The Department was created in 1789 and was the first executive department established. The Department is led by the Secretary of State, who is nominated by the President, confirmed by the Senate, and is a member of the Cabinet. As stated by the Department of State, its purpose includes:
U.S. State Department
The State Department is one bureaucratic agency that shapes U.S. foreign policy.
- Protecting and assisting U.S. citizens living or traveling abroad;
- Assisting U.S. businesses in the international marketplace;
- Coordinating and providing support for international activities of other U.S. agencies (local, state, or federal government), official visits overseas and at home, and other diplomatic efforts.
- Keeping the public informed about U.S. foreign policy and relations with other countries and providing feedback from the public to administration officials.
- Providing automobile registration for non-diplomatic staff vehicles and the vehicles of diplomats of foreign countries having diplomatic immunity in the United States
The Department of Defense (also known as the Defense Department, USDOD, DOD, DoD or the Pentagon) is the executive department of the U.S. government charged with coordinating and supervising all agencies and functions of the government concerned directly with national security and the U.S. armed forces. The Department – headed by the Secretary of Defense – has three subordinate military departments: the Department of the Army, the Department of the Navy, and the Department of the Air Force. The Military Departments are each headed by their own Secretary, appointed by the President, with the advice and consent of the Senate.
The Central Intelligence Agency (CIA) is an independent civilian intelligence agency of the U.S. government. It is an executive agency that reports directly to the Director of National Intelligence with responsibility for providing national security intelligence assessments to senior U.S. policymakers. Intelligence-gathering, a core function of the agency, is performed by non-military commissioned civilian intelligence agents, many of whom are trained to avoid tactical situations. The CIA also oversees and sometimes engages in tactical and covert activities at the request of the U.S. President. Often, when such field operations are organized, the U.S. military or other warfare tacticians carry these tactical operations out on behalf of the agency while the CIA oversees them.
18.2.4: Congress
Two constitutional clauses, the Foreign Commerce Clause and the War Powers Clause, give Congress foreign policy powers.
Learning Objective
Evaluate the War Powers Clause and how the United States’ process of declaring and entering into war has changed over time, identifying the general role that Congress plays in making and coordinating foreign policy
Key Points
- The War Powers Clause states that only Congress can declare war. This power has been invoked five times in American history.
- Sometimes, this clause directly conflicts with what the president wants to do. As a result, the president may undertake a “police action” in a hostile territory instead of declaring war.
- Trade is also an important policy-making tool. Congress has the power to regulate foreign trade.
Key Term
- police action
-
Police action in military/security studies and international relations is a euphemism for a military action undertaken without a formal declaration of war.
Congress is given several powers to engage in foreign policy and to check the president’s actions in foreign policy, especially in the event of war. Perhaps the most important of these powers are found in the War Powers Clause and the Foreign Commerce Clause of the Constitution; the latter provides Congress with the power to regulate commerce with foreign nations. Five wars have been declared under the Constitution: the War of 1812, the Mexican-American War, the Spanish-American War, World War I, and World War II.
In the instance of the Mexican-American War, President James Polk explained that Texas was about to become a part of the United States of America and that Mexico had threatened to invade Texas. The President gathered troops near Corpus Christi, and U.S. troops moved into an area where the new international boundary was disputed. Mexican troops moved into the same area and the two forces clashed. The President claimed that Mexico had crossed the boundary into the United States. Some individuals in Congress, including Abraham Lincoln, wondered whether this was true.
However, U.S. presidents have not often sought formal declarations of war. Instead, they maintain that they have the constitutional authority, as commander in chief, to use the military for “police actions.” According to historian Thomas Woods, “Ever since the Korean War, Article II, Section 2 of the Constitution — which refers to the president as the ‘Commander in Chief of the Army and Navy of the United States’ — has been interpreted to mean that the president may act with an essentially free hand in foreign affairs, or at the very least that he may send men into battle without consulting Congress.” Some have argued that this interpretation could extend to offensive actions, although historically police actions fell mostly under the purview of protecting embassies, U.S. citizens overseas, and shipping, as in the Quasi-War.
The Korean War was the first modern example of the U.S. going to war without a formal declaration, and this has been repeated in every armed conflict since that time. However, beginning with the Vietnam War, Congress has given other forms of authorization for the use of force. Some debate continues about whether these actions are appropriate. The tendency of the executive branch to originate such a push, market it, and even propagandize or engage in related activities to generate support is also highly debated.
Johnson and His Advisors
Johnson being shown a map of an area in Vietnam. The police action quickly spiraled into a war-like situation, although it was a war never declared by Congress.
Therefore, in light of the speculation concerning the Gulf of Tonkin incident and the possible abuse of the authorization that followed, Congress passed the War Powers Resolution in 1973. It requires the president to obtain either a declaration of war or a resolution authorizing the use of force from Congress within 60 days of initiating hostilities, with a full disclosure of facts in the process. The constitutionality of the resolution has never been settled, and some presidents have criticized it as an unconstitutional encroachment upon the powers of the presidency.
Some legal scholars maintain that offensive, non-police military actions taken without a formal Congressional declaration of war, while a quorum of Congress can still be convened, are unconstitutional. They believe this because no constitutional amendment has changed the original intent of the Constitution so as to make the War Powers Resolution legally binding. However, the Supreme Court has never ruled directly on the matter, and to date no counter-resolutions have come to a vote. This separation-of-powers stalemate creates a “functional,” if not unanimous, governmental opinion and outcome on the matter.
The Commerce Clause in the Constitution also gives Congress the power to regulate trade between nations. The Commerce Clause is an enumerated power in the United States Constitution; it states that the United States Congress shall have power “to regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes.” These powers are sometimes discussed as separate powers, but the foreign commerce power is especially important because trade is considered an important form of economic diplomacy between the United States and foreign nations.
18.2.5: Interest Groups
Foreign policy interest groups are domestic advocacy organizations which seek to influence the government’s foreign policy.
Learning Objective
Illustrate how interest groups influence U.S. foreign policy
Key Points
- In order to build and maintain their influence, foreign policy interest groups use tactics such as framing the issue and shaping the terms of debate, offering information and analysis to elected representatives, and monitoring the policy process and reacting to it.
- Foreign policy interest groups often overlap with so-called “ethnic” interest groups, as they try to influence the foreign policy of the United States for the benefit of the foreign “ethnic kin” or homeland with whom respective ethnic groups identify.
- Though ethnic interest groups have existed for many decades, they have become a particularly influential phenomenon since the end of the Cold War.
Key Term
- advocacy
-
The act of arguing in favor of, or supporting something.
Foreign policy interest groups, which are domestic advocacy organizations seeking to directly or indirectly influence the government’s foreign policy, are a key player in U.S. foreign policy.
According to U.S. scholar John Dietrich, these interest groups have mobilized to represent a diverse array of business, labor, ethnic, human rights, environmental, and other organizations. In order to build and maintain their influence, they use tactics such as framing the issue and shaping the terms of debate; offering information and analysis to elected representatives (who may not have the time to research the issue themselves); and monitoring the policy process and reacting to it through disseminating supplementary information, letter-writing campaigns, calling for additional hearings or legislation, and supporting or opposing candidates during elections.
Foreign policy interest groups often overlap with so-called “ethnic” interest groups, as they try to influence the foreign policy and, to a lesser extent, the domestic policy of the United States for the benefit of the foreign “ethnic kin” or homeland with whom respective ethnic groups identify. Though ethnic interest groups have existed for many decades, they have become a particularly influential phenomenon since the end of the Cold War.
According to political scientist Thomas Ambrosio, this is a result of growing acceptance that ethnic identity groups have the right to mobilize politically for the purpose of influencing U.S. policies at home and abroad. Prominent examples of these organizations include the American Israel Public Affairs Committee, the Cuban American National Foundation, the Armenian Assembly of America, the U.S.-India Political Action Committee, and the National Iranian American Council.
The American Israel Public Affairs Committee
The American Israel Public Affairs Committee is a prominent foreign policy interest group.
18.2.6: The Media
The media changed how citizens perceive and approach U.S. foreign policy in the 20th century.
Learning Objective
Explain the media’s role in setting the agenda for foreign policy debate
Key Points
- The media is most influential when it covers foreign policy that directly affects Americans, especially affairs with which Americans are not acquainted.
- During the Vietnam War, many people watched the horrors of war on television, which helped sink the war’s popularity.
- After these broadcasts, the military found itself involved in politics and having to do damage control to ease the public’s and the politicians’ concerns.
Key Terms
- media bias
-
A bias in journalistic reporting, in programming selection, etc., in mass communications media.
- media
-
Means and institutions for publishing and broadcasting information.
Agenda-Setting in Foreign Policy
One way in which the media can set the agenda is in areas where very few Americans have direct knowledge of the issues, and this applies to foreign policy. When American military personnel are involved, the media reports on events because the personnel are connected to the American public. The media is also likely to have an interest in reporting issues that have substantial effects on American workers, such as major trade agreements with Mexico during the NAFTA negotiations in the 1990’s.
David McKay, author of American Politics and Society, lists as one of the three main distortions of information by the media, “Placing high priority on American news to the detriment of foreign news. And when the U.S. is engaged in military action abroad, this ‘foreign news’ crowds out other foreign news.”
News Media and the Vietnam War
The media’s most famous involvement in foreign affairs was its coverage of the Vietnam War. From 40 press corpsmen in 1964, the number in South Vietnam had grown to 282 by January 1966, and by August that number had jumped to 419. Of the 282 at the beginning of the year, only 110 were Americans; 67 were South Vietnamese, 26 Japanese, 24 British, 13 Korean, 11 French, and 7 German. The media captured many combat events on television, which prompted many American citizens to become concerned about foreign policy.
Soldier in Vietnam
Images like this one helped contribute to Americans’ concern over foreign policy in Vietnam.
The U.S. Mission and MACV (Military Assistance Command, Vietnam) also installed an “information czar”: the U.S. Mission’s Minister-Counselor for Public Affairs, Barry Zorthian, who advised General William Westmoreland on public affairs matters. He had theoretical responsibility under the ambassador for the development of all information policy. He maintained liaison between the embassy, MACV, and the press; publicized information to refute erroneous and misleading news stories; and sought to assist the Saigon correspondents in covering the side of the war most favorable to the policies of the U.S. government. Zorthian possessed both experience with the media and a great deal of patience and tact, and he maintained reasonably good relations with the press corps. Media correspondents were invited to attend nightly MACV briefings covering the day’s events, which became known as the “Five O’Clock Follies.” Most correspondents considered these briefings a waste of time. The Saigon bureau chiefs were also often invited to closed sessions at which a briefing officer, the CIA station chief, or an official from the embassy would present background or off-the-record information on upcoming military operations or Vietnamese political events.
According to Daniel Hallin, the dramatic structure of the uncensored “living room war” as reported during 1965–1967 remained simple and traditional: “the forces of good were locked in battle once again with the forces of evil. What began to change in 1967 was the conviction that the forces of good would inevitably prevail.” During late 1967 the MACV had also begun to disregard the decision it had made at the Honolulu Conference that the military should leave the justification of the war to elected officials in Washington. The military found itself drawn progressively into politics, to the point that it had become as involved in “selling” the war to the American public as the political appointees it served. This change would have far-reaching detrimental effects.
Media Bias
A self-described liberal media watchdog group, Fairness and Accuracy in Reporting (FAIR), in consultation with the Survey and Evaluation Research Laboratory at Virginia Commonwealth University, sponsored an academic study in which journalists were asked a range of questions about how they did their work and about how they viewed the quality of media coverage in the broad area of politics and economic policy. “They were asked for their opinions and views about a range of recent policy issues and debates. Finally, they were asked for demographic and identifying information, including their political orientation.” The journalists’ answers were then compared to the same or similar questions posed to the public in Gallup and Pew Trust polls. The study concluded that a majority of journalists, although relatively liberal on social policies, were significantly to the right of the public on economic, labor, health care, and foreign policy issues.
18.3: The History of American Foreign Policy
18.3.1: Isolationism
Isolationism or non-interventionism was a tradition in America’s foreign policy for its first two centuries.
Learning Objective
Explain the historical reasons for American isolationism in foreign affairs
Key Points
- President George Washington established non-interventionism in his farewell address, and this policy was continued by Thomas Jefferson.
- The United States policy of non-intervention was maintained throughout most of the nineteenth century. The first significant foreign intervention by the United States was the Spanish-American War, which saw it occupy and control the Philippines.
- In the wake of the First World War, the non-interventionist tendencies of U.S. foreign policy were in full force. First, the United States Congress rejected President Woodrow Wilson’s most cherished condition of the Treaty of Versailles, the League of Nations.
- The near-total humiliation of Germany in the wake of World War I laid the groundwork for a pride-hungry German people to embrace Adolf Hitler’s rise to power. Non-intervention eventually contributed to Hitler’s rise to power in the 1930s.
Key Terms
- isolationism
-
The policy or doctrine of isolating one’s country from the affairs of other nations by declining to enter into alliances, foreign economic commitments, foreign trade, international agreements, etc.
- brainchild
-
A creation, original idea, or innovation; usually used to indicate its originator.
- non-interventionism
-
Non-interventionism, the diplomatic policy whereby a nation seeks to avoid alliances with other nations in order to avoid being drawn into wars not related to direct territorial self-defense, has had a long history in the United States.
Background
For the first 200 years of United States history, the national policy was isolationism and non-interventionism. George Washington’s farewell address is often cited as laying the foundation for a tradition of American non-interventionism: “The great rule of conduct for us, in regard to foreign nations, is in extending our commercial relations, to have with them as little political connection as possible. Europe has a set of primary interests, which to us have none, or a very remote relation. Hence she must be engaged in frequent controversies the causes of which are essentially foreign to our concerns. Hence, therefore, it must be unwise in us to implicate ourselves, by artificial ties, in the ordinary vicissitudes of her politics, or the ordinary combinations and collisions of her friendships or enmities.”
No Entangling Alliances in the Nineteenth Century
President Thomas Jefferson extended Washington’s ideas in his March 4, 1801 inaugural address: “peace, commerce, and honest friendship with all nations, entangling alliances with none.” Jefferson’s phrase “entangling alliances” is, incidentally, sometimes incorrectly attributed to Washington.
Non-interventionism continued throughout the nineteenth century. After Tsar Alexander II put down the 1863 January Uprising in Poland, French Emperor Napoleon III asked the United States to “join in a protest to the Tsar.” Secretary of State William H. Seward declined, “defending ‘our policy of non-intervention — straight, absolute, and peculiar as it may seem to other nations,’” and insisted that “the American people must be content to recommend the cause of human progress by the wisdom with which they should exercise the powers of self-government, forbearing at all times, and in every way, from foreign alliances, intervention, and interference.”
The United States’ policy of non-intervention was maintained throughout most of the nineteenth century. The first significant foreign intervention by the United States was the Spanish-American War, which saw the United States occupy and control the Philippines.
Twentieth Century Non-intervention
Theodore Roosevelt’s administration is credited with inciting the Panamanian Revolt against Colombia in order to secure construction rights for the Panama Canal, begun in 1904. President Woodrow Wilson, after winning re-election with the slogan “He kept us out of war,” was nonetheless compelled to declare war on Germany and so involve the nation in World War I after the Zimmermann Telegram was discovered. Yet non-interventionist sentiment remained; the U.S. Congress refused to endorse the Treaty of Versailles or the League of Nations.
Non-Interventionism between the World Wars
In the wake of the First World War, the non-interventionist tendencies of U.S. foreign policy were in full force. First, the United States Congress rejected President Woodrow Wilson’s most cherished condition of the Treaty of Versailles, the League of Nations. Many Americans felt that they did not need the rest of the world and that they were fine making decisions concerning peace on their own. Even though “anti-League” was the policy of the nation, private citizens and lower-level diplomats either supported or observed the League of Nations. This quasi-isolationism shows that the United States was interested in foreign affairs but was afraid that by pledging full support for the League, it would lose the ability to act on foreign policy as it pleased.
Wake Up America!
At the dawn of WWI, posters like this asked America to abandon its isolationist policies.
Although the United States was unwilling to commit to the League of Nations, it was willing to engage in foreign affairs on its own terms. In August 1928, 15 nations signed the Kellogg-Briand Pact, the brainchild of American Secretary of State Frank Kellogg and French Foreign Minister Aristide Briand. The pact, which was said to have outlawed war and to demonstrate the United States’ commitment to international peace, had its semantic flaws. For example, it did not hold the United States to the conditions of any existing treaties, it still allowed European nations the right to self-defense, and it stated that if one nation broke the pact, it would be up to the other signatories to enforce it. The Kellogg-Briand Pact was more a sign of good intentions on the part of the United States than a legitimate step towards the sustenance of world peace.
Non-interventionism took a new turn after the Crash of 1929. Amid the economic panic, the United States began to focus solely on fixing its economy within its borders and ignored the outside world. As the world’s democratic powers were busy fixing their economies, the fascist powers of Europe and Asia moved their armies into position to start World War II. Victory in the First World War had brought the spoils of war: a draconian pummeling of Germany into submission via the Treaty of Versailles. This near-total humiliation of Germany, as the treaty placed sole blame for the war on the nation, laid the groundwork for a pride-hungry German people to embrace Adolf Hitler’s rise to power.
18.3.2: World War I and the League of Nations
The League of Nations was created as an international organization after WWI.
Learning Objective
Explain the historical rise and fall of the League of Nations after World War I
Key Points
- The League of Nations was suggested in Wilson’s 14 points.
- The League of Nations’ functions included arbitration and peace-keeping. However, it did not have an army to enforce power.
- The League of Nations was the precursor to the United Nations.
Key Terms
- arbitration
-
A process through which two or more parties use an arbitrator or arbiter (an impartial third party) in order to resolve a dispute.
- World Trade Organization
-
An international organization designed by its founders to supervise and liberalize international trade.
- disarmament
-
The reduction or the abolition of the military forces and armaments of a nation, and of its capability to wage war
- intergovernmental
-
Of, pertaining to, or involving two or more governments
An Early Attempt at International Organization
The League of Nations was an intergovernmental organization founded as a result of the Paris Peace Conference that ended the First World War. Its form and ideals were drawn to some extent from US President Woodrow Wilson’s 14 Points. The League was the first permanent international organization whose principal mission was to maintain world peace. Its primary goals, as stated in its Covenant, included preventing wars through collective security and disarmament, and settling international disputes through negotiation and arbitration. Other issues addressed in this and related treaties included labor conditions, just treatment of native inhabitants, human and drug trafficking, the arms trade, global health, prisoners of war, and protection of minorities in Europe. At the height of its development, from 28 September 1934 to 23 February 1935, it had 58 member nations.
Map of League of Nations
The countries on the map represent those that have been involved with the League of Nations.
A Lack of Leverage
The diplomatic philosophy behind the League represented a fundamental shift from that of the preceding hundred years. The League lacked its own armed force and depended on the Great Powers to enforce its resolutions, keep to its economic sanctions, or provide an army when needed. However, the Great Powers were often reluctant to do so, and sanctions could hurt League members, so they were reluctant to comply with them.
Failure of the League
After a number of notable successes and some early failures in the 1920s, the League ultimately proved incapable of preventing aggression by the Axis powers. In the 1930s, Germany withdrew from the League, as did Japan, Italy, Spain, and others. The onset of World War II showed that the League had failed its primary purpose, which was to prevent any future world war. The United Nations (UN) replaced it after the end of the war and inherited a number of agencies and organizations founded by the League.
18.3.3: World War II
Although isolationists kept the U.S. out of WWII for years, the interventionists eventually had their way and the U.S. declared war in 1941.
Learning Objective
Compare and contrast the arguments made by interventionists and non-interventionists with respect to American involvement in World War II
Key Points
- Fascism was becoming a growing fear in the United States, and after Germany invaded Poland, many wondered if the US should intervene.
- Many famous public figures called for isolationism, including professors and even Charles Lindbergh.
- The Lend-Lease program was a way to ease into interventionism, though the US stayed out of the war militarily.
Key Term
- Neutrality Act
-
The Neutrality Acts were passed by the United States Congress in the 1930’s and sought to ensure that the US would not become entangled again in foreign conflicts.
As Europe moved closer and closer to war in the late 1930s, the United States Congress was doing everything it could to prevent it. Between 1936 and 1937, much to the dismay of the pro-British President Roosevelt, Congress passed the Neutrality Acts. In the final Neutrality Act, Americans could not sail on ships flying the flag of a belligerent nation or trade arms with warring nations, potential causes for U.S. entry into war.
On September 1, 1939, Germany invaded Poland, and Britain and France subsequently declared war on Germany, marking the start of World War II. In an address to the American people two days later, President Roosevelt assured the nation that he would do all he could to keep them out of war. However, he also said: “When peace has been broken anywhere, the peace of all countries everywhere is in danger.”
Germany Invades Poland
Germany invading Poland caused the United States to reconsider intervening.
The war in Europe split the American people into two distinct groups: non-interventionists and interventionists. The basic principle of the interventionist argument was fear of German invasion. By the summer of 1940, France had fallen to the Germans, and Britain was the only democratic stronghold between Germany and the United States. Interventionists were afraid of a world after this war, a world in which they would have to coexist with the fascist powers of Europe. In a 1940 speech, Roosevelt argued, “Some, indeed, still hold to the now somewhat obvious delusion that we … can safely permit the United States to become a lone island … in a world dominated by the philosophy of force.” A national survey found that in the summer of 1940, 67% of Americans believed that a German-Italian victory would endanger the United States, that if such an event occurred 88% supported “arm[ing] to the teeth at any expense to be prepared for any trouble,” and that 71% favored “the immediate adoption of compulsory military training for all young men.”
Ultimately, the rift between the ideals of the United States and the goals of the fascist powers is what was at the core of the interventionist argument. “How could we sit back as spectators of a war against ourselves?” writer Archibald MacLeish questioned. Interventionists argued that the United States could not coexist with the fascist powers not because of economic pressures or deficiencies in its armed forces, but because it was the goal of fascist leaders to destroy the American ideology of democracy. In an address to the American people on December 29, 1940, President Roosevelt said, “…the Axis not merely admits but proclaims that there can be no ultimate peace between their philosophy of government and our philosophy of government.”
However, there were still many who held on to the age-old tenets of non-interventionism. Although a minority, they were well organized and had a powerful presence in Congress. Non-interventionists rooted a significant portion of their arguments in historical precedent, citing events such as Washington’s farewell address and the failure of World War I. By 1941, the actions of the Roosevelt administration were making it clearer and clearer that the United States was on its way to war. This policy shift, driven by the President, came in two phases. The first came in 1939 with the passage of the Fourth Neutrality Act, which permitted the United States to trade arms with belligerent nations, as long as these nations came to America to retrieve the arms and paid for them in cash. This policy was quickly dubbed “Cash and Carry.” The second phase was the Lend-Lease Act of early 1941. This act allowed the President “to lend, lease, sell, or barter arms, ammunition, food, or any ‘defense article’ or any ‘defense information’ to ‘the government of any country whose defense the President deems vital to the defense of the United States.’” He used these two programs to side economically with the British and the French in their fight against the Nazis.
On December 7, 1941, Japan attacked the American fleet at Pearl Harbor, Hawaii. The attack was intended as a preventive action to keep the U.S. Pacific Fleet from interfering with military actions the Empire of Japan was planning in Southeast Asia against overseas territories of the United Kingdom, the Netherlands, and the United States. The following day, the United States declared war on Japan, and domestic support for non-interventionism disappeared. Clandestine support of Britain was replaced by active alliance. Germany and Italy declared war on the United States on December 11, and the U.S. reciprocated the same day.
During the final stages of World War II in 1945, the United States conducted atomic bombings of the cities of Hiroshima and Nagasaki in Japan. These two events represent the only use of nuclear weapons in war to date.
18.3.4: Interventionism
After WWII, the US’s foreign policy was characterized by interventionism, which meant the US was directly involved in other states’ affairs.
Learning Objective
Define interventionism and its relation to American foreign policy
Key Points
- In the period between World War I and World War II, the US’s foreign policy was characterized by isolationism, which meant it preferred to be isolated from the affairs of other countries.
- The ideological goals of the fascist powers in Europe during World War II and the growing aggression of Germany led many Americans to fear for the security of their nation, and thus call for an end to the US policy of isolationism.
- In the early 1940s, US policies such as the Cash and Carry Program and the Lend-Lease Act provided assistance to the Allied Powers in their fight against Germany. This growing involvement by the US marked a move away from isolationist tendencies towards interventionism.
- After World War II, the US became fully interventionist. US interventionism was motivated primarily by the goal of containing the influence of communism, and essentially meant the US was now a leader in global security, economic, and social issues.
Key Terms
- interventionism
-
The political practice of intervening in a sovereign state’s affairs.
- isolationism
-
The policy or doctrine of isolating one’s country from the affairs of other nations by declining to enter into alliances, foreign economic commitments, foreign trade, international agreements, etc.
Abandoning Isolationism
As the world was quickly drawn into WWII, the United States’ isolationist policies gave way to interventionism. In part, this foreign policy shift sprang from Euro-American relations and public fear.
On September 1, 1939, Germany invaded Poland; Britain and France subsequently declared war on Germany, marking the start of World War II. In an address to the American people two days later, President Roosevelt assured the nation that he would do all he could to keep them out of war. However, even though he was intent on neutrality as the official policy of the United States, he still warned of the dangers of the conflict. He also cautioned the American people not to let their wish to avoid war at all costs supersede the security of the nation.
The war in Europe split the American people into two distinct groups: non-interventionists and interventionists. The two sides argued over America’s involvement in this Second World War. The basic principle of the interventionist argument was fear of German invasion. By the summer of 1940, France had fallen to the Germans, and Britain was the only democratic stronghold between Germany and the United States. Interventionists feared that if Britain fell, the security of the United States would immediately be threatened. A national survey found that in the summer of 1940, 67% of Americans believed that a German-Italian victory would endanger the United States, that if such an event occurred 88% supported “arm[ing] to the teeth at any expense to be prepared for any trouble,” and that 71% favored “the immediate adoption of compulsory military training for all young men.”
Ultimately, the ideological rift between the ideals of the United States and the goals of the fascist powers formed the core of the interventionist argument.
Moving Towards War
As 1940 became 1941, the actions of the Roosevelt administration made it increasingly clear that the United States was on a course to war. This policy shift, driven by the President, came in two phases. The first came in 1939 with the passage of the Fourth Neutrality Act, which permitted the United States to trade arms with belligerent nations, as long as these nations came to America to retrieve the arms and paid for them in cash. This policy was quickly dubbed “Cash and Carry.” The second phase was the Lend-Lease Act of early 1941. This act allowed the President “to lend, lease, sell, or barter arms, ammunition, food, or any ‘defense article’ or any ‘defense information’ to ‘the government of any country whose defense the President deems vital to the defense of the United States.’” Roosevelt used these two programs to side economically with the British and the French in their fight against the Nazis.
President Roosevelt signing the Lend-Lease Act
The Lend-Lease Act allowed the United States to edge away from isolationism while still remaining militarily neutral.
Policies of Interventionism
After WWII, the United States adopted a policy of interventionism in order to contain communist influence abroad. Such forms of interventionism included giving aid to European nations to rebuild, taking an active role in the UN, NATO, and police actions around the world, and involving the CIA in several coups in Latin America and the Middle East. The US was not merely abandoning isolationism; it was actively intervening in and leading world affairs.
Marshall Plan and US Interventionism
After WWII, the US’s foreign policy was characterized by interventionism. For example, immediately after the end of the war, the US supplied Europe with monetary aid in hopes of combating the influence of communism in a vulnerable, war-weakened Europe. This label was posted on Marshall Aid packages.
18.3.5: The Cold War and Containment
Truman’s Containment policy was the first major policy during the Cold War and used numerous strategies to prevent the spread of communism abroad.
Learning Objective
Discuss the doctrine of Containment and its role during the Cold War
Key Points
- Containment was proposed by diplomat George Kennan, who urged the United States to stifle communist influence in Eastern Europe and Asia.
- One of the ways to accomplish this was by establishing NATO so the Western European nations had a defense against communist influence.
- After Vietnam and détente, President Jimmy Carter focused less on containment and more on fighting the Cold War by promoting human rights in hot spot countries.
Key Terms
- deterrence
-
Action taken by states or alliances of nations against equally powerful alliances to prevent hostile action.
- rollback
-
The strategy of forcing change in the major policies of a state, usually by replacing its ruling regime; in the Cold War context, actively pushing back communism rather than merely containing it.
The Cold War and Containment
Containment was a United States policy using numerous strategies to prevent the spread of communism abroad. A component of the Cold War, this policy was a response to a series of moves by the Soviet Union to enlarge its communist sphere of influence in Eastern Europe, China, Korea, and Vietnam. It represented a middle-ground position between détente and rollback.
The basis of the doctrine was articulated in a 1946 cable by United States diplomat George F. Kennan (below). As a description of United States foreign policy, the word originated in a report Kennan submitted to the U.S. defense secretary in 1947—a report that was later used in a magazine article.
George F. Kennan
George F. Kennan was the diplomat behind the doctrine of containment.
The word containment is associated most strongly with the policies of United States President Harry Truman (1945–53), including the establishment of the North Atlantic Treaty Organization (NATO), a mutual defense pact. Although President Dwight Eisenhower (1953–61) toyed with the rival doctrine of rollback, he refused to intervene in the Hungarian Uprising of 1956. President Lyndon Johnson (1963–69) cited containment as a justification for his policies in Vietnam. President Richard Nixon (1969–74), working with his top advisor Henry Kissinger, rejected containment in favor of friendly relations with the Soviet Union and China; this détente, or relaxation of tensions, involved expanded trade and cultural contacts.
President Jimmy Carter (1977–81) emphasized human rights rather than anti-communism, but dropped détente and returned to containment when the Soviets invaded Afghanistan in 1979. President Ronald Reagan (1981–89), denouncing the Soviet state as an “evil empire,” escalated the Cold War and promoted rollback in Nicaragua and Afghanistan. Central programs begun under containment, including NATO and nuclear deterrence, remained in effect even after the end of the war.
18.3.6: Détente and Human Rights
Détente was a period in U.S./Soviet relations in which tension between the two superpowers was eased.
Learning Objective
Explain the significance of the Helsinki Accords for the history of human rights in the 20th century and define the doctrine of Détente and its use by the United States during the Cold War
Key Points
- Détente was an effort by the superpowers to ease tensions in the Cold War.
- The Nixon and Brezhnev administrations led the way with détente, talking about world issues and signing treaties such as SALT I and the Anti-Ballistic Missile Treaty.
- The Carter administration ushered in a human rights component to détente, criticizing the USSR’s poor record of human rights. The USSR countered by criticizing the US for its own human rights record, and for interfering in USSR domestic affairs.
- During the Carter administration, the Conference on Security and Cooperation in Europe created the Helsinki Accords, which addressed human rights in the USSR.
- Détente ended with the Soviet invasion of Afghanistan in 1979, the US boycott of the 1980 Moscow Olympics, and Reagan’s election in 1980.
Key Terms
- Warsaw Pact
-
A pact (long-term alliance treaty) signed on May 14, 1955 by the Soviet Union and its Communist military allies in Europe.
- Détente
-
French for “relaxation,” détente is the easing of tense relations, particularly in a political situation. The term is often used in reference to the general easing of geo-political tensions between the Soviet Union and the US, which began in 1971 and ended in 1980.
- Helsinki Accords
-
A declaration signed in an attempt to improve relations between the Communist bloc and the West. Developed in Europe, the Helsinki Accords called for human rights improvements in the USSR.
Détente
Détente, French for “relaxation,” is a term in international relations that refers to the easing of strained relations, especially in a political situation. The term is often used in reference to the general easing of relations between the Soviet Union and the United States beginning in 1971, a thaw roughly in the middle of the Cold War.
Treaties Toward Peace
The most important treaties of détente were developed when the Nixon Administration came into office in 1969. The Political Consultative Committee of the Warsaw Pact sent an offer to the West, urging it to hold a summit on “security and cooperation in Europe.” The West agreed, and talks began toward actual limits on the nuclear capabilities of the two superpowers. This ultimately led to the signing of the SALT I treaty in 1972. This treaty limited each power’s nuclear arsenals, though it was quickly rendered out-of-date by the development of a new type of warhead. In the same year that SALT I was signed, the Biological Weapons Convention and the Anti-Ballistic Missile Treaty were also concluded.
A follow-up treaty, SALT II, was discussed but was never ratified by the United States. There is debate among historians as to how successful the détente period was in achieving peace. The two superpowers agreed to install a direct hotline between Washington, DC and Moscow, the so-called “red telephone,” enabling both countries to interact quickly with each other in a time of urgency. The SALT II pact of the late 1970s built on the work of the SALT I talks, ensuring further reductions in arms by the Soviets and by the US.
Nixon and Brezhnev
President Nixon and Soviet leader Brezhnev presided over the high period of détente, signing treaties such as SALT I and the Anti-Ballistic Missile Treaty.
The Helsinki Accords and Human Rights in the USSR
The Helsinki Accords, in which the Soviets made commitments on human rights and fundamental freedoms, have been seen as a major concession by the Soviets to ensure peace. The Helsinki Accords were developed by the Conference on Security and Cooperation in Europe (CSCE), a wide-ranging series of agreements on economic, political, and human rights issues. The CSCE was initiated by the USSR and involved 35 states throughout Europe.
Among the issues discussed after the conference, one of the most prominent was human rights violations in the Soviet Union. The Soviet Constitution directly violated the United Nations’ Universal Declaration of Human Rights, and this issue became a prominent point of dissonance between the United States and the Soviet Union.
Because the Carter administration had been supporting human rights groups inside the Soviet Union, Leonid Brezhnev accused the administration of interference in other countries’ internal affairs. This prompted intense discussion of whether other nations may interfere when basic human rights, such as freedom of speech and religion, are being violated. The basic differences between the philosophies of a democracy and a single-party state did not allow for reconciliation of this issue. Furthermore, the Soviet Union defended its internal policies on human rights by attacking American support of countries like South Africa and Chile, which were known to violate many of the same human rights.
Détente ended after the Soviet intervention in Afghanistan, which led to the American boycott of the 1980 Olympics in Moscow. Ronald Reagan’s election in 1980, based on an anti-détente campaign, marked the close of détente and a return to Cold War tension.
1980 Moscow Olympics
After the Soviet invasion of Afghanistan, many countries boycotted the 1980 Olympic Games, held in Moscow. This photograph depicts Olympic runners in the 1980 games in front of Saint Basil’s Cathedral in Moscow.
18.3.7: Foreign Policy After the Cold War
The post-Cold War era was marked by optimism, and the balance of power shifted decisively toward the United States.
Learning Objective
Explain the origins and elements of the New World Order after the end of the Cold War
Key Points
- The post-Cold War era saw the United States become the sole leader of world affairs.
- The Cold War entrenched the military-industrial complex, a legacy that, while weaker than during the Cold War, continues to exist.
- The new world order, as envisioned by Bush and Gorbachev, was marked by optimism about great-power cooperation and democratization.
Key Terms
- military-industrial complex
-
The armed forces of a nation together with the industries that supply their weapons and materiel.
- new world order
-
The term new world order has been used to refer to any new period of history evidencing a dramatic change in world political thought and the balance of power. Despite various interpretations of this term, it is primarily associated with the ideological notion of global governance only in the sense of new collective efforts to identify, understand, or address worldwide problems that go beyond the capacity of individual nation-states to solve. The most widely discussed application of the phrase in recent times came at the end of the Cold War.
- War on Terror
-
The war on terror is a term commonly applied to an international military campaign begun by the United States and the United Kingdom with support from other countries after the September 11, 2001 terrorist attacks.
Post-Cold War Foreign Policy
Introduction
With the breakup of the Soviet Union into separate nations and the re-emergence of the nation of Russia, the world of pro-U.S. and pro-Soviet alliances broke down. Different challenges presented themselves, such as climate change and the threat of nuclear terrorism. Regional power brokers, such as Iraq under Saddam Hussein, challenged the peace with a surprise attack on the small nation of Kuwait in 1990.
President George H.W. Bush organized a coalition of allied and Middle Eastern powers that successfully pushed back the invading forces but stopped short of invading Iraq and capturing Hussein. As a result, the dictator was free to cause mischief for another twelve years. After the Gulf War, many scholars, such as Zbigniew Brzezinski, claimed that the lack of a new strategic vision for U.S. foreign policy resulted in many missed opportunities. During the 1990s, the United States scaled back its foreign policy budget as well as its Cold War defense budget, which amounted to 6.5% of GDP, while focusing on domestic economic prosperity under President Clinton, who succeeded in achieving a budget surplus for 1999 and 2000.
The aftermath of the Cold War continues to influence world affairs. After the dissolution of the Soviet Union, the post–Cold War world was widely considered unipolar, with the United States the sole remaining superpower. The Cold War defined the political role of the United States in the post–World War II world: by 1989 the U.S. held military alliances with 50 countries and had 526,000 troops posted abroad in dozens of countries, with 326,000 in Europe (two-thirds of them in West Germany) and about 130,000 in Asia (mainly Japan and South Korea). The Cold War also marked the apex of peacetime military-industrial complexes, especially in the United States, and of large-scale military funding of science. These complexes, though their origins may be found as early as the 19th century, grew considerably during the Cold War. Military-industrial complexes have a great impact on their countries and help shape their societies, policies, and foreign relations.
New World Order
A concept that defined world power after the Cold War was the new world order. The most widely discussed application of the phrase in recent times came at the end of the Cold War. Presidents Mikhail Gorbachev and George H.W. Bush used the term to try to define the nature of the post–Cold War era and the spirit of great-power cooperation they hoped might materialize. Historians, they suggested, would look back and say this was no ordinary time but a defining moment: an unprecedented period of global change, and a time when one chapter ended and another began.
Bush and Gorbachev
Bush and Gorbachev helped shape theories of international relations after the Cold War.
War on Terrorism
The end of the Cold War was soon followed by a new defining struggle. After the September 11, 2001 terrorist attacks, the United States, with the support of the United Kingdom and other allies, launched an international military campaign known as the War on Terror, which included military operations in Afghanistan and Iraq.
When no weapons of mass destruction were found after the military conquest of Iraq, there was worldwide skepticism that the war had been fought to prevent terrorism, and the continuing war in Iraq has had serious negative public relations consequences for the image of the United States.
Multipolar World
The big change during these years was a transition from a bipolar world to a multipolar world. While the United States remains a strong power economically and militarily, rising nations such as China, India, Brazil, and Russia, as well as a united Europe, have challenged its dominance. Foreign policy analysts such as Nina Hachigian suggest that these emerging powers share common concerns: free trade, economic growth, prevention of terrorism, and efforts to stymie nuclear proliferation. If they can avoid misunderstandings and dangerous rivalries, the coming decades can be peaceful and productive.
18.3.8: The War on Terrorism
The War on Terror refers to an international military campaign begun by the U.S. and the U.K. after the 9/11 terrorist attacks.
Learning Objective
Identify the main elements of U.S. foreign policy during the War on Terror
Key Points
- The campaign’s official purpose was to eliminate al-Qaeda and other militant organizations, and the two main military operations associated with the War on Terror were Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq.
- The Bush administration and the Western media used the term to denote a global military, political, legal and ideological struggle targeting both organizations designated as terrorist and regimes accused of supporting them.
- On 20 September 2001, in the wake of the 11 September attacks, George W. Bush delivered an ultimatum to the Taliban government of Afghanistan to turn over Osama bin Laden and al-Qaeda leaders operating in the country or face attack.
- In October 2002, a large bipartisan majority in the United States Congress authorized the president to use force if necessary to disarm Iraq in order to “prosecute the war on terrorism.” The Iraq War began in March 2003 with an air campaign, which was immediately followed by a U.S. ground invasion.
Key Terms
- Islamist
-
A person who espouses Islamic fundamentalist beliefs.
- War on Terror
-
The war on terror is a term commonly applied to an international military campaign begun by the United States and the United Kingdom with support from other countries after the September 11, 2001 terrorist attacks.
- terrorism
-
The deliberate commission of an act of violence to create an emotional response through the suffering of the victims in the furtherance of a political or social agenda.
Introduction
The War on Terror is a term commonly applied to an international military campaign begun by the United States and United Kingdom with support from other countries after the September 11, 2001 terrorist attacks. The campaign’s official purpose was to eliminate al-Qaeda and other militant organizations. The two main military operations associated with the War on Terror were Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq.
9/11 Attacks on the World Trade Center
The north face of Two World Trade Center (south tower) immediately after being struck by United Airlines Flight 175.
The phrase “War on Terror” was first used by U.S. President George W. Bush on 20 September 2001. The Bush administration and the Western media have since used the term to denote a global military, political, legal, and ideological struggle targeting organizations designated as terrorist and regimes accused of supporting them. It was typically used with a particular focus on Al-Qaeda and other militant Islamists. Although the term is not officially used by the administration of U.S. President Barack Obama, it is still commonly used by politicians, in the media, and officially in some parts of the government, such as in the name of the United States’ Global War on Terrorism Service Medal.
Precursor to 9/11 Attacks
The origins of al-Qaeda as a network inspiring terrorism around the world and training operatives can be traced to the Soviet war in Afghanistan (December 1979–February 1989). The United States supported the Islamist mujahadeen guerillas against the military forces of the Soviet Union and the Democratic Republic of Afghanistan. In May 1996 the group World Islamic Front for Jihad Against Jews and Crusaders (WIFJAJC), sponsored by Osama bin Laden and later reformed as al-Qaeda, started forming a large base of operations in Afghanistan, where the Islamist extremist regime of the Taliban had seized power that same year. In February 1998, Osama bin Laden, as the head of al-Qaeda, signed a fatwā declaring war on the West and Israel, and later in May of that same year al-Qaeda released a video declaring war on the U.S. and the West.
U.S. Military Responses (Afghanistan)
On 20 September 2001, in the wake of the 11 September attacks, George W. Bush delivered an ultimatum to the Taliban government of Afghanistan to turn over Osama bin Laden and al-Qaeda leaders operating in the country or face attack. The Taliban demanded evidence of bin Laden’s link to the 11 September attacks and, if such evidence warranted a trial, they offered to handle such a trial in an Islamic Court. The US refused to provide any evidence.
Subsequently, in October 2001, US forces invaded Afghanistan to oust the Taliban regime. On 7 October 2001, the official invasion began with British and U.S. forces conducting airstrike campaigns over enemy targets. Kabul, the capital city of Afghanistan, fell by mid-November. The remaining al-Qaeda and Taliban remnants fell back to the rugged mountains of eastern Afghanistan, mainly Tora Bora. In December, Coalition forces (the U.S. and its allies) fought within that region. It is believed that Osama bin Laden escaped into Pakistan during the battle.
U.S. Military Responses (Iraq)
Iraq had been listed as a State Sponsor of Terrorism by the U.S. since 1990, when Saddam Hussein invaded Kuwait. Iraq was also on the list from 1979 to 1982; it had been removed so that the U.S. could provide material support to Iraq in its war with Iran. Hussein’s regime proved a continuing problem for the U.N. and Iraq’s neighbors due to its use of chemical weapons against Iranians and Kurds.
In October 2002, a large bipartisan majority in the United States Congress authorized the president to use force if necessary to disarm Iraq in order to “prosecute the war on terrorism.” After failing to overcome opposition from France, Russia, and China against a UNSC resolution that would sanction the use of force against Iraq, and before the U.N. weapons inspectors had completed their inspections, the U.S. assembled a “Coalition of the Willing” composed of nations that pledged support for its policy of regime change in Iraq.
The Iraq War began in March 2003 with an air campaign, which was immediately followed by a U.S.-led ground invasion. The Bush administration stated that the invasion was the “serious consequences” spoken of in UNSC Resolution 1441. The Bush administration also stated that the Iraq War was part of the War on Terror, a claim that was later questioned.
Baghdad, Iraq’s capital city, fell in April 2003 and Saddam Hussein’s government quickly dissolved. On 1 May 2003, Bush announced that major combat operations in Iraq had ended. However, an insurgency arose against the U.S.-led coalition and the newly developing Iraqi military and post-Saddam government. The insurgency, which included al-Qaeda affiliated groups, led to far more coalition casualties than the invasion. Iraq’s former president, Saddam Hussein, was captured by U.S. forces in December 2003. He was executed in 2006.
18.4: Challenges of Foreign Policy
18.4.1: Trade
U.S. foreign policy is characterized by a commitment to free trade and open borders to promote and strengthen national interests.
Learning Objective
Discuss the historical institutional arrangements that created the current framework of international trade and criticisms of it
Key Points
- While international trade has been present throughout much of history, its economic, social, and political importance has increased in recent centuries, mainly because of industrialization, advanced transportation, globalization, the growth of multinational corporations, and outsourcing.
- During World War II, 44 countries signed the Bretton Woods Agreement, a system of monetary management that established the rules for commercial and financial relations among the world’s major industrial states.
- This Agreement resulted in the creation of organizations such as the International Monetary Fund (IMF) and the International Bank for Reconstruction and Development (today part of the World Bank Group).
- The World Trade Organization (WTO) is an organization that was formed in 1995 to supervise and liberalize international trade.
- International trade greatly contributes to the process of globalization, the processes of international integration arising from the interchange of world views, products, ideas, and other aspects of culture.
- The anti-globalization movement has grown in recent decades in reaction to the unequal power dynamics of globalization and international trade, and the policies that are used to exploit developing countries for the profit of the developed Western world.
Key Term
- globalization
-
The process of international integration arising from the interchange of world views, products, ideas, and other aspects of culture; advances in transportation and telecommunications infrastructure, including the rise of the Internet, are major factors that precipitate interdependence of economic and cultural activities.
International Trade
International trade is the exchange of goods and services across national borders. In most countries, it represents a significant part of the Gross Domestic Product (GDP). While international trade has been practiced throughout much of history, its economic, social, and political importance has become increasingly relevant in recent times, mainly due to industrialization, advanced transportation, globalization, the growth of multinational corporations, and outsourcing.
The Bretton Woods Agreement
During World War II, 44 countries signed the Bretton Woods Agreement. This system of monetary management established the rules for commercial and financial relations among the world’s major industrial states, and was the first example of a fully negotiated monetary order intended to govern monetary relations among independent nation-states. The agreement was intended to prevent national trade barriers that could create global economic depressions. The political basis for the Bretton Woods Agreement was in the confluence of two key conditions: the shared experiences of the Great Depression, and the concentration of power in a small number of states which was further enhanced by the exclusion of a number of important nations due to ongoing war.
The agreement set up rules and institutions to regulate the international political economy, resulting in the creation of organizations such as the International Monetary Fund (IMF) and the International Bank for Reconstruction and Development (today part of the World Bank Group). These organizations became operational in 1946 after enough countries ratified the agreement. Currently, the Doha round of World Trade Organization negotiations aims to lower barriers to trade around the world, with a focus on making trade more favorable for so-called “developing” countries, though talks have faced a divide between “developed” countries and the major “developing” countries.
The World Trade Organization (WTO)
The World Trade Organization (WTO) is an organization that was formed in 1995 to supervise and liberalize international trade. The organization deals with the regulation of trade between participating countries; it provides a framework for negotiating and formalizing trade agreements, as well as a dispute resolution process aimed at enforcing participants’ adherence to WTO agreements, which are signed by representatives of member governments and ratified by their parliaments.
WTO Logo
The WTO, succeeding GATT in 1995, is an organization that seeks to liberalize international trade.
Trade, Globalization, and the Anti-Globalization Movement
International trade greatly contributes to the process of globalization, the process of international integration arising from the interchange of world views, products, ideas, and other aspects of culture. Advances in transportation and telecommunications infrastructure, including the rise of the telegraph and its successor, the Internet, are major factors in globalization, generating further interdependence of economic and cultural activities. In 2000, the International Monetary Fund (IMF) identified four basic aspects of globalization: trade and transactions, capital and investment movements, migration and movement of people, and the dissemination of knowledge.
Globalization has been criticized in recent decades for the unequal power dynamics of international trade, and the policies that are used to exploit developing countries for the profit of the developed Western world. The anti-globalization movement is critical of the globalization of corporate capitalism for these reasons. Many anti-globalization activists, however, call for forms of global integration that provide better democratic representation, advancement of human rights, fair trade and sustainable development and therefore feel the term “anti-globalization” is misleading.
In general, the anti-globalization movement is especially opposed to the various abuses perpetrated through globalization and to the international institutions that are believed to promote neoliberalism without regard to ethical standards. Common targets include the World Bank (WB), the International Monetary Fund (IMF), the Organisation for Economic Co-operation and Development (OECD), and the World Trade Organization (WTO), as well as free trade treaties like the North American Free Trade Agreement (NAFTA), the Free Trade Area of the Americas (FTAA), the Trans-Pacific Trade Agreement (TPPA), the Multilateral Agreement on Investment (MAI), and the General Agreement on Trade in Services (GATS). In light of the economic gap between rich and poor countries, movement adherents claim that “free trade” without regulations to protect the environment, the health and well-being of workers, and the economies of “developing” countries contributes only to strengthening the power of industrialized nations (often termed the “global North” in opposition to the developing world’s “global South”).
The anti-globalization movement is considered a relatively new social movement, as the issues it confronts are contemporary ones. However, the grievances that fuel the movement can be traced back through a 500-year history of resistance against European colonialism and U.S. imperialism, during which the continent of Africa and many other areas of the world were colonized and stripped of their resources for the profit of the Western world.
One of the movement’s most famous actions was the 1999 “Battle of Seattle,” in which grassroots activists organized large and creative protests against the World Trade Organization’s Third Ministerial Meeting in order to draw attention to the issue of globalization. It remains one of the most significant and memorable social movement protests of the past 20 years.
Contemporary Issues in International Trade
Issues currently associated with international trade are: intellectual property rights, in that creations of the mind for which exclusive rights are recognized in law are considered essential for economic growth; smuggling, especially as it relates to human and drug trafficking; outsourcing, the contracting out of business processes to another country, generally one with lower wages; fair trade, which promotes the use of labor, environmental, and social standards for the production of commodities; and trade sanctions, in which punitive economic measures are taken against a defaulting country.
18.4.2: Immigration and Border Security
Immigration and border security are two important issues for United States policy.
Learning Objective
Identify the relationship between immigration issues and national security
Key Points
- Illegal immigrants are those non-citizens who enter the United States without government permission and in violation of United States nationality law or stay beyond the termination date of a visa, also in violation of the law.
- Illegal immigrants, who generally come for economic opportunities or to escape political oppression, continue to outpace the number of legal immigrants – a trend that has held steady since the 1990s.
- The challenge of illegal immigration is closely linked with that of border security, the concept of which is related to the persistent threat of terrorism.
Key Term
- visa
-
A permit to enter and leave a country, normally issued by the authorities of the country to be visited.
Immigration and border security are two important issues for U.S. policy.
Though immigration to the United States has been a major source of economic growth and cultural change throughout American history, the recent discourse surrounding immigration deals mostly with illegal immigration. Illegal immigrants are those non-citizens who enter the United States without government permission and are in violation of United States nationality law or stay beyond the termination date of a visa, also in violation of the law.
The illegal immigrant population in the United States in 2008 was estimated by the Center for Immigration Studies to be about 11 million people, down from 12.5 million people in 2007. Other estimates range from 7 to 20 million. According to a Pew Hispanic Center report, in 2005, 56% of illegal immigrants were from Mexico; 22% were from other Latin American countries, primarily from Central America; 13% were from Asia; 6% were from Europe and Canada; and 3% were from Africa and the rest of the world.
Immigration to the U.S.
Rate of immigration to the United States relative to sending countries’ population size, 2001–2005
Illegal immigrants, who generally come for economic opportunities or to escape political oppression, continue to outpace the number of legal immigrants – a trend that has held steady since the 1990s. While the majority of illegal immigrants continue to concentrate in places with existing large Hispanic communities, an increasing number of them are settling throughout the rest of the country.
The challenge of illegal immigration is closely linked with that of border security, a concept related to the persistent threat of terrorism. Border security includes the protection of land borders, ports, and airports. After the September 11, 2001 terrorist attacks, many questioned whether the largely unchecked 3,017-mile Canadian border, the 1,933-mile Mexican border, and the many unsecured ports posed a security threat.
18.4.3: Terrorism
The threat of terrorism is one of the greatest challenges facing the United States and the international community.
Learning Objective
Discuss the War on Terror campaign against religious fundamentalist groups and individuals who engage in terrorism
Key Points
- Terrorism generally refers to those violent acts that are intended to create fear (terror). The acts are perpetrated for a religious, political, and/or ideological goal. They deliberately target or disregard the safety of civilians in order to gain publicity for a group, cause, or individual.
- In current international affairs, the threat of Islamic terrorism, a form of religious terrorism committed by Muslims for the purpose of achieving varying political and/or religious ends, has been particularly prevalent.
- The September 11, 2001 terrorist attacks, committed by members of Al-Qaeda, left nearly 3,000 people dead and would mark the beginning of the War on Terror.
Key Term
- revolutionary
-
Of or pertaining to a revolution in government; tending to, or promoting, revolution; as, revolutionary war; revolutionary measures; revolutionary agitators.
The threat of terrorism is one of the greatest challenges facing the United States and the international community. Common definitions of terrorism refer to those violent acts that are intended to create fear (terror). The acts are perpetrated for a religious, political, and/or ideological goal. They deliberately target or disregard the safety of civilians in order to gain publicity for a group, cause, or individual. Terrorism has been practiced by a broad array of political organizations, including right-wing and left-wing political parties, nationalistic groups, religious groups, revolutionary groups, and ruling governments.
Islamic Terrorism
In current international affairs, the threat of Islamic terrorism, a form of religious terrorism committed by Muslims for the purpose of achieving varying political and/or religious ends, has been particularly prevalent. Islamic terrorism has taken place in the Middle East, Africa, Europe, South Asia, Southeast Asia, and the United States since the 1970s. Islamic terrorist organizations have been known to engage in tactics including suicide attacks, hijackings, kidnappings, and recruiting new members through the Internet. Well-known Islamic terrorist organizations include Al-Qaeda, Hamas, Hezbollah, and Islamic Jihad.
The 9/11 Attacks and the War on Terror
The September 11, 2001 terrorist attacks, in which members of Al-Qaeda under the leadership of Osama bin Laden hijacked and crashed four passenger jets in New York, Virginia, and Pennsylvania, left nearly 3,000 people dead. These attacks marked the beginning of the “War on Terror,” an international military campaign led by the United States and the United Kingdom (with the support of NATO and non-NATO allies) against Al-Qaeda and other associated militant organizations with the stated goal of eliminating them. The War on Terror would include the military campaigns in Afghanistan and Iraq.
September 11, 2001 attacks
The attack on the World Trade Center in New York City on September 11, 2001.
18.4.4: Nuclear Weapons
The proliferation of nuclear weapons, explosive devices which derive force from nuclear reactions, is a key challenge of foreign policy.
Learning Objective
Identify the history of nuclear weapons and international efforts to regulate them
Key Points
- Only two nuclear weapons have been used in the course of warfare, both by the United States against Japan near the end of World War II.
- In 1957, the International Atomic Energy Agency (IAEA) was established under the mandate of the United Nations to encourage development of peaceful applications for nuclear technology, provide international safeguards against its misuse, and facilitate the application of safety measures in its use.
- Currently, the prospect of nuclear technology falling into the hands of rogue states and terrorist organizations is considered a major threat to international security.
Key Terms
- fission
-
The process of splitting the nucleus of an atom into smaller particles; nuclear fission
- fusion
-
A nuclear reaction in which nuclei combine to form more massive nuclei with the concomitant release of energy
The proliferation of nuclear weapons, explosive devices which derive their destructive force from nuclear reactions (either fission or a combination of fission and fusion), is an important challenge of foreign policy.
Only a few nations possess such weapons or are suspected of seeking them. The only countries known to have detonated nuclear weapons—and that acknowledge possessing such weapons—are (chronologically by date of first test) the United States, the Soviet Union (succeeded as a nuclear power by Russia), the United Kingdom, France, China, India, Pakistan, and North Korea. In addition, Israel is widely believed to possess nuclear weapons, though it does not acknowledge having them. One state, South Africa, fabricated nuclear weapons in the past but has since disassembled its arsenal and submitted to international safeguards.
Only two nuclear weapons have been used in the course of warfare, both by the United States near the end of World War II. On August 6, 1945, a uranium gun-type fission bomb was detonated over the Japanese city of Hiroshima. Three days later, on August 9, a plutonium implosion-type fission bomb was exploded over Nagasaki, Japan. These two bombings resulted in the deaths of approximately 200,000 Japanese people—mostly civilians—from acute injuries sustained from the explosions.
Since the bombings of Hiroshima and Nagasaki, nuclear weapons have been detonated on over two thousand occasions for testing purposes and demonstrations. Because of the immense military power they can confer, the political control of nuclear weapons has been a key issue for as long as they have existed; in most countries the use of nuclear force can only be authorized by the head of government or head of state. In 1957, the International Atomic Energy Agency (IAEA) was established under the mandate of the United Nations to encourage development of peaceful applications for nuclear technology, provide international safeguards against its misuse, and facilitate the application of safety measures in its use.
By the 1960s, steps were being taken to limit both the proliferation of nuclear weapons to other countries and the environmental effects of nuclear testing. The Partial Test Ban Treaty (1963) restricted all nuclear testing to underground facilities, to prevent contamination from nuclear fallout, while the Nuclear Non-Proliferation Treaty (1968) attempted to place restrictions on the types of activities signatories could participate in, with the goal of allowing the transference of non-military nuclear technology to member countries without fear of proliferation. Currently, the prospect of nuclear technology falling into the hands of rogue states and terrorist organizations is considered a major threat to international security.
18.4.5: Iraq
Particularly since the beginning of Operation Iraqi Freedom in 2003, U.S. relations with Iraq have been central to its foreign policy.
Learning Objective
Discuss the history of U.S.-Iraq relations and the U.S. Occupation of Iraq
Key Points
- After the September 11, 2001 attacks, the governments of the United States and the United Kingdom claimed that Iraq’s alleged possession of weapons of mass destruction (WMD) posed a threat to their security and that of their coalitional and regional allies.
- On March 20, 2003, a U.S.-led coalition conducted a military invasion of Iraq, referred to as Operation Iraqi Freedom, without declaring war.
- The last U.S. troops left Iraqi territory on December 18, 2011 after Barack Obama announced an eighteen month withdrawal window for combat forces.
Key Terms
- Shia
-
The second largest denomination of Islam; “followers,” “faction,” or “party” of Muhammad’s son-in-law Ali, whom the Shia believe to be Muhammad’s successor.
- sectarian
-
Of, or relating to a sect.
- Sunni
-
The branch of Islam that believes that the Qur’an is the final authority, and that their leaders have no special sacred wisdom.
Since the United States recognized an independent Iraq in 1930, relations with that nation have been an important aspect of U.S. foreign policy.
After the September 11, 2001 attacks, the governments of the United States and the United Kingdom claimed that Iraq’s alleged possession of weapons of mass destruction (WMD) posed a threat to their security and that of their coalition and regional allies. Some U.S. officials also accused Iraqi President Saddam Hussein of harboring and supporting al-Qaeda, but no evidence of a meaningful connection was ever found. Other stated justifications for the invasion included Iraq’s financial support for the families of Palestinian suicide bombers, Iraqi government human rights abuses, and the goal of spreading democracy to the country.
On March 20, 2003, a U.S.-led coalition conducted a military invasion of Iraq without declaring war. The invasion, referred to as Operation Iraqi Freedom, led to an occupation and the eventual capture of President Hussein, who was later tried in an Iraqi court of law and executed by the new Iraqi government. Violence against coalition forces and among various sectarian groups soon led to the Iraqi insurgency, strife between many Sunni and Shia Iraqi groups, and the emergence of a new faction of al-Qaeda in Iraq.
Operation Iraqi Freedom
A U.S. Marine tank in Baghdad during the Iraq War.
The war also placed heavy demands on the National Guard and Reserves. The “One weekend a month, two weeks a year” slogan lost most of its relevance during the Iraq War, when nearly 28% of total US forces in Iraq and Afghanistan at the end of 2007 consisted of mobilized personnel of the National Guard and other Reserve components. In July 2012, the Army’s top general stated his intention to increase the annual drill requirement from two weeks per year to up to seven weeks per year.
As public opinion favoring troop withdrawals increased and as Iraqi forces began to take responsibility for security, member nations of the Coalition withdrew their forces. In late 2008, the U.S. and Iraqi governments approved a Status of Forces Agreement, effective through January 1, 2012. The Iraqi Parliament also ratified a Strategic Framework Agreement with the U.S., aimed at ensuring cooperation in constitutional rights, threat deterrence, education, energy development, and in other areas.
In late February 2009, newly elected U.S. President Barack Obama announced an eighteen-month withdrawal window for combat forces, with approximately 50,000 troops remaining in the country “to advise and train Iraqi security forces and to provide intelligence and surveillance.” On October 21, 2011, President Obama announced that all U.S. troops and trainers would leave Iraq by the end of the year, bringing the U.S. mission in Iraq to an end. The last U.S. troops left Iraqi territory on December 18, 2011.
18.4.6: Afghanistan
The relationship between the United States and Afghanistan has become an integral aspect of U.S. foreign policy.
Learning Objective
Discuss the nature of the U.S. foreign policy toward Afghanistan since 2001
Key Points
- Following the attacks of September 11, 2001, thought to be orchestrated by Osama bin Laden, who was residing in Afghanistan under asylum at the time, the United States launched and led Operation Enduring Freedom.
- The United States has taken a leading role in the overall reconstruction of Afghanistan by investing billions of dollars in national roads, government and educational institutions, and the Afghan military and national police force.
- U.S. forces are scheduled to begin leaving between mid-2011 and the end of 2014. Concerns remain regarding the Taliban insurgency, the role of Pakistan in training those insurgents, and the risk of Afghanistan degenerating into a failed state after the withdrawal.
Key Terms
- insurgency
-
rebellion; revolt; the state of being insurgent
- Taliban
-
A Sunni Islamic student movement in Afghanistan, organized in 1994 by the radical mullah Mohammad Omar.
The relationship between the United States and Afghanistan has become an integral aspect of U.S. foreign policy.
Following the attacks of September 11, 2001, thought to be orchestrated by Osama bin Laden, who was residing in Afghanistan under asylum at the time, the United States launched and led Operation Enduring Freedom. This major military operation was aimed at removing the Taliban government from power and capturing Al-Qaeda members, including Osama bin Laden himself. Following the overthrow of the Taliban, the U.S. supported the new government of Afghan President Hamid Karzai by maintaining a high level of troops in the area, as well as by combating Taliban insurgency. Afghanistan and the United States resumed diplomatic ties in late 2001.
The United States has taken a leading role in the overall reconstruction of Afghanistan by investing billions of dollars in national roads, government and educational institutions, and the Afghan military and national police force. In 2005, the United States and Afghanistan signed a strategic partnership agreement, committing both nations to a long-term relationship.
The U.S. Armed Forces have been gradually increasing their troop levels in Afghanistan since 2002, reaching about 100,000 in 2010. These forces are scheduled to begin leaving between mid-2011 and the end of 2014. In 2012, Presidents Obama and Karzai signed a strategic partnership agreement between their respective countries, designating Afghanistan as a major non-NATO ally. Concerns remain regarding the Taliban insurgency, the role of Pakistan in training those insurgents, the drug trade, the effectiveness of Afghan security forces, and the risk of Afghanistan degenerating into a failed state after the withdrawal.
Operation Enduring Freedom
An American soldier on patrol in Afghanistan
18.4.7: China
Three issues of particular importance in Chinese-American relations are economic trade, the contested status of Taiwan, and human rights.
Learning Objective
Examine the social, political and economic issues that are significant for U.S.-China relations
Key Points
- China, which became the world’s second largest economy in 2010, may overtake the United States and become the world’s largest economy by 2030, if current trends persist.
- American support for the island nation of Taiwan, which China claims as one of its provinces and has threatened to take over by force, is another source of tension.
- The Chinese government’s stance toward human rights, which has been criticized by international humanitarian groups, is another source of controversy.
Key Terms
- joint venture
-
A cooperative partnership between two individuals or businesses in which both profits and risks are shared
- one child policy
-
A policy of population control in China that officially limits married, urban couples to having only one child
The political, economic, and military rise of China, with its enormous population of more than 1.3 billion people, is a key foreign policy challenge for the United States. Within current U.S.-China relations, three issues of particular importance stand out: economic trade, the status of Taiwan, and human rights.
U.S.-China relations
President Obama and Chinese Premier Wen Jiabao
Since China and the United States resumed trade relations in 1972 and 1973, U.S. companies have entered into numerous agreements with Chinese counterparts that have established more than 20,000 equity joint ventures, contractual joint ventures, and wholly foreign-owned enterprises. The American trade deficit with China exceeded $350 billion in 2006 and is the U.S.’s largest bilateral trade deficit.
China, which became the world’s second largest economy in 2010, may overtake the United States and become the world’s largest economy by 2030, if current trends continue (although this growth might be limited by domestic challenges facing China, including income inequality and pollution). Among foreign nations, China holds the largest amount of U.S. public debt and has been a vocal critic of U.S. deficits and fiscal policy. In turn, the United States has criticized China’s undervaluation of its currency, the Renminbi.
American support for the island of Taiwan, which China claims as one of its provinces and has threatened to take over by force, is another source of tension. The U.S. maintains sympathy for an independent Taiwan due to its liberal, pluralistic democracy, and gives Taiwan extensive political and military support. This support has resulted in threats of retaliation from China.
The Chinese government’s policy toward human rights is another source of controversy. International human rights organizations have identified a number of potential violations in China, including the use of capital punishment, the application of the one child policy, the denial of independence to Tibet, the absence of a free press, the absence of an independent judiciary with due process, the absence of labor rights, and the absence of religious freedom.
18.4.8: Israel and Palestine
The conflict between the State of Israel and the Palestinians is an important issue affecting American and international policy.
Learning Objective
Explain the importance of the Israeli-Palestinian conflict for American foreign policy in the Middle East
Key Points
- Many currently consider the central foreign policy issue in the conflict to be the creation of an independent Palestinian state next to the existing Jewish state of Israel.
- The Oslo Accords of 1993 allowed the Palestinian National Authority to have autonomy over large parts of the West Bank and the Gaza Strip, although terrorism from Palestinian extremist groups and the assassination of Israeli Prime Minister Yitzhak Rabin in 1995 would derail further negotiations.
- Current issues for negotiations include: mutual recognition, borders, terrorism and security, water rights, control of Jerusalem, Israeli settlements, Palestinian incitement, and finding a solution for Palestinian refugees from Israel’s War of Independence in 1948.
Key Term
- Palestinian
-
An inhabitant of the West Bank and Gaza Strip, legally governed by the Palestinian National Authority.
The conflict between the State of Israel and the Palestinians is an important issue affecting American and international policy. While the United States has a longstanding policy of political, military, and economic support for Israel, it often must balance such support with its relations with Arab nations and its commitment to a Palestinian state.
The conflict dates back to early Arab opposition to Jewish national sovereignty and numerous wars fought between Israel and neighboring Arab states. However, many currently consider the central foreign policy issue to be the creation of an independent Palestinian state next to the existing Jewish state of Israel. Most of the West Bank and the Gaza Strip, territories taken by Israel during the Six-Day War in 1967, are considered acceptable locations for a future Palestinian state.
Numerous efforts have been made to achieve peace through a negotiated settlement between the Israeli government and its Palestinian counterparts. Most prominently, the Oslo Accords of 1993 allowed the Palestinian National Authority to have autonomy over large parts of the West Bank and the Gaza Strip, although a campaign of terrorism from Palestinian extremist groups and the assassination of Israeli Prime Minister Yitzhak Rabin in 1995 would derail further negotiations.
The Oslo Accords
The signing of the Oslo Accords in 1993
Current issues for negotiations include: mutual recognition, borders, terrorism and security, water rights, control of Jerusalem, Israeli settlements, Palestinian incitement, and finding a solution for Palestinian refugees from Israel’s War of Independence in 1948. Another challenge is the lack of unity among Palestinians, reflected in the political struggle between Fatah, which controls the Palestinian areas of the West Bank, and the terrorist group Hamas, which has controlled the Gaza Strip since Israel’s withdrawal from that territory in 2005.
18.4.9: Humanitarian Efforts
Humanitarian aid is material or logistical assistance in response to crises including natural and man-made disasters.
Learning Objective
Compare and contrast humanitarian aid with development aid
Key Points
- The primary objective of humanitarian aid is to save lives, alleviate suffering and maintain human dignity.
- The humanitarian community has initiated a number of inter-agency initiatives to improve accountability, quality and performance in humanitarian action.
- Prominent humanitarian organizations include Doctors Without Borders, Mercy Corps and the International Red Cross.
Key Terms
- humanitarian
-
Concerned with people’s welfare and the alleviation of suffering; humane or compassionate.
- socioeconomic
-
Of or pertaining to social and economic factors.
Humanitarian aid is material or logistical assistance in response to crises including natural and man-made disasters. The primary objective of humanitarian aid is to save lives, alleviate suffering and maintain human dignity. Humanitarian aid differs from development aid, which seeks to address the underlying socioeconomic factors leading to a crisis.
Aid is funded by donations from individuals, corporations, governments and other organizations. The funding and delivery of humanitarian aid has become increasingly international in scope. This makes it much more responsive and effective in coping with major emergencies. With humanitarian aid efforts sometimes criticized for a lack of transparency, the humanitarian community has initiated a number of inter-agency initiatives to improve its accountability, quality and performance.
The People in Aid initiative, for example, links seven areas that would improve the operations of aid organizations: health, safety and security; learning, training and development; recruitment and selection; consultation and communication; support, management and leadership; staff policies and practices; and human resources strategy.
Prominent humanitarian organizations include Doctors Without Borders, Mercy Corps and the International Red Cross. Major humanitarian projects include the Berlin Airlift, in which the U.S. and U.K. governments flew supplies into the Western-held sectors of Berlin during the Soviet blockade of 1948-1949. Another example is the aid effort for refugees fleeing the fighting in Bosnia and Kosovo in 1993 and 1999, respectively.
Humanitarian aid
Aid for refugees of the Kosovo War
18.5: Modern Foreign Policy
18.5.1: Diplomacy
Standard diplomacy involves government-to-government communication; modern diplomacy has begun to emphasize public diplomacy as well.
Learning Objective
Compare and contrast public diplomacy with standard diplomacy
Key Points
- Through diplomacy, governments of one country engage governments of another country. Usually this is accomplished by diplomats (e.g. ambassadors) in an official, U.S. government capacity in order to improve relationships between two countries, negotiate treaties, etc.
- Through public diplomacy, governments attempt to influence and communicate with the societies of another country. Film, music, arts, social and educational exchange programs, direct radio broadcasts, and the Internet can all be used to achieve public diplomacy.
- Public diplomacy has increased in importance since 9/11. The U.S. government has actively sought to use public diplomacy to improve its reputation abroad, particularly in the Middle East.
- Increasing globalization has caused public diplomacy to grow in importance in modern foreign policy. People, not just states, matter in an increasingly interconnected world.
Key Terms
- diplomacy
-
The art and practice of conducting international relations by negotiating alliances, treaties, agreements, etc., bilaterally or multilaterally, between states and sometimes international organizations, or even between polities with varying statuses, such as those of monarchs and their princely vassals.
- foreign policy
-
A government’s policy relating to matters beyond its own jurisdiction: usually relations with other nations and international organizations.
- public diplomacy
-
The communication between foreign societies, intended primarily to establish a dialogue designed to inform and influence.
Public Diplomacy
Public diplomacy has become increasingly important in modern foreign policy. Public diplomacy, or people’s diplomacy broadly speaking, is the communication between foreign societies, intended primarily to establish a dialogue designed to inform and influence. It is practiced through a variety of instruments and methods ranging from personal contact and media interviews to the Internet and educational exchanges.
Public Diplomacy Versus Standard Diplomacy
Standard diplomacy can be described as the way in which government leaders communicate with each other at the highest levels; it is the elite diplomacy we are all familiar with. By contrast, public diplomacy focuses on the way in which a country (or a multilateral organization such as the United Nations) communicates with citizens in other societies. A country may be acting deliberately or inadvertently, and through both official and private individuals and institutions. Effective public diplomacy begins with the premise that dialogue, rather than a sales pitch, is often central to achieving the goals of foreign policy: public diplomacy must be seen as a two-way street. Furthermore, public diplomacy conveys the many differing views represented by private American individuals and organizations in addition to the official views of the U.S. government.
Ambassadors and Fulbright Scholars
Eric G. John, the U.S. Ambassador to Thailand from 2007 to 2010, speaks at a reception for Fulbright grantees in Thailand. As an ambassador and formal representative of the U.S. government, John represents traditional, elite-to-elite diplomacy, while the Fulbright program emphasizes public diplomacy.
Traditional diplomacy actively engages one government with another government. In traditional diplomacy, U.S. embassy officials represent the U.S. government in a host country primarily by maintaining relations and conducting official U.S. government business with the officials of the host government, whereas public diplomacy primarily engages many diverse, non-governmental elements of a society.
US Embassies
Maintaining an embassy in every recognized country is an important traditional diplomatic task. Depicted here is the U.S. embassy in London, England.
Avenues for Public Diplomacy
Film, television, music, sports, video games and other social and cultural activities are seen by public diplomacy advocates as enormously important avenues for otherwise diverse citizens to understand each other, as well as integral to international cultural understanding, a key goal of modern public diplomacy. This goal involves not only shaping the messages that a country wishes to present abroad, by developing the necessary tools of conversation and persuasion, but also analyzing and understanding the ways that the message is interpreted by diverse societies.
Instruments used for practicing public diplomacy include broadcasting stations (the Voice of America, Radio Free Europe, Radio Liberty), exchange programs (Fulbright, the International Visitor Leadership Program), American arts and performances in foreign countries, the Internet, and personal contact.
Voice of America
Pictured is the Voice of America headquarters in Washington, DC. Media such as Voice of America seek to influence foreign societies by making American policy seem more favorable. This is a key component of modern public diplomacy.
Globalization and Increase in U.S. Public Diplomacy
Public diplomacy has been an essential element of American foreign policy for decades. It was an important tool in influencing public opinion during the Cold War. Since the attacks of September 11, 2001, the term has come back into vogue and the practice has increased in importance as the United States government works to improve its reputation abroad, particularly in the Middle East and among those in the Islamic world. Numerous panels, including those sponsored by the Council on Foreign Relations, have evaluated American efforts in public diplomacy since 9/11 and have written reports recommending that the United States take various actions to improve the effectiveness of its public diplomacy.
This traditional concept has been expanded by the idea of “population-centric foreign affairs,” in which foreign populations become a central component of foreign policy. Because people, not just states, matter in a world increasingly connected by technology and migration, an entirely new dimension of policy opens up.
18.5.2: The United Nations
The United Nations is the most important and influential international, intergovernmental organization.
Learning Objective
Analyze the United States’ position toward the United Nations
Key Points
- The United Nations was established in 1945 to replace the failed League of Nations, with the goal of promoting peace by establishing a forum for cooperation, dialogue, and humanitarian response.
- The UN’s main bodies include the Security Council, the General Assembly, the Economic and Social Council, the Secretariat, and the International Court of Justice. The U.S. is a permanent member of the UN Security Council, which is arguably the most influential body of the UN.
- The UN’s main objectives include peacekeeping and security, disarmament, human rights protection and humanitarian assistance, and social and economic development.
- The U.S. led the creation of the UN, and has been an influential player in the UN’s work since its beginning.
- Since the U.S. has taken on the role of the world’s sole superpower, the UN and the U.S. have often conflicted. In particular, the UN and the U.S. have conflicted over the U.S.’s debts to the UN, as well as the U.S.’s 2003 near-unilateral invasion of Iraq.
Key Terms
- multilateral
-
Involving more than one party (often used in politics to refer to negotiations, talks, or proceedings involving several nations).
- international organization
-
Often referred to as an intergovernmental organization; an organization made up primarily of sovereign states (referred to as member states).
- Universal Declaration of Human Rights
-
A declaration adopted by the United Nations General Assembly after World War II that represents the first global expression of rights to which all human beings are inherently entitled. It includes economic, political and social rights.
What is the United Nations?
The United Nations (UN) is an intergovernmental, international organization consisting of 193 member states, whose stated aims are facilitating cooperation in international law, international security, economic development, social progress, human rights, and the achievement of world peace. The UN was founded in 1945 to stop wars between countries and to provide a platform for dialogue. It contains multiple subsidiary organizations to carry out its missions.
History of the UN
After World War II, most government leaders recognized that humankind could not afford a third world war. The United Nations was established to replace the flawed League of Nations, in order to maintain international peace and promote cooperation in solving international economic, social and humanitarian problems.
In 1945, the UN officially came into existence upon ratification of the United Nations Charter by the five then-permanent members of the Security Council—France, the Republic of China, the Soviet Union, the United Kingdom and the United States—and by a majority of the other 46 signatories.
Organization of the UN
The organization’s principal organs are as follows:
- The General Assembly is the main deliberative assembly and is composed of all United Nations member states.
- The Security Council (UNSC) is charged with maintaining peace and security among countries. It is composed of 15 member states, including five permanent members: China, France, Russia, the UK, and the US.
- The Economic and Social Council promotes international economic and social progress through cooperation and development.
- The Secretariat, headed by the Secretary-General, provides studies, information and facilities needed by the UN bodies.
- The International Court of Justice is the primary judicial organ of the UN.
Other prominent UN agencies exist to work on particular issues. Some of the most well-known agencies are the International Atomic Energy Agency, the World Bank, the World Health Organization (WHO), the World Food Programme (WFP), and the United Nations Children’s Fund (UNICEF). It is through these agencies that the UN performs most of its humanitarian work.
The UN is financed from assessed and voluntary contributions from member states. The General Assembly approves the regular budget and determines the assessment for each member.
Objectives of the UN
One of the main objectives of the UN is peacekeeping and security. With approval from the Security Council, the UN sends peacekeepers, voluntarily provided by UN member states, to regions where armed conflict has recently ceased. The goal of the peacekeepers is to enforce the terms of peace agreements and to discourage combatants from resuming hostilities.
UN Peacekeepers
A UN peacekeeper carries out operations in Haiti. Peacekeeping and security are primary objectives of the UN, and UN peacekeepers have been deployed around the world.
The UN is a world leader in human rights protection and humanitarian assistance. The Universal Declaration of Human Rights, though not legally binding, was adopted by the General Assembly in 1948 as a common standard of achievement for all. The UN and its agencies are central in upholding and implementing the principles enshrined in the Declaration, from assisting countries transitioning to democracy, to supporting women’s rights, to providing humanitarian aid.
Lastly, the UN also focuses on social and economic development through the UN Development Program (UNDP) and other agencies like the WHO and the World Bank.
The United States and the UN
The United States is a charter member of the United Nations and one of five permanent members of the UN Security Council. The most important American contribution to the United Nations system is perhaps the Bretton Woods conference. This conference took place in 1944, and its goal was “to create a new international monetary and trade regime that was stable and predictable.” This new system opened world markets, promoted a liberal economy and was implemented through different institutions, such as the World Bank and the International Monetary Fund.
Since 1991 the United States has been the world’s dominant military, economic, social and political power (plus it hosts the UN Headquarters itself in New York City). The United Nations was not designed for such a unipolar world with a single superpower, and conflict between an ascendant U.S. and other UN members has increased.
UN Building in New York
This picture shows the UN Secretariat’s headquarters in New York City.
One such conflict occurred when the U.S. refused to pay its arrears in order to force UN compliance with U.S. wishes, as well as to cause the UN to reduce the U.S. assessment.
Another conflict between the U.S. and some UN members arose in 2003 over the issue of Iraq. U.S. President Bush maintained that Iraqi President Saddam Hussein possessed weapons of mass destruction (WMDs) in violation of his post-Gulf War obligations. In order to find these WMDs, Bush and a “coalition of the willing” invaded Iraq without explicit UN Security Council approval, causing friction within the multilateral UN.
2003 Invasion of Iraq
President George W. Bush addresses the nation in 2003, announcing the beginning of Operation Iraqi Freedom. The U.S.’s near-unilateral invasion of Iraq caused tension in the multilateral UN.
18.5.3: The International Monetary Structure
The international monetary structure involves international institutions, regional trading blocs, private players, and national governments.
Learning Objective
Explain the role played by the United States over the history of the international monetary structure
Key Points
- The International Monetary Fund (IMF) is one of the most prominent institutions in the international monetary structure. The IMF oversees the global financial system and offers assistance to states.
- Another international institution, the World Bank, is important in the global monetary structure as it provides assistance and loans to developing nations.
- The World Trade Organization is an international institution that helps settle trade disputes and negotiate trade arrangements among states.
- Other important institutions in the international monetary structure include private participants (such as banks or insurance companies), regional trading blocs (such as the Eurozone or NAFTA), and national governments.
- Some argue that because of the U.S.’s economic power and global influence, the international monetary structure has been created to match the U.S.’s preferences and national interests.
- The U.S. helped establish the current structure of the international monetary system by leading the creation of the Bretton Woods system in 1944.
Key Terms
- Washington Consensus
-
A term that refers to a set of ten relatively specific economic policy prescriptions that constitute the “standard” reform package promoted for crisis-wracked developing countries.
- World Bank
-
A group of five financial organizations whose purpose is economic development and the elimination of poverty.
- International Monetary Fund
-
The international organization entrusted with overseeing the global financial system by monitoring foreign exchange rates and balance of payments, as well as offering technical and financial assistance when asked. Abbreviated as IMF.
Major Components of the International Monetary Structure
The main components in the international monetary structure are global institutions (such as the International Monetary Fund and Bank for International Settlements), national agencies and government departments (such as central banks and finance ministries), private institutions acting on the global scale (such as banks and hedge funds), and regional institutions (like the Eurozone or NAFTA).
International Institutions
The most prominent international institutions are the International Monetary Fund (IMF), the World Bank, and the World Trade Organization (WTO). The IMF keeps account of the international balance of payments accounts of member states, but also lends money as a last resort for members in financial distress. Each member’s quota, the amount of money it provides to the fund, is based on the size of its role in the international trading system.
IMF Headquarters
The headquarters of the International Monetary Fund in Washington, DC.
The World Bank provides funding, assumes credit risk, and offers favorable terms to developing countries for development projects that could not be financed by the private sector.
The World Trade Organization settles trade disputes and negotiates international trade agreements in its rounds of talks (currently the Doha Round).
Members of the WTO
This map depicts the member states of the World Trade Organization (WTO). Dark green states are members; light green are members of the EU and thus members of the WTO as well; blue states are observer states; and gray states have no official interaction with the WTO. Notice the global reach of organizations like the WTO.
Private Participants
Also important to the international monetary structure are private participants, such as players active in the markets of stocks, bonds, foreign exchange, derivatives, and commodities, as well as investment banking. This includes commercial banks, hedge funds and private equity, pension funds, insurance companies, mutual funds, and sovereign wealth funds.
Regional Institutions
Certain regional institutions also play a role in the structure of the international monetary system. For example, the Commonwealth of Independent States (CIS), the Eurozone, Mercosur, and the North American Free Trade Agreement (NAFTA) are all examples of regional trade blocs, which are very important to the international monetary structure.
Bill Clinton Signs NAFTA
In this picture, President Bill Clinton signs the North American Free Trade Agreement into law. NAFTA, a free trade area between Canada, the U.S., and Mexico, is an example of the importance of regional trade blocs to the international monetary structure. NAFTA is also an example of the U.S.’s disproportionate power in determining the direction of the international monetary structure.
Government Institutions
Governments are also a part of the international monetary structure, primarily through their finance ministries: they pass the laws and regulations for financial markets, and set the tax burden for private players such as banks, funds, and exchanges. They also participate actively through discretionary spending. They are closely tied to central banks that issue government debt, set interest rates and deposit requirements, and intervene in the foreign exchange market.
Perspectives on the International Monetary Structure
The liberal view of the international monetary structure holds that the exchange of currencies should be determined not by state institutions but by individual players at the market level. This view has been labeled the Washington Consensus. The social democratic view challenges the liberal view, advocating for the tempering of market mechanisms and the institution of economic safeguards in an attempt to ensure financial stability and redistribution. Besides the liberal and social democratic views, neo-Marxists are highly critical of the modern financial system in that it promotes inequality between state players, particularly holding the view that the wealthier countries abuse the financial system to exercise control over developing countries’ economies.
U.S. Influence on the International Monetary Structure
The most important American contribution to the global financial system is perhaps the introduction of the Bretton Woods system. The Bretton Woods system of monetary management, created at a conference in 1944, established the rules for commercial and financial relations among the world’s major industrial states in the mid-20th century. The Bretton Woods system was the first example of a fully negotiated monetary order intended to govern monetary relations among independent nation-states. Setting up a system of rules, institutions, and procedures to regulate the international monetary system, the planners at Bretton Woods established the IMF and the International Bank for Reconstruction and Development (IBRD), which today is part of the World Bank Group.
Besides the influence of the U.S. on the Bretton Woods system, it is often claimed that the United States’ transition to neoliberalism and global capitalism also led to a change in the identity and functions of international financial institutions like the IMF. Because of the high involvement and voting power of the United States, the global economic ideology could effectively be transformed to match that of the U.S. This is consistent with the change in the IMF’s functions during the 1970s, after the Nixon Shock ended the Bretton Woods gold standard. Others also claim that, because of the disproportionate economic power of the United States, allies of the United States are able to receive bigger loans with fewer conditions.
18.5.4: Collective Military Force
A collective military force (when multiple countries pool their militaries) involves both collective security and collective defense.
Learning Objective
Compare and contrast the concepts of collective security and collective defense
Key Points
- Collective security is more far-reaching than collective defense as it addresses a broader range of threats.
- States usually establish an organization in order to pursue collective security. The UN is the most prominent example of a collective security organization.
- In collective defense (usually formalized by a treaty or an organization) states agree to come to the defense of another state; an attack on one state is considered an attack on all.
- The North Atlantic Treaty Organization (NATO) is one of the most prominent collective defense organizations. The US is a prominent and leading member of NATO.
- The 1991 Gulf War is an example of states successfully creating and deploying a collective military force.
Key Terms
- collective defense
-
An arrangement, usually formalized by a treaty and an organization, among participant states that commit support in defense of a member state if it is attacked by another state outside the organization.
- collective security
-
The concept of maintaining peace among all nations or members of a group by making the security concerns of one member important to all members. This is broader than mere military alliances; a primary example of collective security is the UN.
- North Atlantic Treaty Organization
-
An intergovernmental military alliance based on the North Atlantic Treaty which was signed on 4 April 1949. The organization constitutes a system of collective defense whereby its member states agree to mutual defense in response to an attack by any external party.
Collective Military Force
A collective military force is what arises when countries decide that it is in their best interest to pool their militaries in order to achieve a common goal. The use of collective military force in the global environment involves two primary concepts: collective security and collective defense. These concepts are similar but not identical.
Collective Security
Collective security can be understood as a security arrangement, regional or global, in which each state in the system accepts that the security of one is the concern of all, and agrees to join in a collective response to threats to, and breaches of, the peace. Collective security is more ambitious than collective defense in that it seeks to encompass the totality of states within a region or indeed globally, and to address a wide range of possible threats.
Collective security is achieved by setting up an international cooperative organization, under the auspices of international law. This gives rise to a form of international collective governance, albeit limited in scope and effectiveness. The collective security organization then becomes an arena for diplomacy.
The UN and Collective Security
The UN is often provided as the primary example of collective security. By employing a system of collective security, the UN hopes to dissuade any member state from acting in a manner likely to threaten peace, thereby avoiding any conflict.
Collective Defense
Collective defense is an arrangement, usually formalized by a treaty and an organization, among participant states that commit support in defense of a member state if it is attacked by another state outside the organization.
NATO and Collective Defense
The North Atlantic Treaty Organization (NATO) is the best-known collective defense organization. Its now-famous Article V calls on (but does not fully commit) member states to assist another member under attack. This article was invoked after the September 11 attacks on the United States, after which other NATO members provided assistance to the US War on Terror in Afghanistan. As a global military and economic superpower, the US has taken charge of leading many of NATO’s initiatives and interventions.
NATO in Afghanistan
In 2003, NATO took command of ISAF (International Security Assistance Force), which was the group of international troops operating in Afghanistan. This picture depicts a commander passing the NATO flag during a change of command in Afghanistan.
September 11 and Collective Defense
The September 11 attacks on the United States caused NATO to invoke its collective defense article for the first time.
NATO Coverage
This map depicts current members of the North Atlantic Treaty Organization, one of the primary examples of a collective defense organization.
Benefits and Drawbacks to Collective Defense
Collective defense entails benefits as well as risks. On the one hand, by combining and pooling resources, it can reduce any single state’s cost of providing fully for its security. Smaller members of NATO, for example, have leeway to invest a greater proportion of their budgets in non-military priorities, such as education or health, since they can count on other members to come to their defense if needed.
On the other hand, collective defense also involves risky commitments. Member states can become embroiled in costly wars from which neither the direct victim nor the aggressor benefits. In the First World War, countries in the collective defense arrangement known as the Triple Entente (France, Britain, Russia) were pulled into war quickly when Russia started full mobilization against Austria-Hungary, whose ally Germany subsequently declared war on Russia.
Using Collective Military Force: The 1991 Gulf War
The Gulf War (2 August 1990 – 28 February 1991), codenamed Operation Desert Storm, was a war waged by a UN-authorized coalition force from 34 nations led by the United States, against Iraq in response to Iraq’s invasion and annexation of Kuwait. This war is often given as an example of the successful deployment of a collective military force.
In August 1990, Iraqi troops invaded Kuwait. This invasion met with unified international condemnation, and brought immediate economic sanctions against Iraq by members of the UN Security Council. U.S. President George H. W. Bush deployed American forces into Saudi Arabia, and an array of nations joined the coalition. In this conflict, the UN, the US, and other nations were united into a military force that successfully expelled the Iraqi aggressor from sovereign Kuwait.
18.5.5: Economic Aid and Sanctions
States can give economic aid to help another country, or implement economic sanctions to try to force another country to change its policies.
Learning Objective
Analyze criticisms of the institutions, practices and policies of economic aid
Key Points
- Economic aid is given, at least partly, with the motivation of helping the recipient country. Aid can also be a tool of foreign policy given to show diplomatic approval, reward a government, or gain some other benefit.
- Economic aid is often given with conditions, meaning that in order to receive the aid, the recipient country must change economic policies, agree to spend the aid only on certain items, etc.
- Economic aid is often criticized for being motivated more by donor concerns than recipient concerns, for being a form of neocolonialism, and for simply not being effective.
- Economic sanctions are a foreign policy tool in which one country places trade penalties on another country, and may include trade barriers, duties, or quotas.
- Economic sanctions can be put in place as a foreign policy measure designed to make another country change some sort of human rights or political policy (for example, the US has sanctions against Iran).
- One country can declare sanctions against another country in protest of unfair trading practices.
Key Terms
- conditionality
-
A condition applied to the access of a government to credit facilities and other international financial assistance, especially from the IMF and the World Bank.
- neocolonialism
-
The control or domination by a powerful country over weaker ones (especially former colonies) by the use of economic pressure, political suppression and cultural dominance.
- sanction
-
A penalty, or some coercive measure, intended to ensure compliance; especially one adopted by several nations, or by an international body.
Economic Aid and Sanctions
As part of foreign policy, states can use money and monetary policies to either help or penalize other states. Economic aid is a voluntary transfer of resources from one country to another, given at least partly with the objective of benefiting the recipient country. Sanctions, on the other hand, are penalties (usually in the form of trade policies) that are applied to one country by another.
Development Aid
States can give economic aid to help another country’s economic development. In this picture, the US government has supplied water pumps to a village in Afghanistan.
Economic Aid
Aid may have functions other than humanitarian ones: it may be given to signal diplomatic approval, strengthen a military ally, reward a government for behavior desired by the donor, extend the donor’s cultural influence, provide infrastructure needed by the donor for resource extraction from the recipient country, or gain other kinds of commercial access. The threat of withdrawing aid can be another means by which a state can pursue its national interest. Humanitarianism and altruism are, nevertheless, significant motivations for the giving of aid.
A major proportion of aid from donor nations is based on conditionality, meaning that the aid comes with conditions. For example, some donors mandate that a receiving nation must spend the aid on products and expertise originating only from the donor country. Similarly, the World Bank and the International Monetary Fund, as primary holders of developing countries’ debt, attach structural adjustment conditionalities to loans, which generally include eliminating state subsidies and privatizing state services.
Criticisms of Economic Aid
Aid is seldom given from motives of pure altruism; instead, it is often used as a tool of foreign policy. It may be given as a means of supporting an ally in international politics. It may also be given with the intention of influencing the political process in the receiving nation. Aid to underdeveloped countries is often more in the interest of the donor than the recipient, or even a form of neocolonialism. In recent decades, aid by organizations such as the International Monetary Fund and the World Bank has been criticized as being primarily a tool used to open new areas up to global capitalists, and being only secondarily, if at all, concerned with the well-being of the people in the recipient countries.
Economic aid is often criticized for simply not being effective: it does not do what it was intended to do or help the people it was intended to help. Statistical studies have produced widely differing assessments of the correlation between aid and economic growth, and no firm consensus has emerged to suggest that foreign aid boosts growth. It has also been argued that foreign aid harms recipient governments, often because aid distributed by local politicians finances the creation of corrupt government.
Economic Sanctions: Resolving Trade Disputes
Economic sanctions are domestic penalties applied by one country or group of countries on another for a variety of reasons. Economic sanctions include, but are not limited to, tariffs, trade barriers, import duties, and import or export quotas.
Sanctions can arise from an unresolved trade or policy dispute, such as a disagreement about the fairness of a policy affecting international trade. For instance, one country may conclude that another is unfairly subsidizing exports of one or more products, or unfairly protecting some sector from competition from imported goods or services. The first country may retaliate by imposing import duties on goods or services from the second.
For example, in March 2010, Brazil introduced new sanctions against the US. These sanctions were imposed on the basis that the US government was paying subsidies to cotton farmers, an action not allowed by the WTO. The WTO is currently supervising talks between the states to remove the sanctions.
Economic Sanctions: Achieving Policy Goals
Economic sanctions also can be a coercive foreign policy measure used to achieve particular policy goals related to trade, governance, or humanitarian violations. For example, the United States has imposed economic sanctions against Iran for years on the basis that the Iranian government sponsors groups who work against US interests. The United Nations imposed stringent economic sanctions on Iraq after the first Gulf War, partly as an attempt to make the Iraqi government cooperate with the UN weapons inspectors’ monitoring of Iraq’s weapons programs.
Iran and Sanctions
The US has long upheld economic sanctions against Iran, arguing that Iran pursues policies that are contrary to US national interest.
18.5.6: Arbitration
Arbitration is a form of dispute resolution that can be used to resolve international commercial, investment, and interstate conflicts.
Learning Objective
Explain the advantages of international commercial and investment arbitration
Key Points
- Arbitration is a form of alternative dispute resolution in which a third party reviews evidence in a dispute and makes a decision that is legally binding for all involved.
- International arbitration has frequently been used in resolving international commercial or investment disputes.
- International commercial and investment arbitration is popular primarily because it avoids the uncertainties and problems that come with litigating in a foreign national court, and because arbitration is confidential and enforceable.
- The main legal instrument that governs international commercial arbitration is the 1958 New York Convention, which was created under the auspices of the UN and has been signed by more than 140 countries.
- Arbitration can also be an important tool of foreign policy, as it provides a way for states to resolve their conflicts peaceably. For example, in 1903, arbitration settled a border dispute between the United States and Canada.
Key Terms
- New York Convention
-
Widely considered the foundational instrument for international arbitration, this agreement requires the courts of states who signed the agreement to give effect to private arbitration agreements and to recognize and enforce arbitration awards made in other contracting states.
- alternative dispute resolution
-
Resolution of a dispute through negotiation, mediation, arbitration, or similar means, as opposed to litigation.
- arbitration
-
A process through which two or more parties use an arbitrator or arbiter (an impartial third party) in order to resolve a dispute.
What is Arbitration?
Arbitration, a form of alternative dispute resolution, is a legal technique for the resolution of disputes outside the courts, where the parties to a dispute refer it to one or more persons by whose decision they agree to be bound. It is a resolution technique in which a third party reviews the evidence in the case and imposes a decision that is legally binding for both sides and enforceable.
Arbitration is often used for the resolution of commercial disputes, particularly in the context of international commercial transactions. Arbitration can be an important tool of foreign policy, as it allows states a forum to resolve disputes.
International Arbitration
International arbitration is a leading method for resolving disputes arising from international commercial agreements and other international relationships. As with arbitration generally, international arbitration is a creature of contract. In other words, the parties agree to submit disputes to binding resolution by arbitrators, usually by including a provision for the arbitration of future disputes in their contract. The practice of international arbitration has developed so as to allow parties from different legal and cultural backgrounds to resolve their disputes, generally without the formalities of their respective legal systems.
Advantages to International Arbitration
International arbitration has enjoyed growing popularity with business and other users over the past 50 years. There are a number of reasons that parties elect to have their international disputes resolved through arbitration. These include: the desire to avoid the uncertainties and local practices associated with litigation in national courts, the desire to obtain a quicker, more efficient decision, the relative enforceability of arbitration agreements and awards, the commercial expertise of arbitrators, the parties’ freedom to select and design the arbitral procedures, confidentiality of arbitration, and other benefits.
The New York Convention
The principal instrument governing the enforcement of international commercial arbitration agreements and awards is the United Nations Convention on the Recognition and Enforcement of Foreign Arbitral Awards of 1958 (the “New York Convention”). The New York Convention was drafted under the auspices of the United Nations and has been ratified by more than 140 countries, including most major countries involved in significant international trade and economic transactions. The New York Convention requires the states that have ratified it to recognize and enforce international arbitration agreements and foreign arbitral awards issued in other contracting states, subject to certain limited exceptions. These provisions of the New York Convention, together with the large number of contracting states, have created an international legal regime that significantly favors the enforcement of international arbitration agreements and awards.
New York Convention Signatories
This map depicts all of the countries that have signed on to the New York Convention. This extensive treaty is often recognized as the most important instrument governing international commercial arbitration.
International Commercial and Investment Arbitration
The resolution of disputes under international commercial contracts is widely conducted under the auspices of several major international institutions and rule-making bodies. Specialist dispute resolution bodies also exist, such as the World Intellectual Property Organization (WIPO), which has an arbitration and mediation center and a panel of international neutrals specializing in intellectual property and technology-related disputes.
The last few decades have seen the promulgation of numerous investment treaties such as the Energy Charter Treaty, which are designed to encourage investment in signatory countries by offering protections to investors from other signatory states.
Interstate Arbitration
Arbitration has been used for centuries for the resolution of disputes between states and state-like entities. The 1899 and 1907 Hague Conferences addressed arbitration as a mechanism for resolving state-to-state disputes, leading to the adoption of the Hague Conventions for the Pacific Settlement of International Disputes. The Conventions established the Permanent Court of Arbitration and a rudimentary institutional framework for international arbitration of inter-state disputes.
Members of the Permanent Court of Arbitration
These states are parties to the Permanent Court of Arbitration (the green states signed on to the 1907 agreement and the blue ones to the 1899 agreement).
For example, in 1903, arbitration resolved a dispute over the Canada-Alaska border. The Alaska Purchase of 1867 drew the boundary between Canada and Alaska in ambiguous fashion. With the gold rush into the Yukon in 1898, miners had to enter through Alaska, and Canada wanted the boundary redrawn to obtain its own port. The issue went to arbitration, and the Alaska boundary dispute was resolved in favor of the United States in 1903. In recent years, international arbitration has been used to resolve a number of disputes between states or state-like entities, making arbitration an important tool in modern foreign policy.
Alaska Boundary Dispute
Blue is the border as was claimed by the United States, red is the border as was claimed by Canada. The Canadian province of British Columbia claimed a greater area, shown in green. Yellow shows the current border, after the boundary dispute was resolved by arbitration in 1903. Arbitration can be an important tool in solving interstate disputes.
Chapter 17: Social Policy
17.1: The Welfare State
17.1.1: History of the Welfare State
The welfare state originated in Germany during the 19th century with the policies implemented by German Chancellor Otto von Bismarck.
Learning Objective
Discuss the historical origins and principles of the welfare state as a concept of government and identify its features in the United States
Key Points
- Otto von Bismarck, the first Chancellor of Germany, created the modern welfare state by building on a tradition of welfare programs in Prussia and Saxony that began as early as the 1840s, and by winning the support of business.
- His paternalistic programs won the support of German industry because their goals were to win the support of the working class for the German Empire and reduce the outflow of emigrants to the United States, where wages were higher but welfare did not exist.
- The United Kingdom, as a modern welfare state, started to emerge with the Liberal welfare reforms of 1906–1914 under Liberal Prime Minister Herbert Asquith. These included the passing of the Old-Age Pensions Act in 1908, the introduction of free school meals in 1909, the 1909 Labour Exchanges Act.
- Although the United States lagged far behind European countries in instituting concrete social welfare policies, the earliest and most comprehensive philosophical justification for the welfare state was produced by the American sociologist Lester Frank Ward.
- The welfare system in the United States began in the 1930s, during the Great Depression. After the Great Society legislation of the 1960s, for the first time a person who was not elderly or disabled could receive need-based aid from the federal government.
Key Terms
- Otto von Bismarck
-
Otto von Bismarck was a conservative German statesman who dominated European affairs from the 1860s to his dismissal in 1890. After a series of short victorious wars he unified numerous German states into a powerful German Empire under Prussian leadership, then created a “balance of power” that preserved peace in Europe from 1871 until 1914.
- welfare
-
Various forms of financial aid provided by the government to those who are in need of it (an abbreviated form of “welfare assistance”).
Example
- The sociologist T.H. Marshall identified the welfare state as a distinctive combination of democracy, welfare, and capitalism. Examples of early welfare states in the modern world are Germany, all of the Nordic countries, the Netherlands, Uruguay, New Zealand, and the United Kingdom in the 1930s.
Introduction
A welfare state is a “concept of government in which the state plays a key role in the protection and promotion of the economic and social well-being of its citizens. It is based on the principles of equality of opportunity, equitable distribution of wealth, and public responsibility for those unable to avail themselves of the minimal provisions for a good life. The general term may cover a variety of forms of economic and social organization.”
History of the Welfare State
Otto von Bismarck, the first Chancellor of Germany, created the modern welfare state by building on a tradition of welfare programs in Prussia and Saxony that began as early as the 1840s, and by winning the support of business. Bismarck introduced old age pensions, accident insurance and medical care that formed the basis of the modern European welfare state. His paternalistic programs won the support of German industry because their goals were to win the support of the working class for the German Empire and reduce the outflow of emigrants to the United States, where wages were higher but welfare did not exist.
Otto von Bismarck
Otto von Bismarck, the first Chancellor of Germany, created the modern welfare state by building on a tradition of welfare programs in Prussia and Saxony that began as early as the 1840s, and by winning the support of business.
The United Kingdom, as a modern welfare state, started to emerge with the Liberal welfare reforms of 1906–1914 under Liberal Prime Minister Herbert Asquith. These included the passing of the Old-Age Pensions Act in 1908, the introduction of free school meals in 1909, the 1909 Labour Exchanges Act, the Development Act 1909, which heralded greater government intervention in economic development, and the enacting of the National Insurance Act 1911, which set up a national insurance contribution for unemployment and health benefits from work.
Herbert Asquith
The United Kingdom, as a modern welfare state, started to emerge with the Liberal welfare reforms of 1906–1914 under Liberal Prime Minister Herbert Asquith.
The Welfare State in the United States
Although the United States lagged far behind European countries in instituting concrete social welfare policies, the earliest and most comprehensive philosophical justification for the welfare state was produced by the American sociologist Lester Frank Ward (1841–1913), whom the historian Henry Steele Commager called “the father of the modern welfare state.” Reforms like those instituted by Bismarck in Germany were strongly opposed by conservative thinkers such as the very influential English philosopher and evolutionary theorist Herbert Spencer, who argued that coddling the poor and unfit would simply allow them to reproduce and delay social progress. Ward set out to systematically dismantle Spencer’s arguments, which he saw as delaying and paralyzing progressive government action. Central to Ward’s theories was his belief that a universal and comprehensive system of education was necessary if a democratic government was to function successfully. Ward’s writings had a profound influence on a young generation of progressive thinkers and politicians whose work culminated in President Franklin D. Roosevelt’s New Deal welfare state policies of the 1930s.
Lester Ward
Although the United States lagged far behind European countries in instituting concrete social welfare policies, the earliest and most comprehensive philosophical justification for the welfare state was produced by the American sociologist Lester Frank Ward (1841–1913) whom the historian Henry Steele Commager called “the father of the modern welfare state.”
The welfare system in the United States began in the 1930s, during the Great Depression. After the Great Society legislation of the 1960s, for the first time a person who was not elderly or disabled could receive need-based aid from the federal government. Aid could include general welfare payments, health care through Medicaid, food stamps, special payments for pregnant women and young mothers, and federal and state housing benefits. In 1968, 4.1% of families were headed by a woman receiving welfare assistance; by 1980, the percentage had increased to 10%. In the 1970s, California was the U.S. state with the most generous welfare system. The federal government pays virtually all food stamp costs. In 2008, 28.7 percent of households headed by single women were considered poor.
Modern Model
Modern welfare programs differed from previous schemes of poverty relief due to their relatively universal coverage. The development of social insurance in Germany under Bismarck was particularly influential. Some schemes were based largely in the development of autonomous, mutualist provision of benefits. Others were founded on state provision. The term was not, however, applied to all states offering social protection. The sociologist T.H. Marshall identified the welfare state as a distinctive combination of democracy, welfare, and capitalism. Examples of early welfare states in the modern world are Germany, all of the Nordic countries, the Netherlands, Uruguay, New Zealand, and the United Kingdom in the 1930s.
17.1.2: Foundations of the Welfare State
The welfare system in the United States was created on the grounds that the market cannot provide goods and services universally.
Learning Objective
Compare and contrast the social-democratic welfare state, the Christian-democratic welfare state and the liberal welfare state
Key Points
- The welfare state involves a transfer of funds from the state to the services provided (examples include healthcare, education, and housing), as well as directly to individuals.
- Unlike welfare states built on social-democratic foundations, the American welfare state was not designed to promote a redistribution of political power from capital to labor; nor was it designed to mediate class struggle.
- Eligibility for welfare depends on a variety of factors, including gross and net income, family size, and other circumstances like pregnancy, homelessness, unemployment, and medical conditions.
- The ideal Social-Democratic welfare state is based on the principle of universalism, granting access to benefits and services based on citizenship. Such a welfare state is said to provide a relatively high degree of autonomy, limiting reliance on the family and the market.
- Christian-democratic welfare states are based on the principle of subsidiarity and the dominance of social insurance schemes, offering a medium level of decommodification and a high degree of social stratification.
- The Liberal welfare state is based on the notion of market dominance and private provision; ideally, the state only interferes to ameliorate poverty and provide for basic needs, largely on a means-tested basis.
Key Term
- entitlement
-
A legal obligation on a government to make payments to a person, business, or unit of government that meets the criteria set in law, such as the Pell Grant and social security in the US.
Example
- In 2002, total U.S. social welfare expenditure constituted roughly 35% of GDP, with purely public expenditure constituting 21%, publicly supported but privately provided welfare services constituting 10% of GDP, and purely private services constituting 4% of GDP.
Introduction
Modern welfare states include the Nordic countries, such as Iceland, Sweden, Norway, Denmark, and Finland, which employ a system known as the Nordic model. The welfare state involves a transfer of funds from the state to the services provided (examples include healthcare, education, and housing), as well as directly to individuals. The welfare state is funded through redistributionist taxation and is often referred to as a type of “mixed economy.”
Three types of Welfare States
According to the political scientist Gøsta Esping-Andersen, there are three ways of organizing a welfare state, rather than only two. Esping-Andersen constructed the welfare regime typology acknowledging the ideational importance and power of the three dominant political movements of the long 20th century in Western Europe and North America: Social Democracy, Christian Democracy, and Liberalism. The ideal Social-Democratic welfare state is based on the principle of universalism, granting access to benefits and services based on citizenship. Such a welfare state is said to provide a relatively high degree of autonomy, limiting reliance on the family and the market. In this context, social policies are perceived as “politics against the market.” Examples of Social Democratic states include Denmark, Finland, the Netherlands, Norway, and Sweden.
Christian-democratic welfare states are based on the principle of subsidiarity and the dominance of social insurance schemes, offering a medium level of decommodification and a high degree of social stratification. Examples include Austria, Belgium, France, Germany, Spain and Italy. On the other hand, the liberal regime is based on the notion of market dominance and private provision; ideally, the state only interferes to ameliorate poverty and provide for basic needs, largely on a means-tested basis. Examples of the Liberal welfare state include Australia, Canada, Japan, Switzerland and the United States.
The American welfare state was designed to address market shortcomings and do what private enterprises cannot or will not do. Unlike welfare states built on social-democratic foundations, it was not designed to promote a redistribution of political power from capital to labor, nor was it designed to mediate class struggle. Income redistribution, through programs such as the Earned Income Tax Credit (EITC), has been defended on the grounds that the market cannot provide goods and services universally, while interventions going beyond transfers are justified by the presence of imperfect information, imperfect competition, incomplete markets, externalities, and the presence of public goods. The welfare state, whether through charitable redistribution or regulation that favors smaller players, is motivated by reciprocal altruism.
Unlike in Europe, Christian democratic and social democratic theories have not played a major role in shaping welfare policy in the United States. Entitlement programs in the U.S. were virtually non-existent until the administration of Franklin Delano Roosevelt and the implementation of the New Deal programs in response to the Great Depression. Between 1932 and 1981, modern American liberalism dominated U.S. economic policy and the entitlements grew along with American middle class wealth.
The New Deal
Top left: The Tennessee Valley Authority, part of the New Deal, being signed into law in 1933. Top right: FDR (President Franklin Delano Roosevelt) was responsible for the New Deal. Bottom: A public mural from one of the artists employed by the New Deal’s WPA program.
Costs
In 2002, total U.S. social welfare expenditure constituted roughly 35% of GDP, with purely public expenditure constituting 21%, publicly supported but privately provided welfare services constituting 10% of GDP, and purely private services constituting 4% of GDP. This compares to France and Sweden, whose welfare spending ranges from 30% to 35% of GDP.
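The breakdown above is simple share-of-GDP arithmetic. The following minimal Python sketch converts those 2002 percentage shares into rough dollar figures; the GDP value used is an assumed, approximate figure chosen only for illustration, not a number reported in this text.

    # Illustrative only: convert the 2002 welfare-spending shares of GDP
    # into rough dollar amounts. The GDP figure below is an assumption
    # made for the example, not a value reported in the text.
    gdp = 10.6e12  # assumed U.S. GDP for 2002, in dollars (approximate)

    shares = {
        "purely public": 0.21,
        "publicly supported, privately provided": 0.10,
        "purely private": 0.04,
    }

    for label, share in shares.items():
        print(f"{label}: about ${share * gdp / 1e12:.2f} trillion")

    total = sum(shares.values())
    print(f"total: {total:.0%} of GDP, about ${total * gdp / 1e12:.2f} trillion")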
In a 2011 article, Forbes reported, “The best estimate of the cost of the 185 federal means tested welfare programs for 2010 for the federal government alone is nearly $700 billion, up a third since 2008, according to the Heritage Foundation. Counting state spending, total welfare spending for 2010 reached nearly $900 billion, up nearly one-fourth since 2008 (24.3%).”
17.1.3: Welfare Reform
Welfare reform has repeatedly attempted to phase out welfare by promoting self-sufficiency, but has so far been unsuccessful in this regard.
Learning Objective
Describe the features of the Welfare Reform Act of 1996 under President Bill Clinton
Key Points
- Prior to reform, states were given “limitless” money by the federal government under the 60-year-old Aid to Families with Dependent Children (AFDC) program, with funding increasing for each family added to the welfare rolls.
- In 1996, under the Bill Clinton administration, Congress passed the Personal Responsibility and Work Opportunity Reconciliation Act, which gave more control of the welfare system to the states, though there are basic requirements the states need to meet with regard to welfare services.
- Each state must meet certain criteria to ensure recipients are being encouraged to work themselves out of welfare. The new program is called Temporary Assistance for Needy Families (TANF), which was formally instituted in 1997.
- TANF encourages states to require some sort of employment search in exchange for providing funds to individuals, and imposes a five-year lifetime limit on cash assistance. The bill restricts welfare from most legal immigrants and increased financial assistance for child care.
- Critics of the reforms sometimes point out that the massive decrease of people on the welfare rolls during the 1990s was due almost exclusively to their offloading into workfare, giving them a different classification than classic welfare recipient.
Key Terms
- reform
-
To form again or in a new configuration.
- Temporary Assistance for Needy Families
-
Temporary Assistance for Needy Families (TANF), formally instituted in 1997, encourages states to require some sort of employment search in exchange for providing funds to individuals, and imposes a five-year lifetime limit on cash assistance.
Example
- In July 2012, the Department of Health and Human Services released a memo notifying states that they could apply for a waiver of the work requirements of the TANF program, but only if they were also able to find credible ways to increase employment by 20%. The waiver would allow states to provide assistance without having to enforce the work component of the program, which currently states that 50 percent of a state’s TANF caseload must meet work requirements.
Introduction
Welfare reform refers to improving how a nation helps citizens in poverty. In the United States, the term was used to press Congress to enact the Personal Responsibility and Work Opportunity Reconciliation Act, which further reduced aid to the poor in order to limit government deficit spending. Social programs in the United States are welfare subsidies designed to meet the needs of the U.S. population. Proposals for federal programs began with Theodore Roosevelt’s New Nationalism and expanded with Woodrow Wilson’s New Freedom, Franklin D. Roosevelt’s New Deal, John F. Kennedy’s New Frontier, and Lyndon B. Johnson’s Great Society.
Welfare Reform under President Bill Clinton
Before the Welfare Reform Act of 1996, welfare assistance was “once considered an open-ended right,” but welfare reform converted it “into a finite program built to provide short-term cash assistance and steer people quickly into jobs.” Prior to reform, states were given “limitless” money by the federal government under the 60-year-old Aid to Families with Dependent Children (AFDC) program, with funding increasing for each family added to the welfare rolls. This gave states no incentive to direct welfare funds to the neediest recipients or to encourage individuals to go off welfare benefits (the state lost federal money when someone left the system).
In 1996, under the Bill Clinton administration, Congress passed the Personal Responsibility and Work Opportunity Reconciliation Act, which gave more control of the welfare system to the states, though there are basic requirements the states need to meet with regard to welfare services. Still, most states offer basic assistance, such as health care, food stamps, child care assistance, unemployment, cash aid, and housing assistance. After the reforms, which President Clinton said would “end welfare as we know it,” federal money was given out at a flat rate per state based on population.
Bill Clinton Signing Welfare Reform Act of 1996
A central pledge of Clinton’s campaign was to reform the welfare system, adding changes such as work requirements for recipients.
Each state must meet certain criteria to ensure recipients are being encouraged to work themselves out of welfare. The new program is called Temporary Assistance for Needy Families (TANF), which was formally instituted in 1997. It encourages states to require some sort of employment search in exchange for providing funds to individuals, and imposes a five-year lifetime limit on cash assistance. The bill restricts welfare from most legal immigrants and increased financial assistance for child care. The federal government also maintains an emergency $2 billion TANF fund to assist states that may have rising unemployment.
Following these changes, millions of people left the welfare rolls (a 60% drop overall), employment rose, and the child poverty rate was reduced. A 2007 Congressional Budget Office study found that incomes in affected families rose by 35%. The reforms were “widely applauded” after “bitter protest.” The Times called the reform “one of the few undisputed triumphs of American government in the past 20 years.”
Critics of the reforms sometimes point out that the massive decrease in the number of people on the welfare rolls during the 1990s was not due to a rise in actual gainful employment in this population, but rather was due almost exclusively to their offloading into workfare, which gave them a different classification than classic welfare recipients. The late 1990s were also considered an unusually strong economic time, and critics voiced their concern about what would happen in an economic downturn.
17.2: Social Policies
17.2.1: Education Policy
Government-supported, free public schools were established after the American Revolution and expanded in the 19th century.
Learning Objective
Identify the main issues and institutions behind education policy
Key Points
- By the early 20th century, states had begun passing laws to make schooling compulsory, and by 1910, 72 percent of children were attending school. Private schools continued to spread during this time, as did colleges and – in rural areas – land grant colleges.
- The landmark Supreme Court case Brown v. Board of Education made the desegregation of elementary and high schools a national priority, while the Pell Grant program helped poor minorities gain access to college.
- The resulting No Child Left Behind Act of 2001 was controversial and its goals proved to be unrealistic. A commission established in 2006 evaluated higher education, but its recommendations are yet to be fully implemented.
- School districts are usually separate from other local jurisdictions, with independent officials and budgets. State governments usually set educational standards and make standardized testing decisions.
- The reliance on local funding sources has led to a long history of court challenges about how states fund their schools. These challenges have relied on interpretations of state constitutions after a U.S. Supreme Court ruling that school funding was not a matter of the U.S. Constitution.
- At the college and university level, student loan funding is split in half; half is managed by the Department of Education directly, called the Federal Direct Student Loan Program (FDSLP).
Key Terms
- standardized
-
Designed or constructed in a standard manner or according to an official standard.
- compulsory
-
Required; obligatory; mandatory
Background
Government-supported, free public schools were introduced after the American Revolution and expanded in the 19th century as a result of the efforts of men like Horace Mann and Booker T. Washington. By 1870, all states had free elementary schools, but only in urban centers. By the early 20th century, states had begun passing laws that made schooling compulsory. By 1910, 72 percent of children were attending school. Private schools continued to spread during this time, as did colleges and – in rural areas – land grant colleges. The year 1910 also saw the first true high schools.
During the rest of the 20th century, educational efforts were centered on reducing the inequality in the school system. The landmark Supreme Court case Brown v. Board of Education made the desegregation of elementary and high schools a national priority, while the Pell Grant program helped poor minorities gain access to college. Special education was made into federal law in 1975.
The Elementary and Secondary Education Act of 1965 made standardized testing a requirement, and in 1983, a commission was established to evaluate the results of this testing and propose a course of action. The resulting No Child Left Behind Act of 2001 was controversial and its goals proved to be unrealistic. A commission established in 2006 evaluated higher education, but its recommendations are yet to be fully implemented.
Education in the United States is mainly provided by the public sector, with control and funding coming from three levels: local, state, and federal, in that order. Child education is compulsory. There are also a large number and wide variety of higher education institutions throughout the country that one can choose to attend, both publicly and privately administered.
Public education is universally available. School curricula, funding, teaching, employment, and other policies are set through locally elected school boards with jurisdiction over the school districts. The districts receive many directives from the state government. School districts are usually separate from other local jurisdictions, with independent officials and budgets. State governments usually set educational standards and make standardized testing decisions.
Statistics in Education
Among the country’s adult population, over 85 percent have completed high school and 27 percent have received a bachelor’s degree or higher. According to a 2005 study by the U.S. Census Bureau, the average salary of college or university graduates is greater than $51,000, exceeding the national average for those without a high school diploma by more than $23,000. The 2010 unemployment rate for high school graduates was 10.8%; the rate for college graduates was 4.9%. The country has a reading literacy rate of 99% among the population over age 15, while ranking below average in science and mathematics proficiency compared to other developed countries. In 2008, the high school graduation rate was 77%, below that of most developed countries.
This poor performance has pushed public and private efforts such as the No Child Left Behind Act. In addition, the ratio of college-educated adults entering the workforce to the general population (33%) is slightly below the mean of other developed countries (35%), and the rate of participation of the labor force in continuing education is high.
Funding for Education
The reliance on local funding sources has led to a long history of court challenges about how states fund their schools. These challenges have relied on interpretations of state constitutions after a U.S. Supreme Court ruling that school funding was not a matter of the U.S. Constitution (San Antonio Independent School District v. Rodriguez, 411 U.S. 1 (1973)). The state court cases, beginning with the California case of Serrano v. Priest, 5 Cal.3d 584 (1971), were initially concerned with equity in funding, which was defined in terms of variations in spending across local school districts. More recently, state court cases have begun to consider what has been called ‘adequacy.’ These cases have questioned whether the total amount of spending was sufficient to meet state constitutional requirements.
At the college and university level, student loan funding is split in half: half is managed by the Department of Education directly, called the Federal Direct Student Loan Program (FDSLP). The other half is managed by commercial entities such as banks, credit unions, and financial services firms such as Sallie Mae, under the Federal Family Education Loan Program (FFELP). Some schools accept only FFELP loans; others accept only FDSLP. Still others accept both, and a few schools will accept neither, in which case students must seek out private alternatives for student loans. The federal Pell Grant program provides funding for students who demonstrate financial need.
College Tuition Rising
Cost of US college education relative to the consumer price index (inflation).
17.2.2: Employment Policy
Employment policy sets minimum standards for wages and working conditions, established and enforced through federal, state, and local law.
Learning Objective
Illustrate how employment policy is driven by federal, state and local law
Key Points
- Federal law not only sets the standards that govern workers’ rights to organize in the private sector, but also overrides most state and local laws that attempt to regulate this area.
- Federal and state laws protect workers from employment discrimination. In most areas these two bodies of law overlap. Federal law permits states to enact their own statutes barring discrimination on the basis of race, gender, religion, national origin and age.
- The NLRA and RLA displace state laws that attempt to regulate the right to organize, to strike and to engage in collective bargaining. The NLRB has exclusive jurisdiction to determine whether an employer has engaged in an unfair labor practice and to decide what remedies should be provided.
- US private-sector employees thus do not have the indefinite contracts traditionally common in many European countries, Canada and New Zealand.
- Public employees in both federal and state government are also typically covered by civil service systems that protect them from unjust discharge.
- The Fair Labor Standards Act of 1938 (FLSA) establishes minimum wage and overtime rights for most private sector workers, with a number of exemptions and exceptions. The FLSA does not preempt state and local governments from providing greater protections under their own laws.
Key Terms
- statutory protections
-
Protections established by statute, i.e., by laws enacted by the legislature.
- safety standards
-
State regulations that set the standards for a safe work environment.
- minimum wage
-
The lowest rate at which an employer can legally pay an employee; usually expressed as pay per hour.
Background
United States labor law is a heterogeneous collection of state and federal laws. Federal law not only sets the standards that govern workers’ rights to organize in the private sector, but also overrides most state and local laws that attempt to regulate this area. Federal law also provides more limited rights for employees of the federal government. These federal laws do not apply to employees of state and local governments, agricultural workers, or domestic employees; any statutory protections those workers have derive from state law.
Federal law establishes minimum wages and overtime rights for most workers in the private and public sectors; state and local laws may provide more expansive rights. Federal law provides minimum workplace safety standards, but allows the states to take over those responsibilities and to provide more stringent standards.
Federal and state laws protect workers from employment discrimination. In most aspects, these two bodies of law overlap. Federal law permits states to enact their own statutes barring discrimination on the basis of race, gender, religion, national origin, and age, so long as the state law does not provide less protection than federal law. Federal law, on the other hand, preempts most state statutes that would bar employers from discriminating against employees to prevent them from obtaining pensions or other benefits, or from retaliating against them for asserting those rights.
Regulation of Unions and Organizing
The Taft-Hartley Act (also known as the “Labor-Management Relations Act”), passed in 1947, loosened some of the restrictions on employers, changed NLRB election procedures, and added a number of limitations on unions. The Act, among other things, prohibits jurisdictional strikes and secondary boycotts by unions, and authorizes individual states to pass “right-to-work laws”, regulates pension and other benefit plans established by unions and provides that federal courts have jurisdiction to enforce collective bargaining agreements.
For the most part, the NLRA and RLA displace state laws that attempt to regulate the right to organize, to strike, and to engage in collective bargaining. The NLRB has exclusive jurisdiction to determine whether an employer has engaged in an unfair labor practice and to decide what remedies should be provided. States and local governments can, on the other hand, impose requirements when acting as market participants – such as requiring that all contractors sign a project labor agreement to avoid strikes when building a public works project – that they could not impose if they were attempting to regulate those employers’ labor relations directly.
Regulation of Wages, Benefits and Working Conditions
The Fair Labor Standards Act of 1938 (FLSA) establishes minimum wage and overtime rights for most private sector workers, with a number of exemptions and exceptions. The FLSA does not preempt state and local governments from providing greater protections under their own laws. A number of states have enacted higher minimum wages and extended their laws to cover workers who are excluded under the FLSA or to provide rights that federal law ignores. Local governments have also adopted a number of “living wage” laws that require those employers that contract with them to pay higher minimum wages and benefits to their employees. The federal government, along with many state governments, also requires employers to pay the prevailing wage to workers on public works projects, a practice which typically reflects the standards established by unions’ collective bargaining agreements in the area.
History of the Minimum Wage
This graph of the minimum wage in the United States shows the fluctuation in government guarantees for minimum standards of labor.
Job Security
Most state and federal laws start from the presumption that workers who are not covered by a collective bargaining agreement or an individual employment agreement are “at will” employees who can be fired without notice and for no stated reason. However, state and federal laws prohibiting discrimination, or protecting the right to organize or engage in whistleblowing activities, modify that rule by providing that discharge or other forms of discrimination are illegal if undertaken on grounds specifically prohibited by law. In addition, a number of states have modified the general rule that employment is at will by holding that employees may, under that state’s common law, have implied contract rights to fair treatment by their employers. US private-sector employees thus do not have the indefinite contracts traditionally common in many European countries, Canada, and New Zealand.
Public employees in both federal and state government are also typically covered by civil service systems that protect them from unjust discharge. Public employees with sufficient protections against unjustified discharge may also acquire a property interest in their jobs, which entitles them in turn to additional protections under the due process clause of the Fourteenth Amendment to the United States Constitution.
The Worker Adjustment and Retraining Notification Act, better known by its acronym, the WARN Act, requires private sector employers to give sixty days’ notice of large-scale layoffs and plant closures; it allows a number of exceptions for unforeseen emergencies and other cases.
17.2.3: Health Care Policy
United States health care, provided by many public and private entities, is undergoing reform to cut spending and increase coverage.
Learning Objective
Compare and contrast the provision of healthcare by private and public providers
Key Points
- The government primarily provides health insurance for public sector employees. 60-65% of healthcare provision and spending comes from programs such as Medicare, Medicaid, TRICARE, the Children’s Health Insurance Program, and the Veterans Health Administration.
- In May of 2011, the state of Vermont became the first state to pass legislation establishing a single-payer health care system.
- The Patient Protection and Affordable Care Act (PPACA), commonly called Obamacare (or the federal health care law), is a United States federal statute signed into law by President Barack Obama on March 23, 2010.
- According to the World Health Organization (WHO), total health care spending in the U.S. was 15.2% of its GDP in 2008, the highest in the world.
- Most Americans under age 65 (59.3%) receive their health insurance coverage through an employer (which includes both private as well as civilian public-sector employers) under group coverage, although this percentage is declining.
- Public spending accounts for 45% to 56.1% of U.S. health care spending.
Key Terms
- Medicare
-
Guarantees health care for people age 65 and older and for younger people with disabilities and other qualifying conditions.
- Medicaid
-
Guarantees healthcare for low-income families.
- TRICARE
-
A public healthcare program for the U.S. military.
Background
Health care in the United States is provided by many distinct organizations. Health care facilities are largely owned and operated by private sector businesses. The government primarily provides health insurance for public sector employees; 60-65% of healthcare provision and spending comes from programs such as Medicare, Medicaid, TRICARE, the Children’s Health Insurance Program, and the Veterans Health Administration. Most of the population under 65 is insured through their own or a family member’s employer, some buy health insurance on their own, and the remainder are uninsured.
In May of 2011, the state of Vermont became the first state to pass legislation creating a single-payer health care system. The legislation, known as Act 48, establishes health care in the state as a “human right” and lays the responsibility on the state to provide a health care system which best meets the needs of the citizens of Vermont. The state is currently studying how best to implement this system.
The Patient Protection and Affordable Care Act (PPACA), commonly called Obamacare (or the federal health care law), is a United States federal statute signed into law by President Barack Obama on March 23, 2010. Together with the Health Care and Education Reconciliation Act, Obamacare represents the most significant regulatory overhaul of the U.S. healthcare system since the passage of Medicare and Medicaid in 1965.
PPACA is aimed primarily at decreasing the number of uninsured Americans and reducing the overall costs of health care. It provides a number of mechanisms—including mandates, subsidies, and tax credits—to employers and individuals in order to increase the coverage rate.
Spending
According to the World Health Organization (WHO), total health care spending in the U.S. was 15.2% of its GDP in 2008, the highest in the world. The Health and Human Services Department expects that the health share of GDP will continue its historical upward trend, reaching 19.5% of GDP by 2017. Of each dollar spent on health care in the United States, 31% goes to hospital care, 21% goes to physician/clinical services, 10% to pharmaceuticals, 4% to dental, 6% to nursing homes and 3% to home health care, 3% for other retail products, 3% for government public health activities, 7% to administrative costs, 7% to investment, and 6% to other professional services (physical therapists, optometrists, etc.).
International Comparison for Healthcare spending as % GDP
This graph shows the fraction of gross domestic product (GDP) devoted to health care in a number of developed countries in 2006. According to the Organization for Economic Cooperation and Development (OECD), the United States spent 15.3 percent of its GDP on health care in 2006. The next highest country was Switzerland, with 11.3 percent. In most other high-income countries, the share was less than 10 percent.
Increased spending on disease prevention is often suggested as a way of reducing health care spending. Whether prevention saves or costs money depends on the intervention: childhood vaccinations and contraceptives, for example, save much more than they cost. However, research suggests that in many cases prevention does not produce significant long-term cost savings. Some interventions may be cost-effective because of the health benefits they provide, while others are not cost-effective. Preventive care is typically provided to many people who would never become ill, and for those who would have become ill, the savings are partially offset by the health care costs incurred during additional years of life.
Private Healthcare
Most Americans under age 65 (59.3%) receive their health insurance coverage through an employer (including both private and civilian public-sector employers) under group coverage, although this percentage is declining. Costs for employer-paid health insurance are rising rapidly: since 2001, premiums for family coverage have increased 78%, while wages have risen 19% and inflation has risen 17%, according to a 2007 study by the Kaiser Family Foundation. Workers with employer-sponsored insurance also contribute; in 2007, the average percentage of the premium paid by covered workers was 16% for single coverage and 28% for family coverage. In addition to their premium contributions, most covered workers face additional payments when they use health care services, in the form of deductibles and copayments.
Public Healthcare
Government programs directly cover 27.8% of the population (83 million), including the elderly, disabled, children, veterans, and some of the poor, and federal law mandates public access to emergency services regardless of ability to pay. Public spending accounts for 45% to 56.1% of U.S. health care spending.
There are also various state and local programs for the poor. In 2007, Medicaid provided health care coverage for 39.6 million low-income Americans (although Medicaid covers approximately 40% of America’s poor). Also in 2007, Medicare provided health care coverage for 41.4 million elderly and disabled Americans. Enrollment in Medicare is expected to reach 77 million by 2031, when the baby boom generation is fully enrolled.
It has been reported that the number of physicians accepting Medicaid has decreased in recent years due to relatively high administrative costs and low reimbursements. In 1997, the federal government also created the State Children’s Health Insurance Program (SCHIP), a joint federal-state program to insure children in families that earn too much to qualify for Medicaid but cannot afford health insurance.
17.2.4: Health Care Reform
The issue of health insurance reform in the United States has been the subject of political debate since the early part of the 20th century.
Learning Objective
Explain the elements and provisions of the Patient Protection and Affordable Act and discuss the history of health-care reform in the 20th century
Key Points
- Health care reform was a major concern of the Bill Clinton administration, headed by First Lady Hillary Clinton; however, the 1993 Clinton health care plan was not enacted into law.
- The Health Security Express was a bus tour that started in late July of 1994. It involved supporters of President Clinton’s national health care reform.
- Barack Obama called for universal health care and the creation of a National Health Insurance Exchange that would include both private insurance plans and a Medicare-like government run option.
- In 2010, the Patient Protection and Affordable Care Act was enacted by President Obama, providing for the introduction, over four years, of a comprehensive system of mandated health insurance with reforms designed to eliminate some of the least-desirable practices of the insurance companies.
Key Term
- pilot
-
Something serving as a test or trial.
Background
The issue of health insurance reform in the United States has been the subject of political debate since the early part of the 20th century. Recent reforms remain an active political issue. Alternative reform proposals were offered by both of the two major candidates in the 2008 presidential election and President Obama’s plan for universal health care was challenged in the 2012 presidential election.
Clinton Initiative
Health care reform was a major concern of the Bill Clinton administration, headed by First Lady Hillary Clinton; however, the 1993 Clinton health care plan was not enacted into law.
The Health Security Express was a bus tour that started in late July of 1994. It involved supporters of President Clinton’s national health care reform. Several buses leaving from different points in the United States stopped in many cities along the way to the final destination of the White House. During these stops, each of the bus riders would talk about personal experiences, health care disasters, and why they felt it was important for all Americans to have health insurance. When the bus tour ended on August 3rd, the riders were greeted by President Clinton and the First Lady on the White House South lawn for a rally that was broadcast all over the world.
Changes under George W. Bush
In 2003, Congress passed the Medicare Prescription Drug, Improvement, and Modernization Act, which President George W. Bush signed into law on December 8, 2003. Part of this legislation included filling gaps in prescription-drug coverage left by the Medicare Secondary Payer Act enacted in 1980. The 2003 bill strengthened the Workers’ Compensation Medicare Set-Aside Program (WCMSA), which is monitored and administered by the Centers for Medicare & Medicaid Services (CMS).
Debate in the 2008 Presidential Election
Barack Obama called for universal health care and the creation of a National Health Insurance Exchange that would include both private insurance plans and a Medicare-like government run option. Coverage would be guaranteed regardless of health status and premiums would not vary based on health status. Obama’s plan required that parents cover their children, but did not require that adults buy insurance.
In 2010, the Patient Protection and Affordable Care Act (PPACA) was enacted by President Obama, providing for the introduction, over four years, of a comprehensive system of mandated health insurance with reforms designed to eliminate some of the least-desirable practices of the insurance companies (such as pre-existing condition exclusions, rescinding policies when illness seemed imminent, and annual and lifetime coverage caps). The PPACA also set a minimum ratio of direct health care spending to premium income, created price competition bolstered by the creation of three standard insurance coverage levels to enable like-for-like comparisons by consumers, and created a web-based health insurance exchange where consumers can compare prices and purchase plans. The system preserves private insurance and private health care providers and provides more subsidies to enable the poor to buy insurance.
Effective as of 2013
- A national Medicare pilot program on payment bundling is established to encourage doctors, hospitals, and other care providers to better coordinate patient care. The threshold for claiming medical expenses on itemized tax returns is raised from 7.5% to 10% of adjusted gross income; the threshold remains at 7.5% for the elderly through 2016.
- The Federal Insurance Contributions Act (FICA) tax is raised to 2.35% from 1.45% for individuals earning more than $200,000 and for married couples with incomes over $250,000. The tax is also imposed on some investment income for that income group (a minimal worked sketch of these 2013 thresholds follows this list).
- A 2.9% excise tax is imposed on the sale of medical devices. Anything generally purchased at the retail level by the public is excluded from the tax.
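The 2013 provisions above are essentially threshold rules. The Python sketch below is a minimal, illustrative rendering of two of them – the itemized medical-expense floor and the higher FICA (Medicare) rate – assuming that the higher rate applies only to earnings above the individual threshold (as the additional Medicare tax is generally applied); it is a simplified sketch for a single filer, not tax guidance.

    # Minimal sketch of two 2013 provisions described above; illustrative only.
    def medicare_tax(wages, threshold=200_000):
        # Assumes the base 1.45% rate applies up to the threshold and the
        # higher 2.35% rate applies only to wages above it. The separate
        # $250,000 threshold for married couples is not modeled here.
        base_rate, high_rate = 0.0145, 0.0235
        if wages <= threshold:
            return wages * base_rate
        return threshold * base_rate + (wages - threshold) * high_rate

    def deductible_medical_expenses(expenses, agi, elderly=False):
        # Itemized medical expenses are deductible only above 10% of
        # adjusted gross income (7.5% for the elderly through 2016).
        floor = 0.075 if elderly else 0.10
        return max(0.0, expenses - floor * agi)

    print(medicare_tax(250_000))                       # 200,000*1.45% + 50,000*2.35% = 4,075.0
    print(deductible_medical_expenses(8_000, 60_000))  # 8,000 - 6,000 = 2,000.0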
Effective as of 2014
- State health insurance exchanges open for small businesses and individuals.
- Individuals with income up to 133% of the federal poverty level qualify for Medicaid coverage.
- Healthcare tax credits become available to help people with incomes up to 400 percent of poverty purchase coverage on the exchange.
- Premium cap for maximum “out-of-pocket” pay will be established for people with incomes up to 400 percent of the Federal Poverty Line.
- Most people will be required to obtain health insurance or pay a tax.
- Health plans can no longer exclude people from coverage due to preexisting conditions.
- Employers with 50 or more workers who do not offer coverage face a fine of $2,000 for each employee if any worker receives subsidized insurance on the exchange; the first 30 employees are not counted toward the fine (a minimal sketch of this calculation follows this list).
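The employer penalty in the list above reduces to a simple rule: no fine below 50 workers or when coverage is offered, otherwise $2,000 for each employee beyond the first 30 if any worker receives subsidized exchange coverage. The Python sketch below is an illustrative simplification of that rule, not a complete statement of the statute.

    # Illustrative sketch of the employer fine described above; the actual
    # rules contain further conditions not modeled here.
    def employer_fine(num_employees, offers_coverage, any_worker_subsidized):
        if num_employees < 50 or offers_coverage or not any_worker_subsidized:
            return 0
        return max(0, num_employees - 30) * 2_000

    print(employer_fine(75, offers_coverage=False, any_worker_subsidized=True))  # (75 - 30) * 2,000 = 90,000
    print(employer_fine(40, offers_coverage=False, any_worker_subsidized=True))  # fewer than 50 workers: 0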
Effects on Insurance Premiums
The Associated Press reported that, as a result of PPACA’s provisions concerning the Medicare Part D coverage gap (between the initial coverage limit and the catastrophic coverage threshold in the Medicare Part D prescription drug program), individuals falling in this “donut hole” would save about 40 percent. Almost all of the savings came because, with regard to brand-name drugs, PPACA secured a discount from pharmaceutical companies. The change benefited more than two million people, most of them in the middle class.
17.2.5: Housing Policy
Public housing is administered by federal, state, and local agencies to provide subsidized assistance to those with low incomes.
Learning Objective
Explain the implications of national and local housing policy
Key Points
- Originally, public housing in the U.S. consisted of one or more blocks of low-rise and/or high-rise apartment buildings operated by a government agency.
- Subsidized apartment buildings in the U.S. are usually called housing projects, and the slang term for a group of these buildings is “the projects”.
- In 1937, the Wagner-Steagall Housing Act (the Housing Act of 1937) established the United States Housing Authority.
- In the 1960s, across the nation, housing authorities became key partners in urban renewal efforts, constructing new homes for those displaced by highway, hospital, and other public efforts.
- The Housing and Community Development Act of 1974 created the Section 8 Housing Program to encourage the private sector to construct affordable homes.
- The city housing authorities or local governments generally run scattered-site housing programs. They are intended to increase the availability of affordable housing and improve the quality of low-income housing, while avoiding problems associated with concentrated subsidized housing.
Key Terms
- middle-class
-
Occupying a position between the upper class and the working class.
- retail price
-
The price at which an item is sold at retail, influenced by market demand.
- subsidized housing
-
A form of housing that is subsidized by the government for people with low incomes.
Background
Public housing in the United States has been administered by federal, state, and local agencies to provide subsidized assistance for low-income people and those living in poverty. Though now increasingly provided in a variety of settings and formats, public housing in the U.S. originally consisted of one or more blocks of low-rise and/or high-rise apartment buildings operated by a government agency. Subsidized apartment buildings in the U.S. are usually called housing projects, and the slang term for a group of these buildings is “the projects.”
Public Housing
In 1937, the Wagner-Steagall Housing Act (the Housing Act of 1937) established the United States Housing Authority (USHA). Building on the Housing Division’s organizational and architectural precedent, the USHA built housing in the build-up to World War II, supported war-production efforts, and battled the housing shortage that occurred after the end of the war. In the 1960s, across the nation, housing authorities became key partners in urban renewal efforts, constructing new homes for those displaced by highway, hospital, and other public efforts.
One of the most distinctive U.S. public housing initiatives was the development of subsidized middle-class housing during the late New Deal (1940–42) under the auspices of the Mutual Ownership Defense Housing Division of the Federal Works Agency, directed by Colonel Lawrence Westbrook. The residents purchased these eight projects after the Second World War, and as of 2009, seven of the projects continue to operate as mutual housing corporations owned by their residents. These projects are among the very few definitive success stories in the history of the U.S. public housing effort.
Public housing in its earliest decades was usually much more working- and middle-class, and much whiter, than it was by the 1970s. Many Americans associate large, multi-story towers with public housing, but early projects, like the Ida B. Wells projects in Chicago, were actually low-rise developments. Le Corbusier-style superblocks caught on before World War II, as seen in the (union-built) Penn South houses in New York.
Hylan Houses Bushwick, Brooklyn NY
The 20-story John F. Hylan Houses in the Bushwick section of Brooklyn, New York City.
The Housing and Community Development Act of 1974 created the Section 8 Housing Program to encourage the private sector to construct affordable homes. This kind of housing assistance helps poor tenants by giving a monthly subsidy to their landlords. This assistance can be ‘project-based,’ which applies to specific properties, or ‘tenant-based,’ which provides tenants with a voucher they can use anywhere vouchers are accepted. Virtually no new project-based Section 8 housing has been produced since 1983. Effective October 1, 1999, existing tenant-based voucher programs were merged into the Housing Choice Voucher Program, which is today the primary means of providing subsidies to low-income renters.
Public Policy and Implications
The city housing authorities or local governments generally run scattered-site housing programs. They are intended to increase the availability of affordable housing and improve the quality of low-income housing, while avoiding problems associated with concentrated subsidized housing. Many scattered-site units are built to be similar in appearance to other homes in the neighborhood to somewhat mask the financial stature of tenants and reduce the stigma associated with public housing.
Where to construct these housing units and how to gain the support of the community are issues of concern when it comes to public housing. Frequent concerns of community members include potential decreases in the retail price (resale value) of their homes and a decline in neighborhood safety due to elevated levels of crime. Thus, one of the major concerns with the relocation of scattered-site tenants into white, middle-class neighborhoods is that residents will move elsewhere – a phenomenon known as white flight. To counter this phenomenon, some programs place tenants in private apartments that do not appear outwardly different. Despite these efforts, many members of middle-class, predominantly white neighborhoods have fought hard to keep public housing out of their communities.
There are also concerns associated with the financial burden that these programs have on the state. Scattered-site housing provides no better living conditions for its tenants than traditional concentrated housing if the units are not properly maintained. There are questions as to whether or not scattered-site public facilities are more expensive to manage because dispersal throughout the city makes maintenance more difficult.
17.3: Social Policy Demographics
17.3.1: The Elderly
There are several social policy challenges relating to the elderly, who are generally over the age of 65 and have retired from their jobs.
Learning Objective
Discuss government policies that affect the elderly
Key Points
- Within the United States, senior citizens are at the center of several social policy issues, most prominently Social Security and Medicare.
- Social Security is a social insurance program consisting of retirement, disability, and survivors’ benefits.
- In 1965, Congress created Medicare under Title XVIII of the Social Security Act to provide health insurance to people age 65 and older, regardless of income or medical history.
Key Terms
- social insurance
-
a program where risks are transferred to and pooled by an organization, often governmental, that is legally required to provide certain benefits
- New Deal
-
The New Deal was a series of economic programs enacted in the United States between 1933 and 1936. They involved presidential executive orders or laws passed by Congress during the first term of President Franklin D. Roosevelt. The programs were in response to the Great Depression, and focused on what historians call the “3 Rs”: Relief, Recovery, and Reform.
The elderly, often referred to as senior citizens, are people who are generally over the age of 65 and have retired from their jobs. Within the United States, senior citizens are at the center of several social policy issues, most prominently Social Security and Medicare.
Social Security is a social insurance program consisting of retirement, disability, and survivors’ benefits. To qualify for these benefits, most American workers pay Social Security taxes on their earnings, and future benefits are based on the employees’ contributions. The Social Security Administration was set up in 1935 as part of President Franklin D. Roosevelt’s “New Deal.” Social Security is currently the largest social welfare program in the United States, constituting 37% of government expenditure and 7% of GDP. In 2010, more than 54 million Americans received approximately $712 billion in Social Security benefits.
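As a rough back-of-the-envelope check, dividing total benefits paid by the number of recipients gives the average annual benefit. The short Python sketch below does this arithmetic with the 2010 figures quoted above; it is illustrative only, since the quoted figures are themselves approximations.

    # Illustrative arithmetic using the 2010 figures quoted above.
    total_benefits = 712e9  # dollars in Social Security benefits paid, 2010
    recipients = 54e6       # Americans receiving benefits, 2010

    avg_annual = total_benefits / recipients
    print(f"average annual benefit: ${avg_annual:,.0f}")        # roughly $13,200 per recipient
    print(f"average monthly benefit: ${avg_annual / 12:,.0f}")  # roughly $1,100 per month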
Social Security
Social Security card, which grants certain benefits to citizens.
In 1965, Congress created Medicare under Title XVIII of the Social Security Act to provide health insurance to people age 65 and older, regardless of income or medical history. Medicare spreads the financial risk associated with illness across society in order to protect everyone. Thus, it has a somewhat different social role from for-profit private insurers, which manage their risk portfolio by adjusting their pricing according to perceived risk.
The Medicare population differs in significant ways from the general population. Compared to the rest of Americans, Medicare enrollees are disproportionately white and female (due to women’s greater longevity). They also have a comparatively precarious economic situation, which is usually exacerbated by the high cost of health care for the elderly.
17.3.2: The Middle Class
The middle class consists of people in the middle of a societal hierarchy, which varies between cultures.
Learning Objective
Identify the central features of the middle-class in the United States
Key Points
- The following factors are often ascribed to someone in the middle class: having a college education; holding professional qualifications, including academics, lawyers, engineers and doctors; a belief in bourgeois values; and identification culturally with mainstream popular culture.
- Within the United States, the broader middle class is often described as divided into the upper-middle class (also called the “professional class”) and the lower-middle class.
- Recently, the typical lifestyle of the American middle class has been criticized for its “conspicuous consumption” and materialism, as Americans have the largest homes and most appliances and automobiles in the world.
Key Terms
- bourgeois
-
Of or relating to the middle class, especially its attitudes and conventions.
- inflation
-
An increase in the general level of prices or in the cost of living.
- materialism
-
Constant concern over material possessions and wealth and a great or excessive regard for worldly concerns.
The middle class is a category of people in the middle of a societal hierarchy, though common measures of what constitutes middle class vary significantly between cultures.
The size of the middle class depends on how it is defined, whether by education, wealth, environment of upbringing, social network, manners or values, etc. However, the following factors are often ascribed in modern usage to someone in the middle class: having a college education; holding professional qualifications, including academics, lawyers, engineers, and doctors; a belief in bourgeois values, such as high rates of home ownership and secure jobs; a particular lifestyle; and cultural identification with mainstream popular culture (particularly in the United States).
Within the United States, the broader middle class is often described as divided into the upper-middle class (also called the “professional class”) and the lower-middle class. The upper-middle class consists mostly of white-collar professionals, most of whom are highly educated, salaried professionals whose work is largely self-directed and typically involves conceptualizing, creating, consulting, and supervising. Many have graduate degrees, with educational attainment serving as the main distinguishing feature of this class. Household incomes commonly exceed $100,000. The lower-middle class consists mainly of people in technical and lower-level management positions who work for those in the upper middle class. Though they enjoy a reasonably comfortable standard of living, they are often threatened by taxes and inflation.
Recently, the typical lifestyle of the American middle class has been criticized for its “conspicuous consumption” and materialism, as Americans have the largest homes and most appliances and automobiles in the world. Another challenge to the stability of the middle class within the United States is increasing income inequality, as middle-class Americans have seen their incomes increase at a much slower rate than the wealthiest 1% in the country.
Suburban Middle Class Home
An upscale home in suburban California, an example of the “conspicuous consumption” of the American middle class.
17.3.3: The Working Poor
The working poor are working people whose incomes fall below a given poverty line.
Learning Objective
Define the working poor in the United States
Key Points
- Of the 8.8 million US families below the poverty line (11.1% of all families), 5.19 million, or 58.9%, had at least one person who was classified as working.
- Within the United States, since the start of the War on Poverty in the 1960s, scholars and policymakers on both ends of the political spectrum have paid an increasing amount of attention to the working poor.
- Some of the obstacles that working poor people may face include finding affordable housing, arranging transportation to and from work, buying basic necessities, arranging childcare, having unpredictable work schedules, juggling two or more jobs, and coping with low-status work.
Key Term
- Poverty line
-
The threshold of poverty below which one’s income does not cover necessities.
The working poor are working people whose incomes fall below a given poverty line. While poverty is often associated with joblessness, the wages of the working poor are usually insufficient to provide basic necessities, and the working poor face numerous obstacles that make it difficult for many of them to find and keep a job, save up money, and maintain a sense of self-worth. In 2009, according to the U.S. Census Bureau’s official definition of poverty, 8.8 million US families were below the poverty line (11.1% of all families). Of these families, 5.19 million, or 58.9%, had at least one person who was classified as working.
The Working Poor
Percentage of the working and nonworking poor in different countries
Within the United States, since the start of the War on Poverty in the 1960s, scholars and policymakers on both ends of the political spectrum have paid an increasing amount of attention to the working poor. One of the key ongoing debates concerns the distinction between the working and the nonworking (unemployed) poor. Conservative scholars and policymakers often attribute the prevalence of inequality and working poverty to overregulation and overtaxation, which they claim constricts job growth. In contrast, liberal scholars argue that the government should provide more housing assistance, childcare, and other kinds of aid to poor families, in order to help them overcome the obstacles they face.
Some of these obstacles may include finding affordable housing, arranging transportation to and from work, buying basic necessities, arranging childcare, having unpredictable work schedules, juggling two or more jobs, and coping with low-status work. Many scholars and policymakers suggest welfare state generosity, increased wages and benefits, more vocational education and training, increased child support, and increased rates of marriage as possible remedies for these obstacles.
17.3.4: The Nonworking Poor
The nonworking poor are unemployed people whose incomes fall below a given poverty line.
Learning Objective
Discuss the nonworking poor and the obstacles they face in the United States
Key Points
- Many conservative scholars tend to see nonworking poverty as a more urgent problem than working poverty because they believe that non-work is a moral hazard that leads to welfare dependency and laziness, whereas work, even poorly paid work, is morally beneficial.
- In order to help the nonworking poor gain entry into the labor market, liberal scholars advocate that the government should provide more housing assistance, childcare, and other kinds of aid to poor families.
- Many policies that have been proposed to alleviate the obstacles that working poor people face may also be applied to the nonworking poor, including welfare state generosity, increased wages, increased vocational education and training, child support assurance, and increased rates of marriage.
- Since the start of the War on Poverty in the 1960s, scholars and policymakers on both ends of the political spectrum have paid an increasing amount of attention to working poverty.
Key Terms
- welfare state
-
a social system in which the state takes overall responsibility for the welfare of its citizens, providing health care, education, unemployment compensation and social security
- Poverty line
-
The threshold of poverty below which one’s income does not cover necessities.
Introduction
The working poor are working people whose incomes fall below a given poverty line. Conversely, the nonworking poor are unemployed people whose incomes fall below a given poverty line. The main difference between the working and the nonworking poor, liberal policymakers argue, is that the nonworking poor have a more difficult time overcoming basic barriers to entry into the labor market, such as arranging for affordable childcare, finding housing near potential jobs, or arranging for transportation to and from work. In order to help the nonworking poor gain entry into the labor market, liberal scholars advocate that the government should provide more housing assistance, childcare, and other kinds of aid to poor families.
The Working Poor
Percentage of the working and nonworking poor in different countries
Distinctions
Since the start of the War on Poverty in the 1960s, scholars and policymakers on both ends of the political spectrum have paid an increasing amount of attention to tackling poverty. One of the key ongoing debates concerns the distinction between the working and the nonworking poor. Many conservative scholars tend to see nonworking poverty as a more urgent problem than working poverty because they believe that non-work is a moral hazard that leads to welfare dependency and laziness, whereas work, even poorly paid work, is morally beneficial. On the other hand, liberal scholars and policymakers often argue that most working and nonworking poor people are quite similar.
Many of the policies that have been proposed to alleviate the obstacles that working poor people face may also be applied to the nonworking poor. These policies include: welfare state generosity, including unemployment and child benefits; increased wages and benefits, which may have a positive effect on unskilled workers who are likely to be among the nonworking poor; increased vocational education and training for the same demographic; child support assurance, especially for families headed by a single parent; and increased rates of marriage, although without good employment opportunities marriage alone may not lower the poverty rate among low-income people.
Obstacles to Uplift
The working poor face many of the same everyday struggles as the nonworking poor, but they also face some unique obstacles. Some studies, many of them qualitative, provide detailed insights into the obstacles that hinder workers’ ability to find jobs, keep jobs, and make ends meet. Some of the most common struggles faced by the working poor are finding affordable housing, arranging transportation to and from work, buying basic necessities, arranging childcare, having unpredictable work schedules, juggling two or more jobs, and coping with low-status work.
Housing
Working poor people who do not have friends or relatives with whom they can live often find themselves unable to rent an apartment of their own. Although the working poor are employed at least some of the time, they often find it difficult to save enough money for a deposit on a rental property. As a result, many working poor people end up in living situations that are actually more costly than a month-to-month rental.
Transportation
Because many working poor people do not own a car or cannot afford to drive one, where they live can significantly limit where they are able to work, and vice versa. Because public transportation in many US cities is sparse, expensive, or non-existent, this is a particularly salient obstacle.
Basic Necessities
Like the unemployed poor, the working poor struggle to pay for basic necessities like food, clothing, housing, and transportation. In some cases, however, the working poor’s basic expenses can be higher than the unemployed poor’s. For instance, the working poor’s clothing expenses may be higher than the unemployed poor’s because they must purchase specific clothes or uniforms for their jobs.
Childcare
Working poor parents with young children, especially single parents, face significantly more childcare-related obstacles than other people. Oftentimes, childcare costs can exceed a low-wage earner’s income, making work, especially in a job with no potential for advancement, economically illogical. However, some single parents are able to rely on their social networks for free or below-market-cost childcare. There are also some free childcare options provided by the government, such as the Head Start Program. However, these free options are only available during certain hours, which may limit parents’ ability to take jobs that require late-night shifts.
17.3.5: Minorities, Women, and Children
Minorities, women, and children are often the target of specific social policies.
Learning Objective
Discuss government social policy toward minorities, women and children in the United States
Key Points
- A minority group is a sociological category within a demographic that is differentiated from those who hold the majority of positions of social power in a society.
- While in most societies, numbers of men and women are roughly equal, the status of women as a subordinate group has led some (especially within feminist movements) to equate them with minorities.
- One major, particularly controversial policy targeting minority groups is affirmative action.
- People with disabilities continue to be an especially vulnerable minority group in modern society.
Key Term
- affirmative action
-
A policy or program providing advantages for people of a minority group with the aim of creating a more racially equal society through preferential access to education, employment, health care, social welfare, etc.
Minorities, Women, and Children
Minorities, women, and children are often the target of specific social policies. A minority group is a sociological category within a demographic that is differentiated and defined by the social majority, that is, by those who hold the majority of positions of social power in a society.
The differentiation can be based on one or more observable human characteristics, including ethnicity, race, gender, wealth, or sexual orientation. The term has been applied to various situations and civilizations throughout history, despite the common misconception that it refers to a numerical, statistical minority. In the social sciences, the term minority refers to categories of persons who hold few positions of social power.
Minorities
The Civil Rights Movement attempted to increase rights for minorities within the U.S.
While in most societies, numbers of men and women are roughly equal, the status of women as a subordinate group has led some (especially within feminist movements) to equate them with minorities. Children can also be understood as a minority group in these terms, as they are economically non-active and not necessarily given all the rights of adult citizens.
One major, particularly controversial policy targeting minority groups is affirmative action. This can be, for example, a government program to provide immigrant or minority groups who primarily speak a marginalized language with extra teaching in the majority language, so they are better able to compete for places at universities or for jobs. Such programs may be considered necessary because the minority group in question is socially disadvantaged. Another form of affirmative action is quotas, in which a percentage of places at universities, or in public-sector employment, is set aside for minority groups (including women) because a court has found a history of excluding those groups from certain sectors of society.
Chapter 16: Economic Policy
16.1: Goals of Economic Policy
16.1.1: The Goals of Economic Policy
There are four major goals of economic policy: stable markets, economic prosperity, business development, and the protection of employment.
Learning Objective
Compare and contrast the policy tools used by governments to achieve economic growth
Key Points
- Sometimes other objectives, like military spending or nationalization, are important.
- To achieve these goals, governments use policy tools that are under their control.
- Government and central banks are limited in the number of goals they can achieve in the short term.
Key Terms
- nationalization
-
Nationalization (British English spelling nationalisation) is the process of taking a private industry or private assets into public ownership by a national government or state.
- business development
-
A subset of the fields of Business and commerce, business development comprises a number of tasks and processes generally aiming at developing and implementing growth opportunities.
- economic prosperity
-
Economic prosperity is the state of flourishing or thriving with regard to wealth.
Economic policy refers to the actions that governments take in the economic field. It covers the systems for setting interest rates and the government budget, as well as the labor market, national ownership, and many other areas of government intervention in the economy.
Policy is generally directed to achieve four major goals: stabilizing markets, promoting economic prosperity, ensuring business development, and promoting employment. Sometimes other objectives, like military spending or nationalization, are important.
Economic Growth
One of the major goals of economic policy is to promote economic growth. How growth is measured, though, is another question. The image above, showing the rate of change of gross domestic product for the world and the OECD since 1961, is one representation of economic growth.
To achieve these goals, governments use policy tools that are under their control. These generally include the interest rate and money supply, taxes and government spending, tariffs, exchange rates, labor market regulations, and many other aspects of government.
Selecting Tools and Goals
Government and central banks are limited in the number of goals they can achieve in the short term. For instance, there may be pressure on the government to reduce inflation, reduce unemployment, and reduce interest rates while maintaining currency stability. If all of these are selected as goals for the short term, then policy is likely to be incoherent, because a normal consequence of reducing inflation and maintaining currency stability is increasing unemployment and increasing interest rates.
For much of the 20th century, governments adopted discretionary policies such as demand management that were designed to correct the business cycle. These typically used fiscal and monetary policy to adjust inflation, output and unemployment.
However, following the stagflation of the 1970s, policymakers began to be attracted to policy rules.
Discretionary policy has supporters because it allows policymakers to respond quickly to events. However, discretionary policy can be subject to dynamic inconsistency: a government may say it intends to raise interest rates indefinitely to bring inflation under control, but then relax its stance later. This makes policy non-credible and ultimately ineffective.
A rule-based policy can be more credible, because it is more transparent and easier to anticipate. Examples of rule-based policies are fixed exchange rates, interest rate rules, the Stability and Growth Pact, and the Golden Rule. Some policy rules can be imposed by external bodies, for instance, the Exchange Rate Mechanism for currency.
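To make the idea of an interest rate rule concrete, the sketch below implements a Taylor-style rule, a well-known formula that sets the policy rate from inflation and the output gap. This is a minimal illustration rather than anything prescribed in the text above: the coefficients, the neutral rate, and the example inputs are all assumptions chosen for readability.

```python
def taylor_rule(inflation, output_gap, neutral_real_rate=2.0, inflation_target=2.0):
    """Illustrative Taylor-style interest rate rule (all values in percent).

    policy_rate = neutral_real_rate + inflation
                  + 0.5 * (inflation - inflation_target)
                  + 0.5 * output_gap
    """
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Inflation above target and a positive output gap call for a higher policy rate.
print(taylor_rule(inflation=4.0, output_gap=1.0))   # 7.5
# Inflation below target and a negative output gap call for a lower one.
print(taylor_rule(inflation=1.0, output_gap=-2.0))  # 1.5
```

Because the rule is a fixed, published formula, market participants can anticipate how the central bank will respond to new data, which is precisely the transparency and credibility advantage described above.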
A compromise between strict discretionary and strict rule-based policy is to grant discretionary power to an independent body. For instance, the Federal Reserve Bank, European Central Bank, Bank of England and Reserve Bank of Australia all set interest rates without government interference, but do not adopt rules.
Another type of non-discretionary policy is a set of policies which are imposed by an international body. This can occur (for example) as a result of intervention by the International Monetary Fund.
16.1.2: Four Schools of Economic Thought: Classical, Marxian, Keynesian, and the Chicago School
Mainstream modern economics can be broken down into four schools of economic thought: classical, Marxian, Keynesian, and the Chicago School.
Learning Objective
Apply the four main schools of modern economic thought.
Key Points
- Classical economics focuses on the tendency of markets to move towards equilibrium and on objective theories of value.
- As the original form of mainstream economics of the 18th and 19th centuries, classical economics served as the basis for many other schools of economic thought, including neoclassical economics.
- Marxism focuses on the labor theory of value and what Marx considered to be the exploitation of labor by capital.
- Keynesian economics derives from John Maynard Keynes, in particular his book, The General Theory of Employment, Interest and Money (1936), which ushered in contemporary macroeconomics as a distinct field.
- The Chicago School of economics is best known for its free market advocacy and monetarist ideas.
Key Terms
- mainstream economics
-
Mainstream economics is a term used to refer to widely accepted economics as it is taught at prominent universities, in contrast to heterodox economics.
- School of thought
-
A school of thought is a collection or group of people who share common characteristics of opinion or outlook regarding a philosophy, discipline, belief, social movement, cultural movement, or art movement.
Throughout the history of economic theory, several methods for approaching the topic have been noteworthy enough, and different enough from one another, to be distinguished as particular ‘schools of economic thought.’ While economists do not always fit into particular schools, especially in modern times, classifying economists into a particular school of thought is common.
Mainstream modern economics can be broken down into four schools of economic thought:
Classical economics, also called classical political economy, was the original form of mainstream economics in the 18th and 19th centuries. Classical economics focuses on both the tendency of markets to move towards equilibrium and on objective theories of value. Neo-classical economics derives from this school, but differs because it is utilitarian in its value theory and because it uses marginal theory as the basis of its models and equations. Anders Chydenius (1729–1803) was the leading classical liberal of Nordic history. A Finnish priest and member of parliament, he published a book called The National Gain in 1765, in which he proposed ideas about the freedom of trade and industry, explored the relationship between the economy and society, and laid out the principles of liberalism. All of this happened eleven years before Adam Smith published a similar and more comprehensive book, The Wealth of Nations. According to Chydenius, democracy, equality and a respect for human rights formed the only path towards progress and happiness for the whole of society.
Marxian economics descends directly from the work of Karl Marx and Friedrich Engels. This school focuses on the labor theory of value and what Marx considers to be the exploitation of labor by capital. Thus, in this school of economic thought, the labor theory of value is a method for measuring the degree to which labor is exploited in a capitalist society, rather than simply a method for calculating price.
Marxism
The Marxist school of economic thought comes from the work of German economist Karl Marx.
Keynesian economics derives from John Maynard Keynes, and in particular his book, The General Theory of Employment, Interest and Money (1936), which ushered in contemporary macroeconomics as a distinct field. The book analyzed the determinants of national income in the short run, during a period when prices are relatively inflexible. Keynes attempted to explain, in broad theoretical detail, why high labor-market unemployment might not be self-correcting due to low “effective demand,” and why neither price flexibility nor monetary policy could be counted on to remedy the situation. Because of its impact on economic analysis, this book is often called “revolutionary.”
Keynesian Economics
John Maynard Keynes (right), was a key theorist in economics.
A final school of economic thought, the Chicago School of economics, is best known for its free market advocacy and monetarist ideas. According to Milton Friedman and monetarists, market economies are inherently stable so long as the money supply does not greatly expand or contract. Ben Bernanke, current Chairman of the Federal Reserve, is among the significant public economists today who generally accept Friedman’s analysis of the causes of the Great Depression.
16.2: The History of Economic Policy
16.2.1: The Nineteenth Century
Associated with industrialism and capitalism, the 19th century looms large in the history of economic policy and economic thought.
Learning Objective
Summarize the main currents of economic thought in the 19th century
Key Points
- The beginning of the 19th century was dominated by classical economists, who concerned themselves with the transformations brought by the Industrial Revolution, including rural depopulation, precariousness, poverty, and the emergence of a working class.
- Karl Marx’s combination of political theory, represented in the Communist Manifesto and the dialectic theory of history inspired by Friedrich Hegel, provided a revolutionary critique of capitalism as he saw it in the 19th century.
- In the 1860s, a revolution took place in economics.
- This current of thought was not united. There were three main schools working independently.
Key Terms
- industrialization
-
A process of social and economic change whereby a human society is transformed from a pre-industrial to an industrial state
- capitalism
-
A socio-economic system based on private property rights, including the private ownership of resources or capital, with economic decisions made largely through the operation of a market unregulated by the state.
As the century most associated with industrialization and capitalism in the West, the 19th century looms large in the history of economic policy and economic thought.
The beginning of the 19th century was dominated by “classical economists,” a group not actually referred to by this name until Karl Marx coined it. One unifying element of their theories was the labor theory of value, in contrast to value deriving from a general equilibrium of supply and demand. These economists had seen the first economic and social transformations brought by the Industrial Revolution: rural depopulation, precariousness, poverty, and the emergence of a working class. They were concerned about population growth, because the demographic transition had begun in Great Britain at that time. They also asked many fundamental questions about the source of value, the causes of economic growth, and the role of money in the economy. They supported a free-market economy, arguing that it was a natural system based upon freedom and property. However, these economists were divided and did not make up a unified school of thought.
A notable current within classical economics was under-consumption theory, as advanced by the Birmingham School and Malthus in the early 19th century. The theory argued for government action to mitigate unemployment and economic downturns, and was an intellectual predecessor of what later became Keynesian economics in the 1930s.
Just as the term “mercantilism” had been coined and popularized by its critics, like Adam Smith, so was the term “capitalism” or Kapitalismus used by its dissidents, primarily Karl Marx. Karl Marx (1818–1883) was, and in many ways still remains, the pre-eminent socialist economist. His combination of political theory, represented in the Communist Manifesto, and the dialectic theory of history inspired by Friedrich Hegel provided a revolutionary critique of capitalism as he saw it in the 19th century. The socialist movement that he joined had emerged in response to the conditions of people in the new industrial era and the classical economics which accompanied it. He wrote his magnum opus Das Kapital at the British Museum’s library.
Das Kapital
Karl Marx’s definition and popularizing of the term “capitalism” or Kapitalismus, as defined in “Das Kapital,” first published in 1867, remains one of the most influential works on the subject to this day.
In the 1860s, a revolution took place in economics. The new ideas were those of the Marginalist school. Writing simultaneously and independently, a Frenchman (Léon Walras), an Austrian (Carl Menger), and an Englishman (Stanley Jevons) developed a theory that had some antecedents: instead of reflecting the labor that produced it, the price of a good or service reflected the marginal usefulness (utility) of the last unit purchased. This meant that in equilibrium, people’s preferences determined prices, including, indirectly, the price of labor.
This current of thought was not united. There were three main schools working independently. The Lausanne school, whose two main representatives were Walras and Vilfredo Pareto, developed the theories of general equilibrium and optimality. The main written work of this school was Walras’ Elements of Pure Economics. The Cambridge school appeared with Jevons’ Theory of Political Economy in 1871. This English school developed the theory of partial equilibrium and emphasized market failures. Its main representatives were Alfred Marshall, Stanley Jevons, and Arthur Pigou. The Vienna school was made up of the Austrian economists Menger, Eugen von Böhm-Bawerk, and Friedrich von Wieser. They developed the theory of capital and tried to explain the occurrence of economic crises. The school emerged in 1871 with Menger’s Principles of Economics.
16.2.2: The Progressive Era
The Progressive Era was one of general prosperity after the Panic of 1893, a severe depression that ended in 1897.
Learning Objective
Discuss the economic policies of the Progressive Era in the United States.
Key Points
- The weakened economy and persistent federal deficits led to changes in fiscal policy including the imposition of federal income taxes on businesses and individuals and the creation of the Federal Reserve System.
- The progressives voiced the need for government regulation of business practices to ensure competition and free enterprise.
- By the turn of the century, a middle class had developed that was leery of both the business elite and the radical political movements of farmers and laborers in the Midwest and West.
Key Terms
- tariff
-
a system of government-imposed duties levied on imported or exported goods; a list of such duties, or the duties themselves
- laissez-faire
-
an economic environment in which transactions between private parties are free from tariffs, government subsidies, and enforced monopolies, with only enough government regulation to protect property rights against theft and aggression.
- fiscal policy
-
Government policy that attempts to influence the direction of the economy through changes in government spending or taxes.
The Progressive Era was one of general prosperity after the Panic of 1893, a severe depression that ended in 1897. The Panic of 1907 was short and mainly affected financiers. However, Campbell (2005) stresses the weak points of the economy in 1907–1914, linking them to public demands for more progressive interventions. The Panic of 1907 was followed by a small decline in real wages and increased unemployment, with both trends continuing until World War I. This put stress on public finance and shaped the Wilson administration’s policies. The weakened economy and persistent federal deficits led to changes in fiscal policy, including the imposition of federal income taxes on businesses and individuals and the creation of the Federal Reserve System. Government agencies were also transformed in an effort to improve administrative efficiency.
In the Gilded Age (late 19th century), the parties were reluctant to involve the federal government too heavily in the private sector, except in the area of railroads and tariffs. In general, they accepted the concept of laissez-faire, a doctrine opposing government interference in the economy except to maintain law and order. This attitude started to change during the depression of the 1890s, when small business, farm, and labor movements began asking the government to intercede on their behalf.
By the turn of the century, a middle class had developed that was leery of both the business elite and the radical political movements of farmers and laborers in the Midwest and West. The progressives voiced the need for government regulation of business practices to ensure competition and free enterprise. Congress enacted a law regulating railroads in 1887 (the Interstate Commerce Act) and one preventing large firms from controlling a single industry in 1890 (the Sherman Antitrust Act). However, these laws were not rigorously enforced until 1900 to 1920, when Republican President Theodore Roosevelt (1901–1909), Democratic President Woodrow Wilson (1913–1921), and others sympathetic to the views of the Progressives came to power. Many of today’s U.S. regulatory agencies were created during these years, including the Interstate Commerce Commission and the Federal Trade Commission. Muckrakers were journalists who encouraged readers to demand more regulation of business. Upton Sinclair’s The Jungle (1906) was influential and persuaded America of the horrors of the Chicago Union Stock Yards, a giant meat-processing complex that developed in the 1870s. The federal government responded to Sinclair’s book and the Neill-Reynolds Report with the new regulatory Food and Drug Administration. Ida M. Tarbell wrote a series of articles against Standard Oil, which was perceived to be a monopoly. This influenced both the government and public reformers. Attacks by Tarbell and others helped pave the way for public acceptance of the breakup of the company by the Supreme Court in 1911.
Anti-Trust Legislation
President Wilson uses tariff, currency, and anti-trust laws to prime the pump and get the economy working.
When Democrat Woodrow Wilson was elected President with a Democratic Congress in 1912, he implemented a series of progressive policies in economics. In 1913, the 16th Amendment was ratified and a small income tax was imposed on high incomes. The Democrats lowered tariffs with the Underwood Tariff in 1913, although its effects were overwhelmed by the changes in trade caused by the World War that broke out in 1914. Wilson proved especially effective in mobilizing public opinion behind tariff changes by denouncing corporate lobbyists, addressing Congress in person in highly dramatic fashion, and staging an elaborate ceremony when he signed the bill into law. Wilson helped end the long battles over the trusts with the Clayton Antitrust Act of 1914. He managed to convince lawmakers on the issues of money and banking by the creation in 1913 of the Federal Reserve System, a complex business-government partnership that to this day dominates the financial world.
In 1913, Henry Ford adopted the moving assembly line, with each worker doing one simple task in the production of automobiles. Taking his cue from developments during the Progressive Era, Ford offered a very generous wage of $5 a day to his (male) workers. He argued that a mass production enterprise could not survive if average workers could not buy the goods.
16.2.3: The Great Depression and the New Deal
The New Deal was a series of economic programs enacted in the United States between 1933 and 1936 in response to the Great Depression.
Learning Objective
Describe the Great Depression, the Roosevelt administration’s economic response to it in the New Deal, and its lasting effects
Key Points
- The programs focused on what historians call the “3 Rs”: Relief, Recovery, and Reform.
- The New Deal produced a political realignment.
- By 1942–43 the Conservative Coalition had shut down relief programs such as the WPA and CCC and blocked major liberal proposals.
Key Terms
- economic depression
-
In economics, a depression is a sustained, long-term downturn in economic activity in one or more economies. It is a more severe downturn than a recession, which is seen by some economists as part of the modern business cycle.
- political realignment
-
Realigning election (often called a critical election or political realignment) is a term from political science and political history describing a dramatic change in the political system.
- regulation
-
A law or administrative rule, issued by an organization, used to guide or prescribe the conduct of members of that organization; can specifically refer to acts in which a government or state body limits the behavior of businesses.
The Great Depression was a severe worldwide economic depression in the decade preceding World War II. It was the longest, most widespread, and deepest depression of the 20th century. In the 21st century, the Great Depression is commonly used as an example of how far the world’s economy can decline. The depression originated in the U.S., after the fall in stock prices that began around September 4, 1929, and became worldwide news with the stock market crash of October 29, 1929 (known as Black Tuesday).
Cities all around the world were hit hard, especially those dependent on heavy industry. Construction was virtually halted in many countries. Farming and rural areas suffered as crop prices fell by approximately 60%. Facing plummeting demand with few alternate sources of jobs, areas dependent on primary sector industries such as cash cropping, mining, and logging suffered the most. Some economies started to recover by the mid-1930s. In many countries, the negative effects of the Great Depression lasted until the end of World War II.
Great Depression in Unemployment
Unemployment rate in the US 1910–1960, with the years of the Great Depression (1929–1939) highlighted; accurate data begins in 1939.
Great Depression in GDP
USA annual real GDP from 1910 to 1960, with the years of the Great Depression (1929–1939) highlighted.
The New Deal was a series of economic programs enacted in the United States between 1933 and 1936. The programs were in response to the Great Depression, and focused on what historians call the “3 Rs”: Relief, Recovery, and Reform. That is, Relief for the unemployed and poor; Recovery of the economy to normal levels; and Reform of the financial system to prevent a repeat depression.
The New Deal produced a political realignment, making the Democratic Party the majority (as well as the party that held the White House for seven out of nine Presidential terms from 1933 to 1969). Its basis was liberal ideas, the white South, traditional Democrats, big city machines, and the newly empowered labor unions and ethnic minorities. The Republicans were split, with conservatives opposing the entire New Deal as an enemy of business and growth, and liberals accepting some of it and promising to make it more efficient. The realignment crystallized into the New Deal Coalition that dominated most presidential elections into the 1960s, while the opposing Conservative Coalition largely controlled Congress from 1937 to 1963.
From 1934 to 1938, Roosevelt was assisted in his endeavors by a “pro-spender” majority in Congress. Many historians distinguish a “First New Deal” (1933–34) and a “Second New Deal” (1935–38), with the second one being more liberal and more controversial. It included a national work program, the Works Progress Administration (WPA), which made the federal government by far the largest single employer in the nation. The “First New Deal” (1933–34) dealt with diverse groups, from banking and railroads to industry and farming, all of which demanded help for economic survival. The Federal Emergency Relief Administration, for instance, provided $500 million for relief operations by states and cities, while the short-lived CWA (Civil Works Administration) gave localities money to operate make-work projects in 1933-34.
The “Second New Deal” in 1935–38 included the Wagner Act to promote labor unions, the Works Progress Administration (WPA) relief program, the Social Security Act, and new programs to aid tenant farmers and migrant workers. The final major items of New Deal legislation were the creation of the United States Housing Authority and Farm Security Administration, both in 1937, and the Fair Labor Standards Act of 1938, which set maximum hours and minimum wages for most categories of workers.
The economic downturn of 1937–38, and the bitter split between the AFL and CIO labor unions, led to major Republican gains in Congress in 1938. Conservative Republicans and Democrats in Congress joined in the informal Conservative Coalition. By 1942–43 they shut down relief programs such as the WPA and CCC and blocked major liberal proposals. Roosevelt himself turned his attention to the war effort, and won reelection in 1940 and 1944. The Supreme Court declared the National Recovery Administration (NRA) and the first version of the Agricultural Adjustment Act (AAA) unconstitutional, although the AAA was rewritten and then upheld. As the first Republican president elected after FDR, Dwight D. Eisenhower (1953–61) left the New Deal largely intact, even expanding it in some areas. In the 1960s, Lyndon B. Johnson’s Great Society used the New Deal as inspiration for a dramatic expansion of liberal programs, which Republican Richard M. Nixon generally retained. After 1974, however, the call for deregulation of the economy gained bipartisan support. The New Deal regulation of banking (Glass–Steagall Act) was largely repealed in the 1990s. Many New Deal programs remain active, with some still operating under the original names, including the Federal Deposit Insurance Corporation (FDIC), the Federal Crop Insurance Corporation (FCIC), the Federal Housing Administration (FHA), and the Tennessee Valley Authority (TVA). The largest programs still in existence today are the Social Security System and the Securities and Exchange Commission (SEC).
16.2.4: Social Regulation
Social policy refers to guidelines, principles, legislation and activities that affect the living conditions conducive to human welfare.
Learning Objective
Summarize the broad periods of regulation and deregulation in American history
Key Points
- Social policy aims to improve human welfare and to meet human needs for education, health, housing, and social security.
- Important areas of social policy are the welfare state, social security, unemployment insurance, environmental policy, pensions, health care, social housing, social care, child protection, social exclusion, education policy, crime, and criminal justice.
- With regard to economic policy, regulations may include central planning of the economy, remedying market failure, enriching well-connected firms, or benefiting politicians.
- Informal social control is often not sufficient in a large society in which an individual can choose to ignore the sanctions of an individual group, leading to the use of formal, usually government control.
Key Terms
- laissez-faire
-
an economic environment in which transactions between private parties are free from tariffs, government subsidies, and enforced monopolies, with only enough government regulation to protect property rights against theft and aggression.
- social policy
-
Social policy primarily refers to guidelines, principles, legislation, and activities that affect the living conditions conducive to human welfare.
Example
- The Malcolm Wiener Center for Social Policy at Harvard University describes it as “public policy and practice in the areas of health care, human services, criminal justice, inequality, education, and labor.”
Social Policy
Social policy primarily refers to guidelines, principles, legislation, and activities that affect the living conditions conducive to human welfare. The Malcolm Wiener Center for Social Policy at Harvard University describes it as “public policy and practice in the areas of health care, human services, criminal justice, inequality, education, and labor.”
Types of Social Policy
Social policy aims to improve human welfare and to meet human needs for education, health, housing and social security. Important areas of social policy are the welfare state, social security, unemployment insurance, environmental policy, pensions, health care, social housing, social care, child protection, social exclusion, education policy, crime, and criminal justice.
The term ‘social policy’ can also refer to policies which govern human behavior. In the United States, the term ‘social policy’ may be used to refer to abortion and the regulation of its practice, euthanasia, homosexuality, the rules surrounding issues of marriage, divorce, adoption, the legal status of recreational drugs, and the legal status of prostitution.
Economic Policy
With regard to economic policy, regulations may include central planning of the economy, remedying market failure, enriching well-connected firms, or benefiting politicians. In the U.S., throughout the 18th and 19th centuries, the government engaged in substantial regulation of the economy. In the 18th century, the production and distribution of goods were regulated by British government ministries over the American Colonies. Subsidies were granted to agriculture and tariffs were imposed, sparking the American Revolution.
The United States government maintained high tariffs throughout the 19th century and into the 20th century, until the Reciprocal Trade Agreements Act was passed in 1934 under the Franklin D. Roosevelt administration. Other forms of regulation and deregulation came in waves: the deregulation of big business in the Gilded Age, which led to President Theodore Roosevelt’s trust busting from 1901 to 1909; more deregulation and laissez-faire economics in the 1920s, which was followed by the Great Depression and intense governmental regulation under Franklin Roosevelt’s New Deal; and President Ronald Reagan’s deregulation of business in the 1980s.
The Seal of the SEC
Seal of the U.S. Securities and Exchange Commission.
16.2.5: Deregulation
Deregulation is the act or process of removing or reducing state regulations.
Learning Objective
Analyze arguments in favor of deregulation
Key Points
- The stated rationale for deregulation is often that fewer and simpler regulations will lead to a higher level of competitiveness, and therefore higher productivity, greater efficiency, and lower prices overall.
- Opposition to deregulation usually involves apprehension regarding environmental pollution and environmental quality standards (which certain regulations protect, such as those limiting the use of hazardous materials), financial instability, and the rise of unconstrained monopolies.
- Regulatory reform is a parallel development alongside deregulation.
- Deregulation can be distinguished from privatization, because privatization involves taking state-owned service providers into the private sector.
Key Terms
- privatization
-
The transfer of a company or organization from government to private ownership and control.
- deregulation
-
The process of removing constraints, especially government-imposed economic regulations.
- regulation
-
A law or administrative rule, issued by an organization, used to guide or prescribe the conduct of members of that organization; can specifically refer to acts in which a government or state body limits the behavior of businesses.
Deregulation is the act or process of removing or reducing state regulations. It is therefore the opposite of regulation, the process by which the government regulates certain activities. Laissez-faire is an example of a deregulated economic environment in which transactions between private parties are free from government restrictions, tariffs, and subsidies, with only enough regulations to protect property rights. The phrase laissez-faire is French and literally means “let [them] do,” broadly implying “let it be” or “leave it alone.”
The rationale for deregulation, as it is often phrased, is that fewer and simpler regulations will lead to a higher level of competitiveness between businesses, which will in turn generate higher productivity, greater efficiency, and lower prices. The Financial Times Lexicon states that deregulation, in the sense of a substantial easing of government restrictions on industry, is normally justified as a way to promote competition.
Opposition to deregulation usually stems from concerns regarding environmental pollution and environmental quality standards, both of which are protected against by certain types of regulations, such as those that limit the ability of businesses to use hazardous materials. Opposition has also stemmed from fears that a free market, without protective regulations, would become financially unstable or controlled by monopolies.
Regulatory reform is a parallel development alongside deregulation. Regulatory reform refers to government efforts to review regulations with a view toward minimizing them, simplifying them, and making them more cost-effective. Such efforts, given impetus by the Regulatory Flexibility Act of 1980, are embodied in the U.S. Office of Management and Budget’s Office of Information and Regulatory Affairs and the United Kingdom’s Better Regulation Commission. Cost-benefit analysis is frequently used in such reviews. Another catalyst of reform has been regulatory innovations (such as emissions trading), usually suggested by economists.
Deregulation can be distinguished from privatization, because privatization involves moving state-owned service providers into the private sector.
History
Many industries in the United States became regulated by the federal government in the late 19th and early 20th centuries. Entry to some markets was restricted to stimulate and protect private companies as they made initial investments in infrastructure providing essential public services, such as water, electric, and communications utilities. Because the entry of competitors was highly restricted, monopoly situations were created. The government responded by instituting more regulations, this time price and economic controls aimed at protecting the public from these monopolies. Other forms of regulation were motivated by what was seen as the corporate abuse of the public interest by businesses that already existed; this occurred with the railroad industry following the era of the so-called “robber barons.” In the first case, as markets matured to the point where several providers could viably offer similar services, prices determined by the ensuing competition were seen as more economically efficient than those set by the regulatory process. Under these conditions, deregulation became attractive.
One phenomenon that encouraged deregulation was the fact that regulated industries often controlled the government regulatory agencies and used them to serve the industries’ interests. Even when regulatory bodies started out functioning independently, a process known as regulatory capture often saw industry interests come to dominate those of the consumer. A similar pattern has been observed within the deregulation process itself, which is often effectively controlled by the regulated industries through lobbying the legislative process. Such political forces, however, exist in many other forms for other special interest groups.
After a hiatus of several decades, deregulation gained momentum in the 1970s, influenced by research at the University of Chicago and the theories of Ludwig von Mises, Friedrich von Hayek, and Milton Friedman, among others. Two leading think tanks in Washington, the Brookings Institution and the American Enterprise Institute, were also active, holding seminars and publishing studies that advocated deregulatory initiatives throughout the 1970s and 1980s. Alfred E. Kahn played an unusual role, because he published as an academic and also participated in the Carter Administration’s efforts to deregulate transportation.
Friedrich Von Hayek
Austrian economist Friedrich von Hayek and University of Chicago economist Milton Friedman are two classical liberal economists credited with the return of laissez-faire economics and deregulation.
The deregulation movement of the late 20th century had substantial economic effects and engendered substantial controversy.
16.3: Economic Policy
16.3.1: Monetary Policy
Monetary policy is the process by which the monetary authority of a country controls the supply of money.
Learning Objective
Describe how central banking authorities use monetary policy to achieve the twin economic goals of relatively stable prices and low unemployment
Key Points
- Expansionary policy is traditionally used to try to combat unemployment in a recession by lowering interest rates in the hope that easy credit will entice businesses into expanding.
- Contractionary policy is intended to slow inflation in hopes of avoiding the resulting distortions and deterioration of asset values.
- Monetary policy differs from fiscal policy, which refers to taxation, government spending, and associated borrowing.
- The primary tool of monetary policy is open market operations. This entails managing the quantity of money in circulation through the buying and selling of various financial instruments, such as treasury bills, company bonds, or foreign currencies.
Key Terms
- Expansionary policy
-
Expansionary policy increases the total supply of money in the economy more rapidly than usual.
- monetary policy
-
Monetary policy is the process by which the monetary authority of a country controls the supply of money, often targeting a rate of interest for the purpose of promoting economic growth and stability.
- Contractionary policy
-
Contractionary policy expands the money supply more slowly than usual or even shrinks it.
Monetary Policy
Monetary policy is the process by which the monetary authority of a country controls the supply of money, often targeting a rate of interest for the purpose of promoting economic growth and stability. The official goals usually include relatively stable prices and low unemployment. Monetary theory provides insight into how to craft optimal monetary policy. Monetary policy is referred to as either expansionary or contractionary: an expansionary policy increases the total supply of money in the economy more rapidly than usual, while a contractionary policy expands the money supply more slowly than usual or even shrinks it. Expansionary policy is traditionally used to try to combat unemployment in a recession by lowering interest rates in the hope that easy credit will entice businesses into expanding. Contractionary policy is intended to slow inflation in hopes of avoiding the resulting distortions and deterioration of asset values.
Monetary policy differs from fiscal policy, which refers to taxation, government spending, and associated borrowing.
Monetary policy rests on the relationship between the rates of interest in an economy, that is, the price at which money can be borrowed, and the total supply of money. Monetary policy uses a variety of tools to control one or both of these, to influence outcomes such as economic growth, inflation, exchange rates with other currencies, and unemployment. Where currency is under a monopoly of issuance, or where there is a regulated system of issuing currency through banks tied to a central bank, the monetary authority has the ability to alter the money supply and thus influence the interest rate to achieve policy goals. Monetary policy as such dates from the late 19th century, when it was used to maintain the gold standard.
Monetary policies are described as follows: accommodative, if the interest rate set by the central monetary authority is intended to create economic growth; neutral, if it is intended neither to create growth nor combat inflation; or tight, if it is intended to reduce inflation.
There are several monetary policy tools available to achieve these ends: increasing interest rates by fiat, reducing the monetary base, and increasing reserve requirements. All of these have the effect of contracting the money supply and, if reversed, of expanding it.
Within almost all modern nations, special institutions called central banks (such as the Federal Reserve System in the United States) have the task of executing monetary policy, often independently of the executive.
The Federal Reserve Board Building
The Federal Reserve Board Building in Washington, D.C.
The primary tool of monetary policy is open market operations. This entails managing the quantity of money in circulation through the buying and selling of various financial instruments, such as treasury bills, company bonds, or foreign currencies. All of these purchases or sales result in more or less base currency entering or leaving market circulation.
Usually, the short-term goal of open market operations is to achieve a specific short-term interest rate target. In other instances, monetary policy might instead entail targeting a specific exchange rate relative to some foreign currency or to gold. For example, in the case of the United States, the Federal Reserve targets the federal funds rate, the rate at which member banks lend to one another overnight; the monetary policy of China, by contrast, targets the exchange rate between the Chinese renminbi and a basket of foreign currencies.
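As a rough illustration of how open market operations steer a short-term rate, the toy model below assumes a simple downward-sloping relationship between bank reserves and the overnight rate, and buys securities until the rate falls to the target. The functional form and every number in it are invented for the example; they are not drawn from the text or from any actual central bank procedure.

```python
def overnight_rate(reserves_billions):
    """Toy relationship: more reserves in the banking system
    means a lower overnight interbank rate (floored at zero)."""
    return max(0.0, 6.0 - 0.02 * reserves_billions)

def open_market_purchase(reserves_billions, purchase_billions):
    """Buying securities from banks credits their reserve accounts,
    expanding base money; a sale would do the reverse."""
    return reserves_billions + purchase_billions

reserves = 150.0   # assumed starting level of reserves, in billions
target = 2.0       # assumed policy target for the overnight rate, in percent

while overnight_rate(reserves) > target:
    reserves = open_market_purchase(reserves, 5.0)  # buy $5 billion of bills

print(reserves, overnight_rate(reserves))  # 200.0 2.0
```

Running the loop in reverse, that is, selling securities to drain reserves, would push the overnight rate back up, which is how a contractionary stance would be implemented in this toy framework.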
The other primary means of conducting monetary policy include:
- Discount window lending (lender of last resort)
- Fractional deposit lending (changes in the reserve requirement)
- Moral suasion (cajoling certain market players to achieve specified outcomes)
- “Open mouth operations” (talking monetary policy with the market).
16.3.2: Fiscal Policy
Fiscal policy is the use of government revenue collection or taxation, and expenditure (spending) to influence the economy.
Learning Objective
Review the United States’ stances of fiscal policy, methods of funding, and policies regarding borrowing
Key Points
- There are three main stances of fiscal policy: neutral fiscal policy, expansionary fiscal policy, and contractionary fiscal policy.
- Governments can use a budget surplus to do two things: to slow the pace of strong economic growth and to stabilize prices when inflation is too high.
- However, economists debate the effectiveness of fiscal stimulus. The argument mostly centers on crowding out.
- In the classical view, the expansionary fiscal policy also decreases net exports, which has a mitigating effect on national output and income.
Key Terms
- Neoclassical Economists
-
Neoclassical economists generally emphasize crowding out, which occurs when government borrowing leads to higher interest rates that may offset the stimulative impact of spending.
- fiscal policy
-
In economics and political science, fiscal policy is the use of government revenue collection or taxation, and expenditure (spending) to influence the economy.
- Keynesian Economics
-
Keynesian economics suggests that increasing government spending and decreasing tax rates are the best ways to stimulate aggregate demand, and that spending should be decreased and taxes increased only after the economic boom begins.
Fiscal Policy
In economics and political science, fiscal policy is the use of government revenue collection or taxation, and expenditure (spending) to influence the economy. Changes in the level and composition of taxation and government spending can impact the following variables in the economy: aggregate demand and the level of economic activity; the pattern of resource allocation; and the distribution of income.
Stances of Fiscal Policy
There are three main stances of fiscal policy:
- Neutral fiscal policy is usually undertaken when an economy is in equilibrium. Government spending is fully funded by tax revenue and overall the budget outcome has a neutral effect on the level of economic activity.
- Expansionary fiscal policy involves government spending exceeding tax revenue, and is usually undertaken during recessions.
- Contractionary fiscal policy occurs when government spending is lower than tax revenue, and is usually undertaken to pay down government debt.
However, these definitions can be misleading because, even with no changes in spending or tax laws at all, cyclical fluctuations of the economy cause cyclical fluctuations of tax revenues and of some types of government spending, altering the deficit situation; these are not considered to be policy changes. Thus, for example, a government budget that is balanced over the course of the business cycle is considered to represent a neutral fiscal policy stance.
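A small worked example may help separate cyclical swings from the underlying stance. The sketch below averages a series of hypothetical yearly budget balances over one business cycle; the figures are invented for illustration and do not come from the text.

```python
# Hypothetical budget balances as a percent of GDP over one business cycle:
# deficits in the recession years, surpluses in the boom years.
balances = [-2.0, -1.0, 0.0, 1.5, 1.5]

average_balance = sum(balances) / len(balances)

if abs(average_balance) < 0.25:
    stance = "roughly neutral over the cycle"
elif average_balance < 0:
    stance = "expansionary on average"
else:
    stance = "contractionary on average"

print(average_balance, stance)  # 0.0 roughly neutral over the cycle
```

The individual deficits and surpluses here are driven by the cycle rather than by changes in tax or spending law, so only the average over the whole cycle is read as the policy stance.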
Methods of Funding
Governments spend money on a wide variety of things, from the military and police to services like education and healthcare, as well as transfer payments such as welfare benefits. This expenditure can be funded in a number of different ways: taxation, printing money, borrowing money from the population or from abroad, consumption of fiscal reserves, or sale of fixed assets (land).
Borrowing
A fiscal deficit is often funded by issuing bonds, which pay interest either for a fixed period or indefinitely. If the interest and capital requirements are too large, a nation may default on its debts, usually to foreign creditors. Public debt, or public borrowing, refers to the government borrowing from the public.
Economic Effects of Fiscal Policy
Governments use fiscal policy to influence the level of aggregate demand in the economy, in an effort to achieve the economic objectives of price stability, full employment, and economic growth. One school of fiscal policy, developed by John Maynard Keynes, suggests that increasing government spending and decreasing tax rates are the best ways to stimulate aggregate demand, and that spending should be decreased and taxes increased only after the economic boom begins. Keynesian economics argues that this method should be used in times of recession or low economic activity as an essential tool for building the framework for strong economic growth and working towards full employment. In theory, the resulting deficits would be paid for by an expanded economy during the boom that would follow; this was the reasoning behind the New Deal.
John Maynard Keynes and Harry Dexter White
Keynes (right) was the father and founder of Keynesian economics.
Governments can use a budget surplus to do two things: to slow the pace of strong economic growth and to stabilize prices when inflation is too high. Keynesian theory posits that removing spending from the economy will reduce levels of aggregate demand and contract the economy, thus stabilizing prices.
However, economists debate the effectiveness of fiscal stimulus. The argument mostly centers on crowding out: whether government borrowing leads to higher interest rates that may offset the stimulative impact of spending. When the government runs a budget deficit, funds need to come from public borrowing (government bonds), overseas borrowing, or monetizing the debt. When governments fund a deficit by issuing government bonds, interest rates can increase across the market, because government borrowing creates higher demand for credit in the financial markets. This lowers aggregate demand for goods and services, contrary to the objective of a fiscal stimulus. Neoclassical economists generally emphasize crowding out, while Keynesians argue that fiscal policy can still be effective, especially in a liquidity trap where, they argue, crowding out is minimal.
In the classical view, expansionary fiscal policy also decreases net exports, which has a mitigating effect on national output and income. When government borrowing increases interest rates, it attracts capital from foreign investors. This is because, all other things being equal, the bonds issued by a country executing expansionary fiscal policy now offer a higher rate of return. To purchase bonds originating from a certain country, foreign investors must obtain that country’s currency. Therefore, when foreign capital flows into the country undergoing fiscal expansion, demand for that country’s currency increases. The increased demand causes that country’s currency to appreciate. Once the currency appreciates, goods originating from that country cost more to foreigners than they did before, and foreign goods now cost less than they did before. Consequently, exports decrease and imports increase.
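The chain of effects in this paragraph can be traced with a toy numerical example. Every coefficient below is an assumption chosen only to show the direction of each step; none of the magnitudes come from the text or from empirical estimates.

```python
# Toy chain: larger deficit -> higher interest rate -> foreign capital inflow ->
# currency appreciation -> lower net exports. All numbers are illustrative.
extra_borrowing = 3.0                                 # new deficit, % of GDP
interest_rate = 4.0 + 0.3 * extra_borrowing           # crowding-out step
capital_inflow = 2.0 * (interest_rate - 4.0)          # foreign demand for bonds
exchange_rate = 1.00 * (1 + 0.05 * capital_inflow)    # currency appreciates
net_exports_change = -1.5 * (exchange_rate - 1.00)    # exports fall, imports rise

print(f"interest rate: {interest_rate:.2f}%")                          # 4.90%
print(f"currency appreciation: {100 * (exchange_rate - 1.00):.1f}%")   # 9.0%
print(f"change in net exports: {net_exports_change:.2f} % of GDP")     # about -0.13
```

Each step reproduces only the sign of the effect described above; in practice the size of each link depends on how open the economy is and how sensitive investors and trade flows are to interest rates and prices.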
16.3.3: Income Security Policy
Fiscal policy is considered to be any change the government makes to the national budget in order to influence a nation’s economy.
Learning Objective
Analyze the transformation of American fiscal policy in the years of the Great Depression and World War II
Key Points
- The Great Depression showed the American population that there was a growing need for the government to manage economic affairs. The size of the federal government began rapidly expanding in the 1930s, growing from 553,000 paid civilian employees in the late 1920s to 953,891 employees in 1939.
- FDR was important because he implemented the New Deal, a program that offered relief, recovery, and reform to the American nation. In terms of relief, new organizations (such as the Works Progress Administration) saved many U.S. lives.
- In 1971, the U.S. went off the gold standard, effectively ending the Bretton Woods system and allowing the dollar to float. Shortly after that, OPEC pegged the price of oil to gold rather than the dollar. The 1970s were marked by oil shocks, recessions, and inflation in the U.S.
- Fixed income refers to any type of investment under which the borrower/issuer is obliged to make payments of a fixed amount on a fixed schedule: for example, if the borrower has to pay interest at a fixed rate once a year, and to repay the principal amount on maturity.
- Fixed-income securities can be contrasted with equity securities, often referred to as stocks and shares, that create no obligation to pay dividends or any other form of income.
- Governments issue government bonds in their own currency and sovereign bonds in foreign currencies. Local governments issue municipal bonds to finance themselves. Debt issued by government-backed agencies is called an agency bond.
Key Terms
- fiscal
-
Related to the treasury of a country, company, region, or city, particularly to government spending and revenue.
- fixed income
-
Fixed income refers to any type of investment under which the borrower/issuer is obliged to make payments of a fixed amount on a fixed schedule: for example, if the borrower has to pay interest at a fixed rate once a year, and to repay the principal amount on maturity.
Background
Any change the government makes to the national budget in order to influence a nation’s economy is considered fiscal policy. The approach to economic policy in the United States was rather laissez-faire until the Great Depression. The government tried to stay out of economic matters as much as possible and hoped that a balanced budget would be maintained.
Fixed income refers to any type of investment under which the borrower/issuer is obliged to make payments of a fixed amount on a fixed schedule: for example, if the borrower has to pay interest at a fixed rate once a year, and to repay the principal amount on maturity. Fixed-income securities can be contrasted with equity securities, often referred to as stocks and shares, that create no obligation to pay dividends or any other form of income. In order for a company to grow its business, it often must raise money: to finance an acquisition, buy equipment or land or invest in new product development. Governments issue government bonds in their own currency and sovereign bonds in foreign currencies. Local governments issue municipal bonds to finance themselves. Debt issued by government-backed agencies is called an agency bond.
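The fixed schedule described here can be written out directly. The sketch below lists the cash flows of a hypothetical plain fixed-rate bond that pays a set coupon each year and returns the principal at maturity; the face value, coupon rate, and term are assumptions made up for the illustration.

```python
def bond_cash_flows(face_value, annual_coupon_rate, years_to_maturity):
    """Cash flows of a plain fixed-rate bond: a fixed coupon every year,
    plus repayment of the principal in the final year."""
    coupon = face_value * annual_coupon_rate
    flows = [coupon] * years_to_maturity
    flows[-1] += face_value  # principal is returned at maturity
    return flows

# A hypothetical $1,000 bond with a 5% annual coupon maturing in 3 years.
print(bond_cash_flows(1000, 0.05, 3))  # [50.0, 50.0, 1050.0]
```

An equity security, by contrast, carries no such schedule: any dividend is at the issuer’s discretion, which is the distinction the paragraph draws between fixed-income and equity securities.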
The Great Depression
The Great Depression struck countries in the late 1920s and continued throughout the 1930s. It affected some countries more than others, and its effects in the U.S. were especially severe. In 1933, around 25% of all workers in America were unemployed. Many families starved or lost their homes. Some traveled west to find work, to no avail. Because of the economy’s prolonged recovery and the major changes the Great Depression forced the government to make, the emergence of active fiscal policy is often described as one of the defining moments in the history of the United States.
Crowds outside the New York Stock Exchange in 1929.
A solemn crowd gathers outside the Stock Exchange after the crash.
Another contributor to changing the role of government in the 1930s was President Franklin Delano Roosevelt. FDR was important because he implemented the New Deal, a program that would offer relief, recovery, and reform to the American nation. In terms of relief, new organizations (such as the Works Progress Administration) saved the lives of many U.S. citizens. The reform aspect was the most influential part of the New Deal, as it forever changed the role of government in the U.S. economy.
FDR
FDR’s “New Deal” policies were based on the principle of government intervention and regulation of the economy.
World War II and Effects
World War II forced the government to run huge deficits, spending more than it collected, in order to keep up with the production the U.S. military needed. With this deficit spending, the economy recovered, and America rebounded from years of mass unemployment. Wartime full employment had a further benefit: the government’s massive deficits, used to pay for the war, effectively ended the Great Depression. This episode set the standard and showed how necessary it was for the government to play an active role in fiscal policy.
Modern Fiscal Policy
In 1971, the U.S. ended the Bretton Woods gold standard, allowing the dollar to float. Shortly after that, OPEC pegged the price of oil to gold rather than the dollar. The 1970s were marked by oil shocks, recessions, and inflation in the U.S.
In late 2007 and early 2008, the economy entered a particularly bad recession as a result of high oil and food prices and a substantial credit crisis that led to the bankruptcy and eventual federal takeover of certain large and well-established mortgage providers. In an attempt to fix these economic problems, the United States federal government passed a series of costly economic stimulus and bailout packages. As a result, the deficit increased to $455 billion and was projected to continue rising dramatically for years to come, due in part to the severity of the recession and to the high-spending fiscal policy the federal government adopted to help combat the nation’s economic woes.
16.3.4: Regulation and Antitrust Policy
Antitrust laws are a form of marketplace regulation intended to prohibit monopolization and unfair business practices.
Learning Objective
Assess the balance the federal government attempts to strike between regulation and deregulation
Key Points
- A number of governmental programs review regulatory innovations in order to minimize and simplify those regulations, and to make regulations more cost-effective.
- Government agencies known as competition regulators, as well as private litigants, apply antitrust and consumer protection laws in the hopes of preventing market failure.
- Large companies with huge cash reserves and large lines of credit can stifle competition by engaging in predatory pricing, in which they intentionally sell their products and services at a loss for a time, in order to force smaller competitors out of business.
Key Terms
- Antitrust Law
-
The United States antitrust law is a body of law that prohibits anti-competitive behavior (monopolization) and unfair business practices. Antitrust laws are intended to encourage competition in the marketplace.
- regulation
-
A regulation is a legal provision that creates, limits, or constrains a right; creates or limits a duty; or allocates a responsibility.
A regulation is a legal provision with many possible functions. It can create or limit a right; it can create or limit a duty; or it can allocate a responsibility. Regulations take many forms, including legal restrictions from a government authority, contractual obligations, industry self-regulations, social regulations, co-regulations, and market regulations.
State, or governmental, regulation attempts to produce outcomes which might not otherwise occur. Common examples of this type of regulation include laws that control prices, wages, market entries, development approvals, pollution effects, employment for certain people in certain industries, standards of production for certain goods, the military forces, and services.
The study of formal (legal and/or official) and informal (extra-legal and/or unofficial) regulation is one of the central concerns of the sociology of law. Scholars in this field are particularly interested in exploring the degree to which formal and legal regulation actually changes social behavior.
Deregulation, Regulatory Reform, and Liberalization
According to the Competitive Enterprise Institute, government regulation in the United States costs the economy approximately $1.75 trillion per year, a number that exceeds the combined total of all corporate pretax profits. Because of this, programs exist that review regulatory initiatives in order to minimize and simplify regulations, and to make them more cost-effective. Such efforts, given impetus by the Regulatory Flexibility Act of 1980 in the United States, are embodied in the United States Office of Management and Budget’s Office of Information and Regulatory Affairs. Economists also occasionally develop regulation innovations, such as emissions trading.
U.S. Department of Justice
The Department of Justice is home to the U.S. anti-trust enforcers.
U.S. Antitrust Law
U.S. antitrust law is a body of law that prohibits anti-competitive behavior (monopolization) and unfair business practices. Intended to encourage competition in the marketplace, these laws make it illegal for businesses to employ practices that hurt other businesses or consumers, or that generally violate standards of ethical behavior. Government agencies known as competition regulators, along with private litigants, apply the antitrust and consumer protection laws in hopes of preventing market failure. Originally, these types of laws emerged in the U.S. to combat “corporate trusts,” which were big businesses. Other countries use the term “competition law” for this action. Many countries, including most of the Western world, have antitrust laws of some form. For example, the European Union has provisions under the Treaty of Rome to maintain fair competition, as does Australia under its Trade Practices Act of 1974.
Antitrust Rationale
Antitrust laws prohibit monopolization, attempted monopolization, agreements that restrain trade, anticompetitive mergers, and, in some circumstances, price discrimination in the sale of commodities.
Monopolization, or attempts to monopolize, are offenses that an individual firm may commit. Typically, this behavior involves a firm using unreasonable, unlawful, and exclusionary practices that are intended to secure, for that firm, control of a market. Large companies with huge cash reserves and large lines of credit can stifle competition by engaging in predatory pricing, in which they intentionally sell their products and services at a loss for a time, in order to force smaller competitors out of business. Afterwards, with no competition, these companies are free to consolidate control of an industry and charge whatever prices they desire.
A number of barriers make it difficult for new competitors to enter a market, including the fact that entry requires a large upfront investment, specific investments in infrastructure, and exclusive arrangements with distributors, customers and wholesalers. Even if competitors do shoulder these costs, monopolies will have ample warning and time in which to either buy out the competitor, engage in its own research, or return to predatory pricing long enough to eliminate the upstart business.
Federal Antitrust Actions
The federal government, via the Antitrust Division of the United States Department of Justice, and the Federal Trade Commission, can bring civil lawsuits enforcing the laws. Famous examples of these lawsuits include the government’s break-up of AT&T’s local telephone service monopoly in the early 1980s, and governmental actions against Microsoft in the late 1990s.
The federal government also reviews potential mergers to prevent market concentration. As a result of the Hart-Scott-Rodino Antitrust Improvements Act, larger companies attempting to merge must first notify the Federal Trade Commission and the Department of Justice’s Antitrust Division prior to consummating a merger. These agencies then review the proposed merger by defining what the market is, and then determining the market concentration using the Herfindahl-Hirschman Index and each company’s market share. The government is hesitant to allow a company to develop market power, because if unchecked, such power can lead to monopoly behavior.
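The Herfindahl-Hirschman Index itself is simple arithmetic: the sum of the squared market shares of every firm in the market. The short Python sketch below works through a hypothetical merger; the market shares are invented, and the 1,500/2,500 concentration bands follow the thresholds published in the 2010 DOJ/FTC Horizontal Merger Guidelines.

# Minimal sketch of the HHI used in merger review (hypothetical shares).

def hhi(shares_percent):
    """HHI is the sum of squared market shares, with shares in percent."""
    return sum(s ** 2 for s in shares_percent)

pre_merger = [30, 25, 20, 15, 10]   # five firms
post_merger = [30, 25, 20, 25]      # the two smallest firms merge

print("Pre-merger HHI:", hhi(pre_merger))    # 2250: moderately concentrated
print("Post-merger HHI:", hhi(post_merger))  # 2550: highly concentrated
print("Change in HHI:", hhi(post_merger) - hhi(pre_merger))  # +300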
Exemptions to Antitrust Laws
Several types of organizations are exempt from federal antitrust laws, including labor unions, agricultural cooperatives, and banks. Mergers and joint agreements of professional football, hockey, baseball, and basketball leagues are exempt. Newspapers run by joint operating agreements are also allowed limited antitrust immunity under the Newspaper Preservation Act of 1970.
16.3.5: Subsidies and Contracting
A subsidy is assistance paid to business, economic sectors, or producers; a contract is an agreement between two or more parties.
Learning Objective
Discuss the aims of subsidies and their effects on supply and demand
Key Points
- Subsidies are often regarded as a form of protectionism or trade barrier by making domestic goods and services artificially competitive against imports. Subsidies may distort markets and can impose large economic costs.
- A subsidy may be an efficient means of correcting a market failure. For example, economic analysis may suggest that direct subsidies (cash benefits) would be more efficient than indirect subsidies (such as trade barriers).
- Government procurement in the United States addresses the federal government’s need to acquire goods, services (including construction), and interests in real property.
- The authority of a contracting officer (the Government’s agent) to contract on behalf of the Government is set forth in public documents (a warrant) that a person dealing with the contracting officer can review.
Key Terms
- Contract
-
An agreement entered into voluntarily by two or more parties with the intention of creating a legal obligation, which may have elements in writing, though contracts can be made orally.
- subsidy
-
Assistance paid to a business, economic sector, or producers.
Subsidy
A subsidy is assistance paid to a business, economic sector or producers. Most subsidies are paid by the government to producers or distributed as subventions in an industry to prevent the decline of that industry, to increase the prices of its products, or simply to encourage the hiring of more labor. Some subsidies are to encourage the sale of exports; some are for food to keep down the cost of living; and other subsidies encourage the expansion of farm production.
Subsidies
This graph depicts U.S. farm subsidies in 2005.
Subsidies are often regarded as a form of protectionism or trade barrier by making domestic goods and services artificially competitive against imports. Subsidies may distort markets and can impose large economic costs. Financial assistance in the form of subsidies may come from a government, but the term subsidy may also refer to assistance granted by others, such as individuals or non-governmental institutions.
Examples of industries or sectors where subsidies are often found include utilities, gasoline in the United States, welfare, farm subsidies, and (in some countries) certain aspects of student loans.
Types of Subsidies
Ways to classify subsidies include the reason behind them, the recipients of the subsidy, and the source of the funds. One of the primary ways to classify subsidies is by the means of distributing the subsidy. The term subsidy may or may not have a negative connotation. A subsidy may be characterized as inefficient relative to no subsidies; inefficient relative to other means of producing the same results; or “second-best,” implying an inefficient but feasible solution.
In other cases, a subsidy may be an efficient means of correcting a market failure. For example, economic analysis may suggest that direct subsidies (cash benefits) would be more efficient than indirect subsidies (such as trade barriers); this does not necessarily imply that direct subsidies are bad, but they may be more efficient or effective than other mechanisms to achieve the same (or better) results. Insofar as they are inefficient, subsidies would generally be considered bad, as economics is the study of efficient use of limited resources.
Effects
In standard supply and demand curve diagrams, a subsidy shifts either the demand curve up or the supply curve down. Subsidies that increase production tend to result in lower prices, while subsidies that increase demand tend to result in higher prices. Both cases result in a new economic equilibrium. The recipient of the subsidy may need to be distinguished from the beneficiary of the subsidy, and this analysis will depend on the elasticity of supply and demand as well as other factors. The net effect and the identification of winners and losers are rarely straightforward, but subsidies generally result in a transfer of wealth from one group to another (or between sub-groups).
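To make the incidence point concrete, the sketch below uses hypothetical linear demand and supply curves and a per-unit subsidy paid to producers. Output rises, the price consumers pay falls, and the price producers receive rises; how the benefit is split depends on the relative slopes (elasticities) of the two curves. All parameter values are invented for illustration.

# Hypothetical linear curves:  Demand Q = a - b*Pc,  Supply Q = c + d*Pp,
# where Pp = Pc + subsidy (the producer receives the consumer price plus the
# per-unit subsidy).

def equilibrium(subsidy, a=100.0, b=1.0, c=10.0, d=2.0):
    pc = (a - c - d * subsidy) / (b + d)   # price paid by consumers
    pp = pc + subsidy                      # price received by producers
    q = a - b * pc                         # equilibrium quantity
    return pc, pp, q

for s in (0.0, 6.0):
    pc, pp, q = equilibrium(s)
    print(f"subsidy={s:4.1f}  consumer price={pc:5.2f}  "
          f"producer price={pp:5.2f}  quantity={q:5.2f}")
# With these slopes, a subsidy of 6 lowers the consumer price from 30 to 26,
# raises the producer price from 30 to 32, and raises quantity from 70 to 74.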
Government Procurement
Government procurement in the United States addresses the federal government’s need to acquire goods, services (including construction), and interests in real property. It involves the acquiring by contract, usually with appropriated funds, of supplies, services, and interests in real property by and for the use of the Federal Government. This is done through purchase or lease, whether the supplies, services, or interests are already in existence or must be created, developed, demonstrated, and evaluated.
The authority of a contracting officer (the Government’s agent) to contract on behalf of the Government is set forth in public documents (a warrant) that a person dealing with the contracting officer can review. The contracting officer has no authority to act outside of his or her warrant or to deviate from the laws and regulations controlling Federal Government contracts. The private contracting party is held to know the limitations of the contracting officer’s authority, even if the contracting officer does not. This makes contracting with the United States a very structured and restricted process. As a result, unlike in the commercial arena, where the parties have great freedom, a contract with the U.S. Government must comply with the laws and regulations that permit it, and must be made by a contracting officer with actual authority to execute the contract.
16.3.6: The Public Debt
Government debt, also known as public debt, or national debt, is the debt owed by a central government.
Learning Objective
Describe how countries finance activities by issuing debt
Key Points
- Government debt is one method of financing government operations but not the only method. Governments can also create money to monetize their debts, thus removing the need to pay interest. However, this practice simply reduces government interest costs rather than truly canceling government debt.
- As the government draws its income from much of the population, government debt is an indirect debt of the taxpayers.
- Lending to a national government in the country’s own currency is often considered risk free and is done at a so-called risk-free interest rate. This is because, up to a point, the debt and interest can be repaid by raising tax receipts, a reduction in spending, or by simply printing more money.
Key Terms
- Public Debt
-
Government debt, also known as public debt, or national debt, is the debt owed by a central government.
- Sovereign Debt
-
Sovereign debt usually refers to government debt that has been issued in a foreign currency.
- Government Bond
-
A government bond is a bond issued by a national government. Such bonds are often denominated in the country’s domestic currency.
Government Debt
Government debt, also known as public debt, or national debt, is the debt owed by a central government. In the U.S. and other federal states, “government debt” may also refer to the debt of a state or provincial government, municipal or local government. Government debt is one method of financing government operations, but it is not the only method. Governments can also create money to monetize their debts, thereby removing the need to pay interest. However, this practice simply reduces government interest costs rather than truly canceling government debt. The map below shows each country’s public debt as a percentage of GDP in 2011.
Global Public Debt
This map shows each country’s public debt as a percentage of their GDP.
Governments usually borrow by issuing securities, such as government bonds and bills. Less creditworthy countries sometimes borrow directly from a supranational organization (such as the World Bank) or from international financial institutions. As the government draws its income from much of the population, government debt is an indirect debt of the taxpayers. Government debt can be categorized as internal debt (owed to lenders within the country) and external debt (owed to foreign lenders). Sovereign debt usually refers to government debt that has been issued in a foreign currency. A broader definition of government debt may consider all government liabilities, including future pension payments and payments for goods and services the government has contracted but not yet paid.
Government and Sovereign Bonds
A government bond is a bond issued by a national government. Such bonds are often denominated in the country’s domestic currency. Most developed country governments are prohibited by law from printing money directly, that function having been relegated to their central banks. However, central banks may buy government bonds in order to finance government spending, thereby monetizing the debt.
Bonds issued by national governments in foreign currencies are normally referred to as sovereign bonds. Investors in sovereign bonds denominated in foreign currency have the additional risk that the issuer may be unable to obtain foreign currency to redeem the bonds.
Denominated in Reserve Currencies
Governments often borrow money in a currency in which the demand for debt securities is strong. An advantage of issuing bonds in a currency such as the US dollar, the pound sterling, or the euro is that many investors wish to invest in such bonds.
Risk
Lending to a national government in the country’s own sovereign currency is often considered “risk free” and is done at a so-called “risk-free interest rate.” This is because, up to a point, the debt and interest can be repaid by raising tax receipts (either through economic growth or higher tax rates), by reducing spending, or, failing that, by simply printing more money. The last option, however, is widely considered to increase inflation and reduce the value of the invested capital. A typical example is Weimar Germany of the 1920s, which suffered from hyperinflation because its government could not pay the national debt deriving from the costs of World War I.
In practice, the market interest rate tends to be different for debts of different countries. An example is in borrowing by different European Union countries denominated in euros. Even though the currency is the same in each case, the yield required by the market is higher for some countries’ debt than for others. This reflects the views of the market on the relative solvency of the various countries and the likelihood that the debt will be repaid.
A politically unstable state is anything but risk free, as it may cease its payments. Another political risk is caused by external threats: it is very uncommon for invaders to accept responsibility for the national debt of an annexed state or of an organization they considered rebels. For example, all borrowings by the Confederate States of America were left unpaid after the American Civil War. On the other hand, in the modern era, the transition from dictatorship and illegitimate government to democracy does not automatically free a country of the debt contracted by the former government. Today’s highly developed global credit markets would be less likely to lend to a country that repudiated its previous debt, or might demand punishing interest rates that would be unacceptable to the borrower.
U.S. Treasury bonds denominated in U.S. dollars are often considered “risk free” in the U.S. This disregards the risk to foreign purchasers of depreciation in the dollar relative to the lender’s currency. In addition, a risk-free status implicitly assumes the stability of the US government and its ability to continue repayments during any financial crisis.
Lending to a national government in a currency other than its own does not give the same confidence in the ability to repay, but this may be offset by reducing the exchange rate risk to foreign lenders. Usually small states with volatile economies have most of their national debt in foreign currency.
16.4: Taxes
16.4.1: The Federal Tax System
The United States is a federal republic with autonomous state and local governments with taxes imposed at each level.
Learning Objective
Describe the various levels of the tax structure in the United States
Key Points
- Taxes are imposed on net income of individuals and corporations by federal, most state, and some local governments. Federal tax rates vary from 10% to 35% of taxable income. State and local tax rates vary widely by jurisdiction, from 0% to 12.696% and many are graduated.
- Payroll taxes are imposed by the federal and all state governments. These include Social Security and Medicare taxes imposed on both employers and employees, at a combined rate of 15.3% (13.3% for 2011).
- Property taxes are imposed by most local governments and many special purpose authorities based on the fair market value of property. Sales taxes are imposed on the price at retail sale of many goods and some services by most states and some localities.
Key Terms
- tariff
-
a system of government-imposed duties levied on imported or exported goods; a list of such duties, or the duties themselves
- autonomous
-
Self-governing. Governing independently.
The Federal Tax System
The United States is a federal republic with autonomous state and local governments. Taxes are imposed at each of these levels. These include taxes on income, payroll, property, sales, imports, estates and gifts, as well as various fees. The taxes collected in 2010 by federal, state and municipal governments amounted to 24.8% of the GDP.
U.S. Tax Revenues
U.S. Tax Revenues as a Percentage of GDP
Taxes are imposed on net income of individuals and corporations by the federal, most state, and some local governments. Citizens and residents are taxed on worldwide income and allowed a credit for foreign taxes. Income subject to tax is determined under tax rules, not accounting principles, and includes almost all income. Most business expenses reduce taxable income, though limits apply to a few expenses. Individuals are permitted to reduce taxable income by personal allowances and certain non-business expenses that can include home mortgage interest, state and local taxes, charitable contributions, medical, and certain other expenses incurred above certain percentages of income. State rules for determining taxable income often differ from federal rules. Federal tax rates vary from 10% to 35% of taxable income. State and local tax rates vary widely by jurisdiction, from 0% to 12.696% and many are graduated. State taxes are generally treated as a deductible expense for federal tax computation. Certain alternative taxes may apply. The United States is the only country in the world that taxes its nonresident citizens on worldwide income or estate, in the same manner and rates as residents.
Payroll taxes are imposed by the federal and all state governments. These include Social Security and Medicare taxes imposed on both employers and employees, at a combined rate of 15.3% (13.3% for 2011). Social Security tax applies only to the first $106,800 of wages in 2009 through 2011. Employers also must withhold income taxes on wages. An unemployment tax and certain other levies apply.
Property taxes are imposed by most local governments and many special purpose authorities based on the fair market value of property. School and other authorities are often separately governed, and impose separate taxes. Property tax is generally imposed only on realty, though some jurisdictions tax some forms of business property. Property tax rules and rates vary widely.
Sales taxes are imposed on the retail price of many goods and some services by most states and some localities. Sales tax rates vary widely among jurisdictions, from 0% to 16%, and may vary within a jurisdiction based on the particular goods or services taxed. Sales tax is collected by the seller at the time of sale, or remitted as use tax by buyers of taxable items who did not pay sales tax.
The United States imposes tariffs or customs duties on the import of many types of goods from many jurisdictions. This tax must be paid before the goods can be legally imported. Rates of duty vary from 0% to more than 20%, based on the particular goods and country of origin.
Estate and gift taxes are imposed by the federal and some state governments on property passed by inheritance or donation. Similar to federal income taxes, federal estate and gift taxes are imposed on worldwide property of citizens and residents and allow a credit for foreign taxes.
16.4.2: Federal Income Tax Rates
Federal income tax is levied on the income of individuals or businesses, which is the total income minus allowable deductions.
Learning Objective
Summarize the key moments in the development of a national income tax
Key Points
- In order to help pay for its war effort in the American Civil War, the federal government imposed its first personal income tax on August 5, 1861 as part of the Revenue Act of 1861.
- In 1913, the Sixteenth Amendment to the Constitution made the income tax a permanent fixture in the U.S. tax system.
- Taxpayers generally must self-assess income tax by filing tax returns. Advance payments of tax are required in the form of withholding tax or estimated tax payments.
- The 2012 marginal tax rates for a single person are 10 percent for $0–8,700, 15 percent for $8,701–35,350, 25 percent for $35,351–85,650, 28 percent for $85,651–178,650, 33 percent for $178,651–388,350, and 35 percent for $388,351 and up.
- In the United States, payroll taxes are assessed by the federal government, all fifty states, the District of Columbia, and numerous cities.
- The United States social insurance system is funded by a tax similar to an income tax.
Key Term
- net
-
The amount remaining after expenses are deducted; profit.
Federal income tax is levied on the income of individuals or businesses. When the tax is levied on the income of companies, it is often called a corporate tax, corporate income tax or profit tax. Individual income taxes often tax the total income of the individual, while corporate income taxes often tax net income. Taxable income is total income less allowable deductions.
Income is broadly defined. Most business expenses are deductible. Individuals may also deduct a personal allowance and certain personal expenses. These include home mortgage interest, state taxes, contributions to charity, and some other items. Some of these deductions are subject to limits. Capital gains are taxable, and capital losses reduce taxable income only to the extent of gains. Individuals currently pay a lower rate of tax on capital gains and certain corporate dividends.
U.S. Income Taxes out of Total Taxes
This graph shows the revenue the U.S. government has made purely from income tax, in relation to all taxes.
In order to help pay for the American Civil War, the federal government imposed its first personal income tax on August 5, 1861 as part of the Revenue Act of 1861. The tax rate was 3 percent of all incomes over $800 ($20,693 in 2011 dollars). This tax was repealed and replaced by another income tax in 1862.
In 1894, Democrats in Congress passed the Wilson-Gorman tariff, which imposed the first peacetime income tax. The rate was 2 percent on income over $4,000 ($107,446.15 in 2011 dollars), which meant fewer than 10 percent of households would pay the tax. The purpose of the Wilson-Gorman tariff was to make up for revenue that would be lost through other tariff reductions.
In 1895 the Supreme Court, in Pollock v. Farmers’ Loan & Trust Co., ruled that a tax based on receipts from the use of property was unconstitutional. The Court held that taxes on rents from real estate, interest income from personal property, and other income from personal property were treated as direct taxes on property, and had to be apportioned. Since apportionment of income taxes was impractical, this decision effectively prohibited a federal tax on income from property.
In 1913, the Sixteenth Amendment to the Constitution made the income tax a permanent fixture in the U.S. tax system. The United States Supreme Court, in Stanton v. Baltic Mining Co., ruled that the amendment conferred no new power of taxation; it simply prevented the courts from taking the power of income taxation away from Congress. In fiscal year 1918, annual internal revenue collections passed the billion-dollar mark for the first time, rising to $5.4 billion by 1920. With the advent of World War II, employment increased, as did tax collections, which reached $7.3 billion. The withholding tax on wages was introduced in 1943 and was instrumental in increasing the number of taxpayers to 60 million and tax collections to $43 billion by 1945.
Taxpayers generally must self-assess income tax by filing tax returns. Advance payments of tax are required in the form of withholding tax or estimated tax payments. Taxes are determined separately by each jurisdiction imposing tax. Due dates and other administrative procedures vary by jurisdiction. April 15 is the due date for individual returns for federal and many state and local returns. Tax, as determined by the taxpayer, may be adjusted by the taxing jurisdiction.
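As a worked example of how a graduated rate schedule translates into tax owed, the Python sketch below applies the 2012 single-filer brackets listed in the Key Points above to a hypothetical taxable income. It is illustrative only: it ignores deductions, exemptions, credits, and the alternative minimum tax.

# 2012 single-filer brackets from the Key Points above: (upper bound, rate).
BRACKETS_2012_SINGLE = [
    (8_700, 0.10),
    (35_350, 0.15),
    (85_650, 0.25),
    (178_650, 0.28),
    (388_350, 0.33),
    (float("inf"), 0.35),
]

def federal_tax(taxable_income):
    """Tax each slice of income at its own marginal rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS_2012_SINGLE:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

income = 50_000
tax = federal_tax(income)
print(f"Tax on ${income:,}: ${tax:,.2f} "
      f"(average rate {tax / income:.1%}, marginal rate 25%)")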
Social Security Tax
The United States social insurance system is funded by a tax similar to an income tax. Social Security tax of 6.2% is imposed on wages paid to employees. The tax is imposed on both the employer and the employee. For 2011 and 2012, the employee tax was reduced from 6.2% to 4.2%. The maximum amount of wages subject to the tax for 2009 through 2011 was $106,800. This amount is indexed for inflation. A companion Medicare tax of 1.45% of wages is imposed on employers and employees, with no limitation. A self-employment tax in like amounts (totaling 15.3%, or 13.3% for 2011 and 2012) is imposed on self-employed persons.
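The payroll tax arithmetic described above can be summarized in a few lines. The sketch below uses the 2011 figures from the text (the temporarily reduced 4.2% employee Social Security rate, the 6.2% employer rate, the $106,800 wage base, and 1.45% Medicare with no cap); the wage amounts are hypothetical.

# Payroll tax sketch using the 2011 parameters described above.
SS_WAGE_BASE = 106_800
SS_EMPLOYEE_RATE = 0.042   # reduced from 6.2% for 2011 and 2012
SS_EMPLOYER_RATE = 0.062
MEDICARE_RATE = 0.0145     # applies to all wages, no cap

def payroll_taxes(wages):
    ss_wages = min(wages, SS_WAGE_BASE)   # Social Security applies only up to the wage base
    employee = ss_wages * SS_EMPLOYEE_RATE + wages * MEDICARE_RATE
    employer = ss_wages * SS_EMPLOYER_RATE + wages * MEDICARE_RATE
    return employee, employer

for wages in (50_000, 150_000):
    employee, employer = payroll_taxes(wages)
    print(f"wages=${wages:,}: employee share=${employee:,.2f}, "
          f"employer share=${employer:,.2f}")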
Payroll Tax
Payroll taxes generally fall into two categories: deductions from an employee’s wages and taxes paid by the employer based on the employee’s wages. In the United States, payroll taxes are assessed by the federal government, all fifty states, the District of Columbia, and numerous cities. These taxes are imposed on employers and employees and on various compensation bases and are collected and paid to the taxing jurisdiction by the employers. Most jurisdictions imposing payroll taxes require reporting quarterly and annually in most cases, and electronic reporting is generally required for all but small employers.
16.4.3: Tax Loopholes and Lowered Taxes
Tax evasion is the term for efforts by individuals, corporations, trusts and other entities to evade taxes by illegal means.
Learning Objective
Describe the legal and illegal ways individuals and corporations avoid paying some or all taxes owed
Key Points
- The term “tax mitigation” has also been used in the tax regulations of some jurisdictions to distinguish tax avoidance foreseen by the legislators from tax avoidance that exploits loopholes in the law.
- The United States Supreme Court has stated that “The legal right of an individual to decrease the amount of what would otherwise be his taxes or altogether avoid them, by means which the law permits, cannot be doubted”.
- Tax evasion is the general term for efforts by individuals, corporations, trusts, and other entities to evade taxes by illegal means. Both tax avoidance and evasion can be viewed as forms of tax noncompliance, as they describe a range of activities that are unfavorable to a state’s tax system.
- The Internal Revenue Service has identified small business and sole proprietorship employees as the largest contributors to the tax gap between what Americans owe in federal taxes and what the federal government receives.
- When tips, side jobs, cash receipts, and barter income are not reported, it is illegal tax evasion, because no tax is paid on that income. Similarly, those who are self-employed or run small businesses may fail to declare income and evade the payment of taxes.
Key Terms
- evasion
-
The act of eluding or avoiding, particularly the pressure of an argument, accusation, charge, or interrogation; artful means of eluding.
- loophole
-
A method of escape, especially an ambiguity or exception in a rule that can be exploited in order to avoid its effect.
- mitigation
-
relief; alleviation
Example
- In the United States, the IRS estimate of the 2001 tax gap was $345 billion. For 2006, the tax gap is estimated to be $450 billion. A more recent study estimates the 2008 tax gap in the range of $450-500 billion, and unreported income to be approximately $2 trillion. Thus, 18-19% of total reportable income is not properly reported to the IRS.
Tax Avoidance
Tax avoidance is the legal utilization of the tax regime to one’s own advantage, to reduce the amount of tax that is payable by means that are within the law. The term “tax mitigation” was originally used by tax advisors as an alternative to the pejorative term “tax avoidance.” The term has also been used in the tax regulations of some jurisdictions to distinguish tax avoidance foreseen by the legislators from tax avoidance that exploits loopholes in the law. The United States Supreme Court has stated that “The legal right of an individual to decrease the amount of what would otherwise be his taxes or altogether avoid them, by means which the law permits, cannot be doubted.”
Tax Evasion
Tax evasion is the general term for efforts by individuals, corporations, trusts and other entities to evade taxes by illegal means. Both tax avoidance and evasion can be viewed as forms of tax noncompliance, as they describe a range of activities that are unfavorable to a state’s tax system. Tax evasion is an activity commonly associated with the underground economy, and one measure of the extent of tax evasion is the amount of unreported income, namely the difference between the amount of income that should legally be reported to the tax authorities and the actual amount reported, which is also sometimes referred to as the “tax gap.”
Taxes are imposed in the United States at the federal, state, and local levels. These include taxes on income, payroll, property, sales, imports, estates, and gifts, as well as various fees. In 2010, taxes collected by federal, state and municipal governments amounted to 24.8% of GDP. Under the federal law of the United States of America, tax evasion or tax fraud is the purposeful illegal attempt of a taxpayer to evade payment of a tax imposed by the federal government. Conviction of tax evasion may result in fines and imprisonment.
U.S. Income Taxes out of Total Taxes
This graph shows the revenue the U.S. government has made purely from income tax, in relation to all taxes.
The Internal Revenue Service has identified small business and sole proprietorship employees as the largest contributors to the tax gap between what Americans owe in federal taxes and what the federal government receives. Rather than W-2 wage earners and corporations, small business and sole proprietorship employees contribute to the tax gap, because there are few ways for the government to know about skimming or non-reporting of income without mounting more significant investigations. When tips, side jobs, cash receipts, and barter income are not reported, it is illegal tax evasion, because no tax is paid on that income. Similarly, those who are self-employed or run small businesses may fail to declare income and evade the payment of taxes.
Examples of Tax Evasion
An IRS report indicates that, in 2009, 1,470 individuals earning more than $1,000,000 annually faced a net tax liability of zero or less. Also, in 1998 alone, a total of 94 corporations faced a net liability of less than half the full 35% corporate tax rate, and the corporations Lyondell Chemical, Texaco, Chevron, CSX, Tosco, PepsiCo, Owens & Minor, Pfizer, JP Morgan, Saks, Goodyear, Ryder, Enron, Colgate-Palmolive, Worldcom, Eaton, Weyerhaeuser, General Motors, El Paso Energy, Westpoint Stevens, MedPartners, Phillips Petroleum, McKesson, and Northrop Grumman all had net negative tax liabilities. Additionally, this phenomenon was widely documented regarding General Electric in early 2011. A Government Accountability Office study found that, from 1998 to 2005, 55% of United States companies paid no federal income taxes during at least one year of the seven-year period it studied. A 2011 review by Citizens for Tax Justice and the Institute on Taxation and Economic Policy of Fortune 500 companies that were profitable every year from 2008 through 2010 found that these companies paid an average tax rate of 18.5%, and that 30 of them actually had a negative income tax due.
In the United States, the IRS estimate of the 2001 tax gap was $345 billion. For 2006, the tax gap is estimated to be $450 billion. A more recent study estimates the 2008 tax gap in the range of $450–$500 billion, and unreported income to be approximately $2 trillion. Thus, 18-19 percent of total reportable income is not properly reported to the IRS.
16.5: Politics and Economic Policy
16.5.1: Fiscal Policy and Policy Making
Fiscal policy is the use of government revenue collection (taxation) and expenditure (spending) to influence the economy.
Learning Objective
Identify the central elements of fiscal policy
Key Points
- The two main instruments of fiscal policy are government taxation and expenditure.
- There are three main stances in fiscal policy: neutral, expansionary, and contractionary.
- Even with no changes in spending or tax laws at all, cyclic fluctuations of the economy cause cyclic fluctuations of tax revenues and of some types of government spending, which alters the deficit situation; these are not considered fiscal policy changes.
Key Terms
- fiscal policy
-
Government policy that attempts to influence the direction of the economy through changes in government spending or taxes.
- taxation
-
The act of imposing taxes and the fact of being taxed
- expenditure
-
Act of expending or paying out.
In economics and political science, fiscal policy is the use of the government budget, through revenue collection (taxation) and expenditure (spending), to influence the economy. The two main instruments of fiscal policy are government taxation and expenditure. Changes in the level and composition of taxation and government spending can impact the following variables in the economy: (1) aggregate demand and the level of economic activity; (2) the pattern of resource allocation; and (3) the distribution of income.
The three main stances of fiscal policy are:
- Neutral fiscal policy, usually undertaken when an economy is in equilibrium. Government spending is fully funded by tax revenue and overall the budget outcome has a neutral effect on the level of economic activity.
- Expansionary fiscal policy, which involves government spending exceeding tax revenue, and is usually undertaken during recessions.
- Contractionary fiscal policy, which occurs when government spending is lower than tax revenue, and is usually undertaken to pay down government debt.
These definitions can be misleading, however. Even with no changes in spending or tax laws at all, cyclic fluctuations of the economy cause cyclic fluctuations of tax revenues and of some types of government spending, which alters the deficit situation; these are not considered to be policy changes. Therefore, for purposes of the above definitions, “government spending” and “tax revenue” are normally replaced by “cyclically adjusted government spending” and “cyclically adjusted tax revenue.” Thus, for example, a government budget that is balanced over the course of the business cycle is considered to represent a neutral fiscal policy stance.
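The distinction between the raw budget balance and the cyclically adjusted balance can be illustrated with a toy calculation. In the Python sketch below, the assumption that revenue falls in proportion to the output gap (with a sensitivity of 0.5) is purely illustrative, not an official adjustment method.

# Toy illustration: judge the fiscal stance from cyclically adjusted figures,
# not the raw balance. The revenue-sensitivity coefficient is an assumption.

def fiscal_stance(spending, revenue, output_gap, revenue_sensitivity=0.5):
    """output_gap is (actual - potential) GDP as a fraction of potential GDP."""
    adjusted_revenue = revenue * (1 - revenue_sensitivity * output_gap)
    balance = adjusted_revenue - spending
    if balance > 0:
        return balance, "contractionary"
    if balance < 0:
        return balance, "expansionary"
    return balance, "neutral"

# In a recession (10% negative output gap), a raw deficit of 50 shrinks to
# about 2.5 once the cyclical revenue shortfall is stripped out, so the
# underlying stance is only mildly expansionary.
balance, stance = fiscal_stance(spending=1000, revenue=950, output_gap=-0.10)
print(f"cyclically adjusted balance: {balance:.1f} ({stance})")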
Methods of Funding
Governments spend money on a wide variety of things, from the military and police to services like education and healthcare, as well as transfer payments such as welfare benefits. This expenditure can be funded in a number of different ways:
- Taxation
- Seigniorage, the benefit from printing money
- Borrowing money from the population or from abroad
- Consumption of fiscal reserves
- Sale of fixed assets (e.g., land)
- Borrowing: A fiscal deficit is often funded by issuing bonds, such as treasury bills or consols and gilt-edged securities, which pay interest either for a fixed period or indefinitely. If the interest and capital requirements are too large, a nation may default on its debts, usually to foreign creditors. Public debt, or borrowing, refers to the government borrowing from the public.
- Consuming prior surpluses: A fiscal surplus is often saved for future use, and may be invested in either local currency or any financial instrument that may be traded later once resources are needed; additional debt is not needed. For this to happen, the marginal propensity to save needs to be strictly positive.
Fiscal Policy
This chart compares national spending per citizen for the 20 largest economies, illustrating the range of national fiscal policies.
16.5.2: Deficit Spending, the Public Debt, and Policy Making
Deficit spending and public debt are controversial issues within economic policy debates.
Learning Objective
Describe government debt and how it is formed
Key Points
- Public debt refers to debt owed by a central government, whereas deficit spending refers to spending by a government in excess of its tax receipts.
- As the government draws its income from much of the population, government debt is an indirect debt of the taxpayers.
- Deficit spending may, however, be consistent with public debt remaining stable as a proportion of GDP, depending on the level of GDP growth.
- The mainstream economics position is that deficit spending is desirable and necessary as part of counter-cyclical fiscal policy, but that there should not be a structural deficit.
- The mainstream position is attacked from both sides – advocates of sound finance argue that deficit spending is always bad policy, while some Post-Keynesian economists, particularly Chartalists, argue that deficit spending is necessary, and not only for fiscal stimulus.
Key Terms
- deficit
-
A situation wherein, or amount whereby, spending exceeds government revenue.
- debt
-
The state or condition of owing something to another.
- financing
-
A transaction that provides funds for a business.
Government debt is the debt owed by a central government. In the United States and other federal states, government debt may also refer to the debt of a state or provincial government, municipal or local government. By contrast, the annual government deficit refers to the difference between government receipts and government spending in a single year, that is, the increase in debt over a particular year. Government debt is one method of financing government operations, but it is not the only method. Governments can also create money to monetize their debts, thereby eliminating the need to pay interest. Doing this, however, simply reduces government interest costs rather than truly canceling the debt. Governments usually borrow by issuing securities, government bonds, and bills. Less creditworthy countries sometimes borrow directly from a supranational organization, such as the World Bank, or from international financial institutions.
Public Debt
National Debt Clock outside the IRS office in NYC, July 1, 2010.
Because the government draws its income from much of the population, government debt is an indirect debt of the taxpayers. Government debt can be categorized as internal debt (owed to lenders within the country) and external debt (owed to foreign lenders). Sovereign debt usually refers to government debt that has been issued in a foreign currency. Government debt can also be categorized by duration until repayment is due. For instance, short term debt is generally considered to last for one year or less, while long term debt is for more than ten years. Medium term debt falls between these two boundaries. A broader definition of government debt may consider all government liabilities, including future pension payments and payments for goods and services the government has contracted, but not yet paid.
Government deficit, on the other hand, refers to a situation in which the government’s expenses, including its purchases of goods and services, its transfers (grants) to individuals and corporations, and its net interest payments, exceed its tax revenues. Deficit spending occurs when government spending exceeds tax receipts. Governments usually issue bonds to match their deficits. Bonds can be bought by the central bank through quantitative easing. Otherwise the debt issuance can increase the level of (i) public debt, (ii) private sector net worth, (iii) debt service (interest payments), and (iv) interest rates. Deficit spending may, however, be consistent with public debt remaining stable as a proportion of GDP, depending on the level of GDP growth.
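The observation that deficit spending can coexist with a stable debt-to-GDP ratio is easy to verify numerically. In the hypothetical sketch below, a government that runs a deficit of 3% of GDP every year while nominal GDP grows 4% a year sees its debt ratio drift toward 0.03/0.04 = 0.75 rather than growing without bound; all figures are invented.

# Hypothetical debt dynamics: a permanent deficit with positive nominal GDP
# growth produces a stable, not exploding, debt-to-GDP ratio.

def debt_ratio_path(debt, gdp, deficit_share=0.03, gdp_growth=0.04, years=15):
    path = []
    for _ in range(years):
        debt += deficit_share * gdp   # this year's borrowing
        gdp *= 1 + gdp_growth         # nominal GDP growth
        path.append(debt / gdp)
    return path

# Starting from a 90% debt ratio, the ratio falls toward the steady state of
# deficit_share / gdp_growth = 0.75 despite a deficit every single year.
for year, ratio in enumerate(debt_ratio_path(debt=90.0, gdp=100.0), start=1):
    print(f"year {year:2d}: debt/GDP = {ratio:.3f}")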
Public Debt as a Percentage of GDP
General government debt as a percent of GDP in USA, Japan, and Germany.
Deficit Reduction Debate: Differences between the two Parties
In the United States, taxes are imposed on net income of individuals and corporations by the federal, most state, and some local governments. One of the largest budget expenditures for state governments is Medicaid. The United States public debt is the outstanding amount owed by the federal government of the United States from the issue of securities by the Treasury and other federal government agencies. US public debt consists of two components:
- Debt held by the public includes Treasury securities held by investors outside the federal government, including that held by individuals, corporations, the Federal Reserve System and foreign, state and local governments.
- Debt held by government accounts or intragovernmental debt includes non-marketable Treasury securities held in accounts administered by the federal government that are owed to program beneficiaries, such as the Social Security Trust Fund. Debt held by government accounts represents the cumulative surpluses, including interest earnings, of these accounts that have been invested in Treasury securities.
Democrats and Republicans mean very different things when they talk about tax reform. Democrats argue for the wealthy to pay more via higher income tax rates, while Republicans focus on lowering income tax rates. While both parties discuss reducing tax expenditures (i.e., exemptions and deductions), Republicans focus on preserving lower tax rates for capital gains and dividends, while Democrats prefer educational credits and capping deductions. Political realities make it unlikely that more than $150 billion per year in individual tax expenditures could be eliminated. One area with more common ground is corporate tax rates, where both parties have generally agreed that lower rates and fewer tax expenditures would align the U.S. more closely with foreign competition.
In addition to policies regarding revenue and spending, policies that encourage economic growth are the third major way to reduce deficits. Economic growth offers the “win-win” scenario of higher employment, which increases tax revenue while reducing safety net expenditures for such things as unemployment compensation and food stamps. Other deficit proposals related to spending or revenue tend to take money or benefits from one constituency and give it to others, a “win-lose” scenario. Democrats typically advocate Keynesian economics, which involves additional government spending during an economic downturn. Republicans typically advocate supply-side economics, which involves tax cuts and deregulation to encourage the private sector to increase its spending and investment.
16.5.3: Monetary Policy
Monetary policy is the process by which a country controls the supply of money in order to promote economic growth and stability.
Learning Objective
Recognize the importance of monetary policy for addressing common economic problems
Key Points
- The official goals of monetary policy usually include relatively stable prices and low unemployment.
- A policy is referred to as “contractionary” if it reduces the size of the money supply, increases money supply slowly, or if it raises the interest rate.
- An expansionary policy increases the size of the money supply more rapidly or decreases the interest rate.
- Within almost all modern nations, special institutions exist that have the task of executing the monetary policy, often independently of the executive.
Key Terms
- monetary policy
-
The process by which the government, central bank, or monetary authority manages the supply of money or trading in foreign exchange markets.
- contractionary
-
Tending to reduce the size of the money supply.
- expansionary
-
Tending to increase the total supply of money in the economy.
Monetary policy is the process by which the monetary authority of a country controls the supply of money, often targeting a rate of interest for the purpose of promoting economic growth and stability. The official goals usually include relatively stable prices and low unemployment. Monetary theory provides insight into how to craft optimal monetary policy. Monetary policy is described as either expansionary or contractionary. An expansionary policy increases the total supply of money in the economy more rapidly than usual. A contractionary policy expands the money supply more slowly than usual or even shrinks it. Expansionary policy is traditionally used to try to combat unemployment in a recession by lowering interest rates in the hope that easy credit will entice businesses to expand. Contractionary policy is intended to slow inflation in hopes of avoiding the resulting distortions and deterioration of asset values.
Monetary policy differs from fiscal policy. Fiscal policy refers to taxation, government spending, and associated borrowing.
Monetary policy uses a variety of tools to influence outcomes such as economic growth, inflation, and exchange rates with other currencies, and to control unemployment. When currency is under a monopoly of issuance, or when there is a regulated system of issuing currency through banks tied to a central bank, the monetary authority has the ability to alter the money supply and therefore influence the interest rate (to achieve policy goals). Monetary policy in its modern form began in the late 19th century, when it was used to maintain the gold standard.
A policy is referred to as “contractionary” if it reduces the size of the money supply, increases the money supply only slowly, or raises the interest rate. An expansionary policy increases the size of the money supply more rapidly or decreases the interest rate. Furthermore, monetary policies are described as “accommodative” if the interest rate set by the central monetary authority is intended to create economic growth, “neutral” if they are intended neither to create growth nor to combat inflation, and “tight” if they are intended to reduce inflation.
There are several monetary policy tools available to achieve these ends, including increasing interest rates by fiat, reducing the monetary base, and increasing reserve requirements. All of these tools have the effect of contracting the money supply and, if reversed, expanding it. Since the 1970s, monetary policy has generally been formed separately from fiscal policy. Even prior to the 1970s, the Bretton Woods system ensured that most nations formed the two policies separately.
Within most modern nations, special institutions (such as the Federal Reserve System in the United States, the Bank of England, the European Central Bank, the People’s Bank of China, the Reserve Bank of India, and the Bank of Japan) exist which have the task of executing the monetary policy, often independently of the executive. In general, these institutions are called central banks and usually have other responsibilities, such as supervising the smooth operation of the financial system.
Federal Reserve System
The Federal Reserve System acts as the central mechanism for federal intervention in the U.S. economy.
The primary tool of monetary policy is open market operations. This entails managing the quantity of money in circulation through the buying and selling of various financial instruments, such as treasury bills, company bonds, or foreign currencies. All of these purchases or sales result in more or less base currency entering or leaving market circulation.
Usually, the short term goal of open market operations is to achieve a specific short term interest rate target. In other instances, monetary policy might entail the targeting of a specific exchange rate relative to some foreign currency or gold. For example, in the case of the United States, the Federal Reserve targets the federal funds rate, which is the rate at which member banks lend to one another overnight. However, the monetary policy of China is to target the exchange rate between the Chinese renminbi and a basket of foreign currencies.
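To illustrate what targeting a short-term interest rate can look like in stylized form, the sketch below implements a Taylor-type rule, a textbook formula in which the target rate rises with inflation above a target and with a positive output gap. This is an illustrative rule with invented coefficients, not the Federal Reserve's actual procedure.

# Taylor-type interest rate rule (illustrative coefficients only).

def target_rate(inflation, output_gap, neutral_real_rate=2.0,
                inflation_target=2.0, a_pi=0.5, a_y=0.5):
    """Nominal rate rises with inflation above target and a positive output gap."""
    return (neutral_real_rate + inflation
            + a_pi * (inflation - inflation_target)
            + a_y * output_gap)

# High inflation and an overheating economy imply a contractionary (higher)
# target rate; a recession with low inflation implies an expansionary (lower) one.
print(target_rate(inflation=4.0, output_gap=2.0))   # 8.0
print(target_rate(inflation=1.0, output_gap=-3.0))  # 1.0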
The other primary means of conducting monetary policy include: (i) Discount window lending (lender of last resort); (ii) Fractional deposit lending (changes in the reserve requirement); (iii) Moral suasion (cajoling certain market players to achieve specified outcomes); (iv) “Open mouth operations” (talking monetary policy with the market).
16.5.4: Income Security Policy and Policy Making
Income security policy is designed to provide a population with income at times when they are unable to care for themselves.
Learning Objective
Define income security policy in the United States
Key Points
- Income maintenance is based on a combination of five main types of program: social insurance, means-tested benefits, non-contributory benefits, discretionary benefits, and universal or categorical benefits.
- The fact that a compulsory government program, not the private market, provides unemployment insurance can be explained using the concepts of adverse selection and moral hazard.
- The amount of support is enough to cover basic needs, and eligibility is often subject to a comprehensive and complex assessment of an applicant’s social and financial situation.
Key Terms
- demogrants
-
Non-contributory benefits given to whole sections of the population without a test of means or need.
- discretion
-
The freedom to make one’s own judgements
Income security policy is usually applied through various programs designed to provide a population with income at times when they are unable to care for themselves. Income maintenance is based on a combination of five main types of program:
- Social insurance.
- Means-tested benefits. This is financial assistance provided for those who are unable to cover basic needs (such as food, clothing, and housing) due to poverty or lack of income because of unemployment, sickness, disability, or caring for children. While assistance is often in the form of financial payments, those eligible for social welfare can usually access health and educational services free of charge. The amount of support is enough to cover basic needs and eligibility is often subject to a comprehensive and complex assessment of an applicant’s social and financial situation.
- Non-contributory benefits. Several countries have special schemes, administered with no requirement for contributions and no means test, for people in certain categories of need (for example, veterans of armed forces, people with disabilities, and very old people).
- Discretionary benefits. Some schemes are based on the discretion of an official, such as a social worker.
- Universal or categorical benefits, also known as demogrants. These are non-contributory benefits given to whole sections of the population without a test of means or need, such as family allowances or the public pension in New Zealand (known as New Zealand Superannuation).
That a compulsory government program, not the private market, provides unemployment insurance can be explained using the concepts of adverse selection and moral hazard.
Adverse selection refers to the fact that “workers who have the highest probability of becoming unemployed have the highest demand for unemployment insurance.” Adverse selection causes profit maximizing private insurance agencies to set high premiums for the insurance because there is a high likelihood they will have to make payments to the policyholder. High premiums exclude many individuals who otherwise might purchase the insurance. “A compulsory government program avoids the adverse selection problem. Hence, government provision of UI has the potential to increase efficiency. However, government provision does not eliminate moral hazard.”
“At the same time, those workers who managed to obtain insurance might experience more unemployment than otherwise would have been the case.” The private insurance company would have to determine whether an employee became unemployed through no fault of their own, which can be difficult to establish. Incorrect determinations could result in the payout of significant amounts for fraudulent claims, or alternately in the failure to pay legitimate claims. This leads to the rationale that if the government could solve either problem, then government intervention would increase efficiency.
16.5.5: The Changing Federal Role in the Economy
The role of the federal government in the economy has been a central debate among economists and political scientists for two centuries.
Learning Objective
Explain the role and the historical origins of the Federal Reserve System in the early 20th century
Key Points
- In the United States, the Federal Reserve System (also known as the Federal Reserve, and informally as the Fed) serves as the central mechanism for federal intervention in, and disentanglement from, the economy.
- Over time, the roles and responsibilities of the Federal Reserve System have expanded and its structure has evolved.
- Events such as the Great Depression were major factors leading to changes in the system.
Key Terms
- monetary policy
-
The process by which the government, central bank, or monetary authority manages the supply of money or trading in foreign exchange markets.
- bank regulation
-
Bank regulations are a form of government regulation which subject banks to certain requirements, restrictions and guidelines.
The role of the federal government in the economy has been a central debate among economists and political scientists for over two centuries. Classical liberal and right-libertarian arguments favor a limited role or no role for the federal government in the economy, while welfare economics argues for an increased role.
In the United States, the Federal Reserve System (also known as the Federal Reserve, and informally as the Fed) serves as the central mechanism for federal intervention in, and disentanglement from, the economy. As the central banking system of the United States, the Fed was created on December 23, 1913, with the enactment of the Federal Reserve Act, largely in response to a series of financial panics, particularly a severe panic in 1907. Over time, the roles and responsibilities of the Federal Reserve System have expanded, and its structure has evolved. Events such as the Great Depression were major factors leading to changes in the system.
Federal Reserve System
The Federal Reserve System acts as the central mechanism for federal intervention in the U.S. economy.
Congress established three key objectives for monetary policy in the Federal Reserve Act: maximum employment, stable prices, and moderate long-term interest rates. The first two objectives are sometimes referred to as the Federal Reserve’s dual mandate. Its duties have expanded over the years and today, according to official Federal Reserve documentation, include conducting the nation’s monetary policy, regulating banks, maintaining the stability of the financial system, and providing financial services to depository institutions, the U.S. government, and foreign official institutions. The Fed also conducts research into the economy and releases numerous publications, such as the Beige Book.
16.5.6: Politics and the Great Recession of 2008
Global political instability is rising fast due to the global financial crisis and is creating new challenges that need to be managed.
Learning Objective
Explain the causes and consequences of the 2008-2012 Global Recession
Key Points
- Economic weakness could lead to political instability in many developing nations.
- Globally, mass protest movements have arisen in many countries as a response to the economic crisis.
- The bursting of the U.S. housing bubble, which peaked in 2006, caused the values of securities tied to U.S. real estate prices to plummet, damaging financial institutions globally.
- The U.S. Senate’s Levin–Coburn Report asserted that the crisis was the result of “high risk, complex financial products; undisclosed conflicts of interest; the failure of regulators, the credit rating agencies, and the market itself to rein in the excesses of Wall Street.”
- The 1999 repeal of the Glass–Steagall Act effectively removed the separation between investment banks and depository banks in the United States.
- Most governments in Europe, including those of Greece, Spain, and Italy, have adopted austerity measures that include reduced government spending, the elimination of social programs in education and health, and the deregulation of short-term and long-term capital markets.
Key Terms
- recession
-
A period of reduced economic activity
- financial crisis
-
A period of economic slowdown characterized by declining productivity and the devaluing of financial institutions, often due to reckless and unsustainable money lending.
- secession
-
Secession (derived from the Latin term secessio) is the act of withdrawing from an organization, union, or especially a political entity. Threats of secession also can be a strategy for achieving more limited goals.
The Global Recession
The 2008–2012 global recession was a massive global economic decline that began in December 2007 and took a particularly sharp downward turn in September 2008. No economic recession since the Great Depression of the 1930s has affected economic output, production, and the circulation of capital in the same way. The recession affected the entire world economy, hitting some countries harder than others. It was a major global recession characterized by various systemic imbalances and was sparked by the outbreak of the financial crisis of 2007–2008. The figure below shows the trend in international trade that reflects the recession in 2008.
World Trade
Evolution of international trade since 2000. The dip in 2009 corresponds to the recession of 2008.
Beginning on February 26, 2009, an Economic Intelligence Briefing was added to the daily intelligence briefings prepared for the President of the United States. This addition reflected the assessment of United States intelligence agencies that the global financial crisis presented a serious threat to international stability.
Causes
The bursting of the U.S. housing bubble, which peaked in 2006, caused the values of securities tied to U.S. real estate prices to plummet, damaging financial institutions globally. The financial crisis was triggered by a complex interplay of factors: government policies that encouraged home ownership and provided easier access to loans for subprime borrowers; overvaluation of bundled subprime mortgages based on the theory that housing prices would continue to escalate; questionable trading practices on the part of both buyers and sellers; compensation structures that prioritized short-term deal flow over long-term value creation; and a lack of adequate capital holdings by banks and insurance companies to back the financial commitments they were making.
Several causes of the financial crisis have been proposed, with varying weights assigned by experts. The U.S. Senate’s Levin–Coburn Report asserted that the crisis was the result of “high risk, complex financial products; undisclosed conflicts of interest; the failure of regulators, the credit rating agencies, and the market itself to rein in the excesses of Wall Street.” The 1999 repeal of the Glass–Steagall Act effectively removed the separation between investment banks and depository banks in the United States. Critics argued that credit rating agencies and investors failed to accurately price the risk involved with mortgage-related financial products, and that governments did not adjust their regulatory practices to address 21st-century financial markets. Research into the causes of the financial crisis has also focused on the role of interest rate spreads.
Political Consequences
In March 2009, Business Week stated that global political instability was rising fast due to the global financial crisis and was creating new challenges that needed to be addressed. The Associated Press reported in March 2009 that U.S. Director of National Intelligence Dennis Blair had said the economic weakness could lead to political instability in many developing nations. Even some developed countries have experienced political instability. NPR reports that David Gordon, a former intelligence officer who now leads research at the Eurasia Group, said: “Many, if not most, of the big countries out there have room to accommodate economic downturns without having large-scale political instability if we’re in a recession of normal length. If you’re in a much longer-run downturn, then all bets are off.”
In January 2009, the government leaders of Iceland were forced to call elections two years early after the people of Iceland staged mass protests and clashed with police over the government’s handling of the economy. Hundreds of thousands protested in France against President Sarkozy’s economic policies. Prompted by the financial crisis in Latvia, the opposition and trade unions there organized a rally against the cabinet of premier Ivars Godmanis. The rally gathered some 10,000–20,000 people. In the evening the rally turned into a riot: the crowd moved to the parliament building and attempted to force their way in, but were repelled by police. In late February, many Greeks took part in a massive general strike to protest the economic situation, shutting down schools, airports, and many other services in Greece. Communists in Moscow also rallied to protest the Russian government’s economic plans. Protests have also occurred in China as Western demand for exports has dropped dramatically and unemployment has increased. Beyond these initial protests, the protest movement grew and continued into 2011. In late 2011, the Occupy Wall Street protest took place in the United States, spawning several offshoots that came to be known as the Occupy movement.
In 2012, economic difficulties in Spain increased support for secession movements. In Catalonia, support for the secession movement exceeded 50%, up from 25% in 2010. On September 11, a pro-independence march, which in the past had never drawn more than 50,000 people, pulled in a crowd estimated by city police at 1.5 million.
16.5.7: Business and Labor in the Economy
The relationship between business and labor has been at the center of economic and political theory for the last two centuries.
Learning Objective
Explain the relationship between labor and business in the economy
Key Points
- The late nineteenth century saw many governments starting to address questions surrounding the relationship between business and labor, primarily through labor law or employment law.
- Labor law is the body of laws, administrative rulings, and precedents which address the legal rights of, and restrictions on, working people and their organizations.
- Labor law arose due to the demand for workers to have better conditions, the right to organize, or, alternatively, the right to work without joining a labor union, and the simultaneous demands of employers to restrict the powers of workers’ many organizations and to keep labor costs low.
- Workers’ organizations, such as trade unions, can also transcend purely industrial disputes, and gain political power. The state of labor law at any one time is therefore both the product of, and a component of, struggles between different interests in society.
- Commercial law is the body of law that applies to the rights, relations, and conduct of persons and businesses engaged in commerce, merchandising, trade, and sales. It is often considered to be a branch of civil law and deals with issues of both private law and public law.
Key Terms
- labor law
-
This is the body of laws, administrative rulings, and precedents which address the legal rights of, and restrictions on, working people and their organizations.
- capitalism
-
A socio-economic system based on private property rights, including the private ownership of resources or capital, with economic decisions made largely through the operation of a market unregulated by the state.
- labor union
-
A continuous association of wage earners for the purpose of maintaining or improving the conditions of their employment; a trade union.
Example
- The Fair Labor Standards Act of 1938 set the maximum standard work week at 44 hours; by 1940, this was reduced to 40 hours. A green card entitles legal immigrants to work just like U.S. citizens, without requiring work permits. Despite the 40-hour standard maximum work week, some lines of work require more than 40 hours to complete the tasks of the job. For example, if you prepare agricultural products for market, you can work over 72 hours a week if you want to, but you cannot be required to.
Introduction
The relationship between business and labor has been at the center of some of the major economic and political theories about capitalism over the last two centuries. In his 1867 work, Das Kapital, Karl Marx argued that business and labor were inherently at odds under capitalism, because the motivating force of capitalism lies in the exploitation of labor, whose unpaid work is the ultimate source of profit and surplus value. In order for this tension to be resolved, workers had to take ownership of the means of production and, therefore, of their own labor, a process that Marx had outlined in his other major work, The Communist Manifesto.
The late nineteenth century saw many governments starting to address questions surrounding the relationship between business and labor, primarily through labor law or employment law. Labor law is the body of laws, administrative rulings, and precedents which address the legal rights of, and restrictions on, working people and their organizations. As such, it mediates many aspects of the relationship between trade unions, employers, and employees.
Labor and Business
Labor strikes, such as this one in Tyldesley in the 1926 General Strike in the U.K., represent the often fraught relationship between labor and business.
Labor Law
Labor law arose due to the demand by workers for better conditions, the right to organize, or, alternatively, the right to work without joining a labor union, and the simultaneous demands of employers to restrict the powers of the many organizations of workers and to keep labor costs low. Employers’ costs can increase due to workers organizing to achieve higher wages, or due to laws imposing costly requirements, such as health and safety rules or restrictions on their free choice of whom to hire. Workers’ organizations, such as trade unions, can also transcend purely industrial disputes and gain political power. The state of labor law at any one time is, therefore, both the product of, and a component of, struggles between different interests in society.
Palmer Carpenter
1942 photograph of a carpenter at work on Douglas Dam, Tennessee (TVA).
The Fair Labor Standards Act of 1938 set the maximum standard work week at 44 hours; by 1940, this was reduced to 40 hours. A green card entitles legal immigrants to work just like U.S. citizens, without requiring work permits. Despite the 40-hour standard maximum work week, some lines of work require more than 40 hours to complete the tasks of the job. For example, if you prepare agricultural products for market, you can work over 72 hours a week if you want to, but you cannot be required to work that many hours. If you harvest products, you must get a period of 24 hours off after working up to 72 hours in a seven-day period. There are exceptions to the 24-hour break period for certain harvesting employees, such as those involved in harvesting grapes, tree fruits, and cotton. Professional, clerical (administrative assistant), technical, and mechanical employees cannot be terminated for refusing to work more than 72 hours in a work week. These high-hour ceilings, combined with a competitive job market, often motivate American workers to work more hours than required. American workers consistently take fewer vacation days than their European counterparts and, on average, take the fewest days off of any developed country.
Commercial Law
Commercial law is the body of law that applies to the rights, relations, and conduct of persons and businesses engaged in commerce, merchandising, trade, and sales. It is often considered to be a branch of civil law and deals with issues of both private law and public law.
Chapter 15: Domestic Policy
15.1: The Policy-Making Process
15.1.1: Issue Identification and Agenda Building
The first step of the policy process involves issues being turned into agenda items for policymaking bodies.
Learning Objective
Describe the various ways different issues can become the focus of concerted political attention
Key Points
- Few issues actually make it onto policy agendas but those that do are often a result of public outcry, crises, and the lobbying efforts of important interest groups.
- Issues must become agenda items for policymaking bodies, like legislatures and administrative agencies, in order to proceed into the next stages of the policy process.
- Policy agendas are ephemeral and can easily be replaced by other issues when crises arise.
Key Terms
- public policy
-
The set of policies (laws, plans, actions, behaviors) of a government; plans and methods of action that govern that society; a system of laws, courses of action, and priorities directing a government action.
- agenda setting
-
a theory that describes the news media’s ability to shape which issues their audience thinks are important, based on their frequency and level of coverage
- agenda
-
A temporally organized plan for matters to be attended to.
Many problems exist within the United States, but few make it onto the public policy agenda. Those problems that do move onto the policy agenda must first be identified as salient issues. An issue can be broadly defined as a gap between reality and a constituency’s expectations. The power of the group in question can affect whether an issue moves onto the policy agenda. For example, a problem encountered by a major political campaign donor can move onto the agenda more quickly than a problem encountered by a small interest group without great political clout.
In other instances, issues can move into the public spotlight and be forced onto the policy agenda by the amount of attention and public outcry they receive. The media can be particularly effective in accomplishing this task. For example, widespread reporting on the number of Americans affected by tainted eggs and spinach moved the food safety system onto the policy agenda and resulted in a law that allocated greater authority to the Food and Drug Administration. The media can also keep issues off of the policy agenda by giving the impression that an issue does not require resolution through the policy process.
In addition to the power of certain groups and the media, significant events can act as policy triggers that immediately move issues onto the policy agenda. The terrorist attack in New York City on 9/11/2001 is an example of an event that brought terrorism, national security, weapons of mass destruction, and relations toward Iraq to the forefront of the national and international policy agendas.
In all of the aforementioned examples, issues have a high likelihood of becoming agenda items. Issues must become agenda items for some policymaking body in order to enter the policy cycle. These policymaking bodies may be a legislature (e.g., a city council) or an administrative agency (e.g., a health department).
It is important to note, however, that not all issues that move onto policy agendas complete the policy process to become laws. Indeed, agendas are subject to timing and can easily be displaced by other issues when crises occur. For example, Obama’s planned policy to loosen restrictions on coastal drilling was dropped after the BP oil spill occurred in the Gulf of Mexico. Those issues that withstand any significant crisis, though, will move onto the next stage of the policy process, formulation.
BP Oil Spill
The BP oil spill is an example of a crisis that changed the national policy agenda by reversing Obama’s planned policy to loosen restrictions on coastal drilling.
15.1.2: Policy Formulation
Formulation is the second stage of the policy process and involves the proposal of solutions to agenda issues.
Learning Objective
Identify the considerations that shape the formulation of first-best policy
Key Points
- Formulation often provides policymakers with several choices for resolving agenda items.
- Effective policy formulation consists of analysis that identifies the most effective policies, together with political authorization.
Key Terms
- formulation
-
the second stage of the policy process in which policymakers propose courses of action for addressing agenda issues.
- policymaker
-
one involved in the formulation of policies, especially politicians, lobbyists, and activists.
Formulation of policy consists of policymakers discussing and suggesting approaches to correcting the problems that have been raised as part of the agenda. Sometimes it is necessary to choose from among multiple potential paths forward. The issue of traffic safety, for example, has been addressed by various policies over time: more highways were built in the 1950s, safer cars were required in the 1960s, and jailing drunk drivers was the favored solution in the 1980s and 1990s.
1950s Highways
Building highways is one example of a policy that was used to address the issue of traffic safety.
The policy ultimately chosen to solve the issue at hand depends on two factors. First, the policy must be a valid way of solving the issue in the most efficient and feasible way possible; effective formulation involves analysis and the identification of alternatives for solving issues. Second, the policy must be politically feasible, which is usually accomplished through majority building in a bargaining process. Policy formulation, therefore, consists of analysis that identifies the most effective policies, together with political authorization.
15.1.3: Policy Adoption
Policy adoption is the third phase of the policy process in which policies are adopted by government bodies for future implementation.
Learning Objective
Identify which groups can expedite or retard the adoption of policy
Key Points
- Congress plays a minimal role in policy adoption since it cannot initiate policies the way the president can.
- The president has the sole ability to initiate new national policies.
Key Terms
- policy
-
A principle of behaviour, conduct etc. thought to be desirable or necessary, especially as formally expressed by a government or other authoritative body.
- adoption
-
The choosing and making that to be one’s own which originally was not so; acceptance; as, the adoption of opinions.
The Process of Adoption
Formulated policies have to be adopted by relevant institutions of government in order to be put into effect. Adoption can be affected by the same factors that influence what issues move into the earlier phase of agenda building. For instance, policies that address the changed circumstances brought on by crises can often be adopted immediately. Meanwhile, powerful interest groups can use their political influence to determine what policies are adopted.
The media can also play a key role in policy adoption. When reporting and commentary are unbiased, they can provide a forum in which debate over various cases for policy adoption takes place. When the media displays a favorable bias, it can enhance a policy proposal’s likelihood of adoption. On the other hand, an unfavorable media bias may undermine a policy proposal. For example, unfavorable media coverage undermined the George W. Bush administration’s proposals to change Social Security, and negative responses killed the Clinton administration’s health care proposal.
George W. Bush Social Security Discussion
Negative media attention toward George W. Bush’s plan for Social Security prevented policy adoption.
Governors or mayors can adopt policies to bring about change on a state or local level. However, the president has the sole responsibility of determining what policies are adopted on a nationwide level. Congress has some influence over policy adoption, as it must approve the president’s actions. Once the relevant government bodies adopt policies, they move into the next phase of the policy process, policy implementation.
15.1.4: Policy Implementation
Policy implementation is the fourth phase of the policy cycle in which adopted policies are put into effect.
Learning Objective
Describe how policies are implemented and the challenges that accompany the process
Key Terms
- policy
-
A principle of behaviour, conduct etc. thought to be desirable or necessary, especially as formally expressed by a government or other authoritative body.
- implementation
-
The process of moving an idea from concept to reality. In business, engineering and other fields, implementation refers to the building process rather than the design process.
How Policies are Implemented
The implementation of policy refers to actually enacting the proposed solutions. Whether a given policy has been implemented successfully depends on three major criteria:
- First, a policy needs to be communicated from its creator (e.g., a local official or the President) to the relevant governing body within the bureaucracy that has the power to enact it. Thus, a policy designed to enforce traffic safety by cutting down on the number of drunk drivers would be passed down to law enforcement officials for implementation. When no existing agency has the capability to carry out a given policy, new agencies must be established and staffed. This is reflected most clearly in the “alphabet soup” agencies established by Franklin D. Roosevelt under the New Deal.
- Second, a policy needs to be communicated clearly and be easy to interpret if it is to be implemented effectively. Too much ambiguity at this stage can lead to involvement by the judiciary, which will force legislators to clarify their ends and means for policy implementation. The judiciary may also overrule the implementation of such policies.
- Finally, the resources applied to implementation must integrate with existing processes and agencies, without causing extensive disruption, competition, or conflict.
In addition to the aforementioned elements, policy implementation can be further complicated when policies are passed down to agencies without a great deal of direction. Policy formulation is often the result of compromise and of the symbolic uses of politics. As a result, implementation leaves the agencies that administer policies with a large amount of discretion, and often with considerable confusion. In addition, bureaucratic incompetence, ineptitude, and scandals may complicate the policy implementation process.
The above issues with policy implementation have led some scholars to conclude that new policy initiatives will either fail to get off the ground or will take considerable time to be enacted. The most surprising aspect of the policy process may be that policies are implemented at all.
15.1.5: Policy Evaluation
Policies should be evaluated once in place, but they tend to become entrenched over time and often do not receive any kind of evaluation.
Learning Objective
Summarize two key challenges in assessing policies
Key Terms
- empirical
-
Pertaining to, derived from, or testable by observations made using the physical senses or using instruments which extend the senses.
- policy
-
A principle of behaviour, conduct etc. thought to be desirable or necessary, especially as formally expressed by a government or other authoritative body.
Introduction
Policies may be evaluated according to a number of standards. They may be informally evaluated according to uncritical analysis, such as anecdotes and stories. Policies may also be substantively evaluated through careful, honest feedback from those affected by the policies. More formal research can provide empirical evidence regarding the effectiveness of policies. Finally, scientific research provides both comparative and statistical evaluations of whether policies produce clear causal results.
Policy evaluation can take place at different times. Administrators seeking to improve operations may assess policies as they are being implemented. After policies have been implemented they can be further evaluated to understand their overall effectiveness.
In spite of the many ways policies may be evaluated, they are often not evaluated at all. Formal and scientific research is time consuming, complicated to design and implement, and costly. While more informal evaluations focused on feedback and anecdotes are more accessible, they also tend to be contaminated with bias.
Challenges in Assessing Policies
Policies can be difficult to assess. Some policies aim to accomplish broad conceptual goals that are subject to different interpretations. Healthy air quality, for example, can be difficult to define in ways that will be universally accepted. Policies may also contain multiple objectives that may not be compatible. For example, two of the objectives of the 1996 Telecommunications Act were creating jobs and reducing cable rates. If revenues are insufficient, companies must either cut jobs to keep rates low or raise rates to preserve jobs. Policies that do have compatible objectives can still be difficult to evaluate when only some of the objectives are accomplished: one person may deem the policy successful for accomplishing some of its objectives, while another may deem it unsuccessful for not accomplishing all of them.
Air Quality
Broad conceptual goals, like healthy air quality, are difficult to evaluate since people may have different opinions on what “healthy” entails.
In general, public policies become entrenched over time and are difficult to terminate even if they are evaluated by various standards.
15.1.6: Policy Making and Special Interests
Interest groups that can advance their cause to the policymaking process tend to possess certain key traits.
Learning Objective
Describe the formation of special-interest groups and their role in the creation of policy
Key Points
- Special interest groups can range from think tanks, certain demographic sectors of the U.S. population, business groups, environmentalists, and foreign governments.
- Many interest groups compete for the attention of policymakers. Conflict between interest groups is, therefore, common.
- Key factors possessed by powerful special interest groups include: a large number of constituents, organizational skills, access to resources, links to government officials that will represent their cause, sheer intensity, and skill in accessing and convincing policymakers.
Key Terms
- lobbying
-
Lobbying (also lobby) is the act of attempting to influence decisions made by officials in the government, most often legislators or members of regulatory agencies.
- demographic
-
A demographic criterion: a characteristic used to classify people for statistical purposes, such as age, race, or gender.
Many different types of groups attempt to influence United States policy. For instance, certain demographic groups may favor policies that benefit them the most. Other groups may create formal institutions, known as think tanks, to advance their causes. Foreign governments can also behave as interest groups when it comes to U.S. foreign policy. For example, Saudi Arabia launched a lobbying campaign to improve its image in the United States after it came under criticism for failing to crack down on terrorist groups following the 9/11 attacks in New York.
Because of the wide variety of special interest groups, conflict between groups on an issue is common. The debate over creating free trade areas, like the North American Free Trade Agreement (NAFTA) , placed business groups in competition with labor and environmental groups in garnering the attention of policymakers toward their divergent causes.
NAFTA Member Countries
The NAFTA initialing ceremony, in October 1992. Events, such as the signing of the North American Free Trade Agreement (NAFTA), highlight the differences among special interest groups and the competition that takes place between them to capture the attention of policymakers.
Those interest groups that are able to advance their causes to the policy agenda tend to possess certain key factors. Political scientist Charles O. Jones has organized these factors into six categories. First, the number of people affected plays a role in what policies are adopted; for example, senior citizens often succeed in moving their demands onto the policy agenda because of their large numbers and inclination to vote. The extent to which constituents are organized, and the resources available to them, are further factors that influence whether interest groups can advance their causes to the policy agenda. A fourth factor is whether government representatives exist who can link interest groups and their problems with governmental processes.
The skills that interest groups utilize to advance their causes are also important in accessing the policymaking process. Many organizations only employ the most experienced and capable lobbyists to represent their causes. Meanwhile, the sheer intensity of interest groups can make up for inadequate resources or numbers of constituents.
15.2: Health Care Policy
15.2.1: Health Care Policy
The debate over access to health care in the United States concerns whether or not the government should provide universal health care.
Learning Objective
Discuss the structure and institutions responsible for creating health care policy
Key Points
- Health care facilities are largely owned and operated by private sector businesses. The government primarily provides health insurance for public sector employees.
- Active debate about health care reform in the United States concerns questions of the right to health care, access, fairness, efficiency, cost, choice, value, and quality. Some have argued that the system does not deliver equivalent value for the money spent.
- Publicly funded health care programs help provide for the elderly, disabled, military service families and veterans, children, and the poor. Federal law ensures public access to emergency services regardless of ability to pay.
- The resulting economy of scale in providing health care services appears to enable a much tighter grip on costs. The U.S., as a matter of oft-stated public policy, largely does not regulate the prices of services from private providers, on the assumption that the private sector can do this better.
Key Terms
- monopolistic
-
Acting in the manner of a monopoly.
- socialize
-
To take into collective or governmental ownership
Background
Health care policy can be defined as the decisions, plans, and actions that are undertaken to achieve specific health care goals within a society. According to the World Health Organization (WHO), an explicit health policy achieves several things: it defines a vision for the future; it outlines priorities and the expected roles of different groups; and it builds consensus and informs people.
In many countries, individuals must pay for health care directly, out of pocket, in order to gain access to health care goods and services. In these countries, individuals also pay private sector players in the medical and pharmaceutical industries to develop research. The planning and production of health human resources is distributed among labor market participants.
Health care in the United States is provided by many different organizations. Health care facilities are largely owned and operated by private sector businesses, while the government primarily provides health insurance for public sector employees. Between 60 and 65 percent of health care provision and spending comes from programs such as Medicare, Medicaid, TRICARE, the Children’s Health Insurance Program, and the Veterans Health Administration. Most of the population under 65 is insured by their own or a family member’s employer, some buy health insurance on their own, and the remainder are uninsured.
The current active debate about health care reform in the United States concerns questions of the right to health care, access, fairness, efficiency, cost, choice, value, and quality. Some have argued that the system does not deliver equivalent value for the money spent: the United States pays roughly twice as much for health care as other wealthy nations, yet lags behind them in measures such as infant mortality and life expectancy. In fact, the United States has a higher infant mortality rate than most of the world’s industrialized nations.
The Role of Government in the Health Care Market
Numerous publicly funded health care programs help provide for the elderly, the disabled, military service families, veterans, children, and the poor. Additionally, federal law ensures public access to emergency services regardless of the ability to pay. However, a system of universal health care has not been implemented nationwide. Nevertheless, as the Organization for Economic Co-operation and Development (OECD) has pointed out, the total U.S. public expenditure for this limited population would, in most other OECD countries, be enough for the government to provide primary health insurance for the entire population. Although the federal Medicare program and the federal-state Medicaid programs possess some monopolistic purchasing power, the highly fragmented buying side of the U.S. health system is relatively weak by international standards, and in some areas, suppliers such as large hospital groups have a virtual monopoly on the supply side. In most OECD countries, there is a high degree of public ownership and public finance. The resulting economy of scale in providing health care services appears to enable a much tighter grip on costs. The United States, as a matter of oft-stated public policy, largely does not regulate the prices of services from private providers, on the assumption that the private sector can do this better.
Examples of Health Care Reform
Massachusetts has adopted a universal health care system through the Massachusetts 2006 Health Reform Statute. It mandates that all residents who can afford to purchase health insurance must do so, and it provides subsidized insurance plans so that nearly everyone can afford health insurance. The law also provides a “Health Safety Net Fund” to pay for necessary treatment for those who cannot find affordable health insurance or for those who are not eligible. In July 2009, Connecticut similarly passed a law called SustiNet, with the goal of achieving health-care coverage for 98 percent of its residents by 2014.
Advocates for single-payer health care often point to other countries, where national government-funded systems produce better health outcomes at lower cost. A single-payer universal health care plan for an entire population can be financed from a pool to which many parties, such as employees, employers, and the state, have contributed. Opponents deride this type of system as socialized medicine, and it was not among the favored reform options of Congress or the President in either the Clinton or the Obama reform efforts. It has been pointed out, however, that socialized medicine is a system in which the government owns the means of providing medicine; Britain’s health care system is an example of this socialized system, along with the Veterans Health Administration program in America. Medicare is an example of a mostly single-payer system, as is France’s system. Both of these systems have private insurers to choose from, but the government is the dominant purchaser.
Physicians for National Healthcare Poster
Single payer health care poster about the United States National Health Care Act.
15.2.2: Medicaid and Medicare
Medicaid is a health program for people and families with low incomes; Medicare is for people ages 65 and older and for people with disabilities.
Learning Objective
Compare and contrast Medicaid and Medicare as social programs provided by the U.S. government
Key Points
- Medicaid is the United States’ health program for certain people and families with low incomes and resources. It is a means-tested program that is jointly funded by the states and federal government; however, it is managed by the states.
- Medicare is a national social insurance program, administered by the U.S. federal government since 1965. It guarantees access to health insurance for Americans ages 65 and older in addition to younger people with disabilities and people with end stage renal disease.
- Beginning in the 1990s, many states received waivers from the federal government to create Medicaid managed care programs. Under managed care, Medicaid recipients are enrolled in a private health plan, which receives a fixed monthly premium from the state.
- Medicare Advantage plans are required to offer coverage that meets or exceeds the standards set by the original Medicare program, but they do not have to cover every benefit in the same way.
- Unlike Medicare, which is solely a federal program, Medicaid is a joint federal-state program. Each state operates its own Medicaid system, but this system must conform to federal guidelines in order for the state to receive matching funds and grants.
- Medicare is an earned entitlement. Medicare entitlement is most commonly based on a record of contributions to the Medicare fund.
Key Terms
- per capita
-
shared equally among all individuals.
- disabled
-
those who are disabled, regarded collectively or as a social group.
Background
Medicaid is the United States’ health program for qualified individuals and families with low incomes and resources. It is a means-tested program that is jointly funded by the states and federal government; however, it is managed by the states. Medicaid serves people who are U.S. citizens or legal permanent residents, including low-income adults, their children, and people with certain disabilities. Poverty alone does not necessarily qualify someone for Medicaid. Medicaid is the largest source of funding for medical and health-related services for people with limited incomes in the United States.
Medicare is a national social insurance program, administered by the U.S. federal government since 1965. It guarantees access to health insurance for Americans ages 65 and older in addition to younger people with disabilities and people with end-stage renal disease. Medicare has a different social role from for-profit private insurers, which manage their risk portfolio to maximize profitability by denying coverage to those they anticipate will need it. As a social insurance program, Medicare spreads the financial risk associated with illness across society in order to protect everyone. In 2008, the U.S. Federal Government spent $391,266,000,000 on Medicare.
Features
Beginning in the 1990s, many states received waivers from the federal government to create Medicaid managed care programs. Under managed care, Medicaid recipients are enrolled in a private health plan, which receives a fixed monthly premium from the state. The health plan is then responsible for providing for all or most of the recipient’s healthcare needs. Today, all but a few states use managed care to provide coverage to a significant proportion of Medicaid enrollees. Nationwide, roughly 60% of enrollees are enrolled in managed care plans. Core eligibility groups of poor children and parents are most likely to be enrolled in managed care, while the elderly and disabled eligibility groups more often remain in traditional “fee for service” Medicaid.
Some states operate a program known as the Health Insurance Premium Payment Program (HIPP). This program allows a Medicaid recipient to have private health insurance paid for by Medicaid. As of 2008, only a few states had premium assistance programs and enrollment was relatively low. However, interest in this approach remained high.
Medicare Advantage plans are required to offer coverage that meets or exceeds the standards set by the original Medicare program, but they do not have to cover every benefit in the same way. If a plan chooses to pay less than Medicare for some benefits, like skilled nursing facility care, the savings may be passed along to consumers by offering lower co-payments for doctor visits. Medicare Advantage plans use a portion of the payments they receive from the government for each enrollee to offer supplemental benefits. All plans limit their members’ annual out-of-pocket spending on medical care, with a yearly limit of $6,700. Some plans offer dental coverage, vision coverage, and other services not covered by Medicare Parts A or B. These plans are a good value for the health care dollar, if an individual wants to use the provider included in the plan’s network.
Comparing Medicare and Medicaid
Unlike Medicare, which is solely a federal program, Medicaid is a joint federal-state program. Each state operates its own Medicaid system, but this system must conform to federal guidelines in order for the state to receive matching funds and grants. The matching rate provided to states is determined using a federal matching formula (called Federal Medical Assistance Percentages), which generates payment rates that vary from state to state, depending on each state’s respective per capita income. The wealthiest states only receive a federal match of 50% while poorer states receive a larger match.
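As a sketch of how the matching works, the statutory formula, as commonly cited (the exact parameters are set in the Social Security Act and are not given in this text), is approximately: FMAP = 1 − 0.45 × (state per capita income ÷ national per capita income)², subject to a floor of 50%. On this formula, a state with per capita income equal to the national average would receive a match of about 55%, while a state with per capita income more than roughly 5% above the national average falls to the 50% floor, which is consistent with the statement above that the wealthiest states receive only a 50% match.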
Medicaid funding has become a major budgetary issue for many states over the last few years. On average, states spend 16.8% of their general funds on the program. If the federal match expenditure is also counted, the program usually takes up 22% of each state’s budget.
U.S. Healthcare GDP
Spending on U.S. healthcare as a percentage of gross domestic product (GDP).
Medicare is an earned entitlement. Entitlement is most commonly based on a record of contributions to the Medicare fund. As a result, it is a form of social insurance that makes it feasible for people to pay for insurance against sickness in old age: they contribute to the fund when they are young and able to work, and Medicare offers assurance that contributing individuals will receive benefits when they are older and no longer working. Some people will pay in more than they receive back and others will get back more than they paid in, but this is the case with any form of insurance, public or private.
15.2.3: Universal Coverage
Universal healthcare coverage provides healthcare and financial protection to all citizens; however the United States has not adopted it.
Learning Objective
Explain how universal healthcare works as a national health care policy and the arguments made for and against it
Key Points
- Universal healthcare–sometimes referred to as universal health coverage, universal coverage, universal care or social health protection–usually refers to a healthcare system that provides healthcare and financial protection to all citizens.
- Proponents of healthcare reforms that call for the expansion of government involvement in order to achieve universal healthcare argue that the need to provide profits to investors in a predominantly free-market health system, and the additional administrative spending, tends to drive up costs.
- The United States has not adopted a single-payer system for healthcare. The term single-payer healthcare is used in the United States to describe a funding mechanism meeting the costs of medical care from a single fund.
Key Term
- coverage
-
The extent of protection provided by an insurance plan; in health policy, the range of people and services included under a health program.
Background
Universal healthcare–sometimes referred to as universal health coverage, universal coverage, universal care, or social health protection–usually refers to a healthcare system that provides healthcare and financial protection to all citizens. It is organized to provide a specified package of benefits to all members of a society with the end goal of providing financial risk protection, improved access to health services, and improved health outcomes. Universal healthcare is not a one-size-fits-all concept, nor does it imply unlimited coverage for all people. Three critical dimensions can determine universal healthcare: who is covered, what services are covered, and how much of the cost is covered.
Universal healthcare systems vary according to the extent of government involvement in providing care and/or health insurance. In some countries, such as the United Kingdom, Spain, Italy, and the Nordic countries, the government has a high degree of involvement in the commissioning and delivery of healthcare services. In these countries, access is based on residence rights, and not on the purchase of insurance. Other countries have a much more pluralistic delivery system of obligatory health insurance, with contributory rates based on salaries or income and usually funded jointly by employers and beneficiaries. Sometimes the healthcare funds are derived from a combination of insurance premiums, salary-based mandatory contributions by employees and/or employers to regulated sickness funds, and government taxes.
Total Health Expenditure Per Capita, US Dollars
This image depicts the total healthcare services expenditure per capita, in U.S. dollars PPP-adjusted, for the nations of Australia, Canada, France, Germany, Japan, Switzerland, the United Kingdom, and the United States with the years 1995, 2000, 2005, and 2007 compared.
Proponents of Universal Healthcare in the United States
Proponents of healthcare reforms that call for the expansion of government involvement in order to achieve universal healthcare argue that the need to provide profits to investors in a predominantly free-market health system, and the additional administrative spending, tends to drive up costs and lead to more expensive healthcare.
According to economist and former US Secretary of Labor, Robert Reich, only a “big, national, public option” can force insurance companies to cooperate, share information, and reduce costs. Scattered, localized, “insurance cooperatives” are too small to do that and are “designed to fail” by the moneyed forces opposing Democratic healthcare reform.
The United States has not adopted a single-payer system for healthcare. The term “single-payer healthcare” is used in the United States to describe a funding mechanism that meets the costs of medical care from a single fund. Although the fund holder is usually the government, some forms of single-payer employ a public-private system.
15.2.4: Health Care Reform Under Obama
There have been many changes in healthcare reform, but as of 2012, President Obama has introduced some of the most controversial changes.
Learning Objective
Explain the tenets of President Obama’s Health Care reform legislation
Key Points
- As of 2012, the healthcare legislation remains controversial, with some states challenging it in federal court. There has also been opposition from some voters. In June 2012, in a 5–4 decision, the U.S. Supreme Court found the law to be constitutional.
- Starting in 2014, the law will prohibit insurers from denying coverage (see guaranteed issue) to sicker applicants or imposing special conditions such as higher premiums or payments (see community rating).
- Also beginning in 2014, the law will generally require uninsured individuals to buy government-approved health insurance, the individual mandate.
- Analysts argued that the insurance premium structure may shift more costs onto younger, healthier people. About $43 billion was spent in 2008 providing non-reimbursed emergency services for the uninsured. Act supporters argued that these costs increased the average family’s insurance premiums.
Key Term
- premium
-
The amount paid periodically to an insurer for an insurance policy.
Background
In March 2010, President Obama gave a speech at a rally in Pennsylvania explaining the necessity of health insurance reform. The speech called on Congress to hold a final up or down vote on reform. As of 2012, the legislation remains controversial, with some states challenging it in federal court. There has also been opposition from some voters. In June 2012, in a 5–4 decision, the U.S. Supreme Court found the law to be constitutional.
Expanding Medicaid and Subsidizing Insurance
The law includes health-related provisions that take effect over several years. The law intends to expand Medicaid eligibility to people making up to 133% of the federal poverty level (FPL). It would subsidize insurance premiums for people making up to 400% of the FPL ($88,000 for a family of four in 2010), so that their maximum “out-of-pocket” payment for annual premiums would be on a sliding scale from 2% to 9.5% of income. The law also intends to provide incentives for businesses to offer health care benefits, prohibit denial of coverage and denial of claims based on pre-existing conditions, establish health insurance exchanges, prohibit insurers from establishing annual coverage caps, and provide support for medical research.
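Using the premium-subsidy figures above, a rough worked illustration: a family of four at 400% of the FPL ($88,000 in 2010) would have its annual premium contribution capped at about 9.5% of income, or roughly 0.095 × $88,000 ≈ $8,360; a similar family near 133% of the FPL (roughly $29,000) would be capped at about 2% of income, or roughly $585, with subsidies covering premium costs above those caps.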
Guaranteed Issue, Community Rating, Individual Mandate
Starting in 2014, the law will prohibit insurers from denying coverage (see guaranteed issue) to sicker applicants or imposing special conditions such as higher premiums or payments (see community rating). Health care expenditures are highly concentrated, with the most expensive 5% of the population accounting for half of aggregate health care spending, while the bottom 50% of spenders account for only 3% of health care spending. This means that what insurance companies gain from avoiding the sick greatly outweighs any possible gains from managing their care. As a result, insurers devoted resources to such avoidance at a direct cost to effective care management, which is against the interests of the insured. Instead of providing health security, the health insurance industry had, since the 1970s, begun to compete by becoming risk differentiators: they sought to insure only people with good or normal health profiles and to exclude those considered likely to be or become unhealthy and, therefore, less profitable. According to a study from Cambridge Hospital, Harvard Law School, and Ohio University, 62% of all 2007 personal bankruptcies in the United States were driven by medical incidents, and 75% of those filers had health insurance.
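To see why avoiding the sick was so much more profitable than managing their care, consider a stylized illustration based on the concentration figures above (hypothetical round numbers): in a population of 100 people spending $100 in total, the top 5 spenders account for about $50 (roughly $10 each), while the bottom 50 account for about $3 (around 6 cents each). Screening out a single high-cost applicant therefore saves an insurer far more than it could plausibly save by managing the care of dozens of low-cost enrollees.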
Also beginning in 2014, the law will generally require uninsured individuals to buy government-approved health insurance (the individual mandate). Government-run exchanges may present information to facilitate comparison among competing plans, if available, but previous attempts at creating similar exchanges produced only mixed results. This requirement is expected to reduce the number of uninsured from 19% of all residents in 2010 to 8% by 2016. Some analysts expect that the remaining 8% of uninsured will consist mostly of illegal immigrants (about 5% of residents), with the rest paying the fine unless exempted. Whether or not this is true remains unclear from the presently available data.
Uninsured and Uninsured Rate (1987 to 2008)
This image depicts the number of uninsured Americans and the uninsured rate from 1987 to 2008.
Some analysts have argued that the insurance premium structure may shift more costs onto younger and healthier people. Approximately $43 billion was spent in 2008 providing non-reimbursed emergency services for the uninsured, which the Act’s supporters argued increased the average family’s insurance premiums. Other studies claim the opposite, arguing that the case for reduced emergency-room visits is largely a canard and that insuring the uninsured will lead to, very approximately, a doubling of health expenditures for the currently uninsured. These studies suggest that making insurance mandatory rather than voluntary will tend to bring younger, healthier people into the insurance pool, shifting the cost of the Act’s increased spending onto them.
15.2.5: Public Health
The role of public health is to improve the quality of society by protecting people from disease.
Learning Objective
Categorize the institutions responsible for public health and good governance
Key Points
- The dramatic increase in the average life span during the 20th century is widely credited to public health achievements, such as vaccination programs and control of many infectious diseases.
- One of the major sources of the increase in average life span in the early 20th century was the decline in the urban penalty brought on by improvements in sanitation. These improvements included chlorination of drinking water, filtration, and sewage treatment.
- Public health plays an important role in disease prevention efforts in both the developing world and in developed countries, through local health systems and non-governmental organizations.
Key Term
- epidemiological
-
Of or pertaining to epidemiology, the branch of science dealing with the spread and control of diseases (and, by extension, phenomena such as computer viruses) throughout populations or systems.
Example
- In the United States, the front line of public health initiatives is state and local health departments. The United States Public Health Service (PHS), led by the Surgeon General of the United States, and the Centers for Disease Control and Prevention, headquartered in Atlanta, are involved with several international health activities in addition to their national duties.
Background
The dramatic increase in the average life span during the 20th century is widely credited to public health achievements, such as vaccination programs and control of many infectious diseases including polio, diphtheria, yellow fever, and smallpox; effective health and safety policies such as road traffic safety and occupational safety; improved family planning; tobacco control measures; and programs designed to decrease non-communicable diseases by acting on known risk factors such as a person’s background, lifestyle and environment.
One of the major sources of the increase in average life span in the early 20th century was the decline in the urban penalty brought on by improvements in sanitation. These improvements included chlorination of drinking water, filtration, and sewage treatment, which led to the decline in deaths caused by infectious waterborne diseases such as cholera and intestinal diseases. Cutler and Miller in “The Role of Public Health Improvements in Health Advances” demonstrate how typhoid fever deaths in Chicago, Baltimore, Cincinnati, and Cleveland declined after these American cities adopted chlorination, filtration, and sewage treatment.
Since the 1980s, the growing field of population health has broadened the focus of public health from individual behaviors and risk factors to population-level issues such as inequality, poverty, and education. Modern public health is often concerned with addressing determinants of health across a population. There is recognition that our health is affected by many factors, including where we live, genetics, income, education, and social relationships; these are known as the social determinants of health. A social gradient in health runs through society, with those who are poorest generally suffering the poorest health. However, even those in the middle classes will generally have poorer health than those of a higher social stratum. Newer public health policies seek to address these health inequalities by advocating for population-based policies that improve health in an equitable manner.
Additionally, with the onset of the epidemiological transition and as the prevalence of infectious diseases decreased through the 20th century, the focus of public health has recently turned to chronic diseases such as cancer and heart disease.
Public Health and the Government
Public health plays an important role in disease prevention efforts in both the developing world and in developed countries, through local health systems and non-governmental organizations. The World Health Organization (WHO) is the international agency that coordinates and acts on global public health issues. Most countries have their own government public health agencies, sometimes known as ministries of health, to respond to domestic health issues. For example, in the United States, the front line of public health initiatives is state and local health departments. The United States Public Health Service (PHS), led by the Surgeon General of the United States, and the Centers for Disease Control and Prevention, headquartered in Atlanta, are involved with several international health activities in addition to their national duties. In Canada, the Public Health Agency of Canada is the national agency responsible for public health, emergency preparedness and response, and infectious and chronic disease control and prevention. The public health system in India is managed by the Ministry of Health & Family Welfare of the government of India through state-owned health care facilities.
Public Health Nursing
Public health nursing made available through child welfare services.
15.3: Energy and Environmental Policy
15.3.1: Energy Policy
The energy policy of the United States is determined by federal, state, and local entities in the United States.
Learning Objective
Summarize the key provisions of a “cap-and-trade” approach to pollution reduction
Key Points
- Energy policy may include legislation, international treaties, subsidies and incentives to investment, guidelines for energy conservation, taxation, and other public policy techniques.
- The United States had resisted endorsing the Kyoto Protocol, preferring to let the market drive CO2 reductions to mitigate global warming, an approach that would require the taxation of CO2 emissions.
- The administration of Barack Obama has proposed an aggressive energy policy reform, including the need for a reduction of CO2 emissions through a cap-and-trade program, which could help encourage more clean, renewable, sustainable energy development.
- The United States receives approximately 84% of its energy from fossil fuels. This energy is used for transport, industry, and domestic use. The remaining portion comes primarily from hydroelectric and nuclear stations.
- Renewable energy accounted for about 8% of total energy consumption in the United States in 2009. In the same year, approximately 10% of the electricity produced nationally came from renewable sources.
- Cap-and-trade is a market-based approach used to control pollution by providing economic incentives for achieving reductions in the emissions of pollutants.
Key Terms
- Renewable
-
A resource that is able to replenish itself through biological or other natural processes over time.
- Kyoto Protocol
-
The Kyoto Protocol is a protocol to the United Nations Framework Convention on Climate Change (UNFCCC or FCCC) that set binding obligations on the industrialized countries to reduce their emissions of greenhouse gases.
Example
- The 2010 United States federal budget proposes to support clean energy development with a 10-year investment of US $15 billion per year, generated from the sale of greenhouse gas (GHG) emissions credits. Under the proposed cap-and-trade program, all GHG emissions credits would be auctioned off, generating an estimated $78.7 billion in additional revenue in FY 2012, steadily increasing to $83 billion by FY 2019.
Background
The energy policy of the United States is determined by federal, state, and local entities in the United States, which address issues of energy production, distribution, and consumption, such as building codes and gas mileage standards. Energy policy may include legislation, international treaties, subsidies and incentives to investment, guidelines for energy conservation, taxation, and other public policy techniques. Global warming is the rise in the average temperature of Earth’s atmosphere and oceans since the late 19th century and its projected continuation. Since the early 20th century, Earth’s mean surface temperature has increased by about 0.8 °C (1.4 °F), with about two-thirds of the increase occurring since 1980. Warming of the climate system is unequivocal, and scientists are more than 90% certain that it is primarily caused by increasing concentrations of greenhouse gases produced by human activities such as the burning of fossil fuels and deforestation. These findings are recognized by the national science academies of all major industrialized nations.
State-specific energy-efficiency incentive programs also play a significant role in the overall energy policy of the United States. The United States had resisted endorsing the Kyoto Protocol, preferring to let the market drive CO2 reductions to mitigate global warming, an approach that would require the taxation of CO2 emissions. The administration of Barack Obama has proposed an aggressive energy policy reform, including the need for a reduction of CO2 emissions through a cap-and-trade program, which could help encourage more clean, renewable, sustainable energy development.
Diagram of Greenhouse Effect
This diagram shows how the greenhouse effect works. Incoming solar radiation to the Earth equals 341 watts per square meter (Trenberth et al., 2009). Some of the solar radiation is reflected back from the Earth by clouds, the atmosphere, and the Earth’s surface (102 watts per square meter). Some of the solar radiation passes through the atmosphere. About half of the solar radiation is absorbed by the Earth’s surface (161 watts per square meter). Solar radiation is converted to heat energy, causing the emission of longwave (infrared) radiation back to the atmosphere (396 watts per square meter). Some of the infrared radiation is absorbed and re-emitted by heat-trapping “greenhouse” gases in the atmosphere. Outgoing infrared radiation from the Earth equals 239 watts per square meter.
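The flux figures in the caption can be checked against one another; at the top of the atmosphere the incoming flux balances the reflected and outgoing fluxes:

\[
\underbrace{341}_{\text{incoming solar}} \;=\; \underbrace{102}_{\text{reflected}} \;+\; \underbrace{239}_{\text{outgoing infrared}} \qquad (\mathrm{W/m^2}),
\]

so the Earth system absorbs \(341 - 102 = 239\ \mathrm{W/m^2}\), of which \(161\ \mathrm{W/m^2}\) is absorbed at the surface and the remainder in the atmosphere.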
Energy Independence
The 1973 oil crisis made energy a popular topic of discussion in the US. The federal Department of Energy was created, with steps planned toward energy conservation and more modern energy production. A National Maximum Speed Limit of 55 mph (88 km/h) was imposed to help reduce consumption, and Corporate Average Fuel Economy (CAFE) standards were enacted to downsize automobile categories. Year-round Daylight Saving Time was imposed, the United States Strategic Petroleum Reserve was created, and the National Energy Act of 1978 was introduced. These initiatives encouraged alternative forms of energy and a more diversified oil supply.
The United States receives approximately 84% of its energy from fossil fuels. This energy is used for transport, industry, and domestic use. The remaining portion comes primarily from hydroelectric and nuclear stations. Americans constitute less than 5% of the world’s population, but consume 26% of the world’s energy to produce 26% of the world’s industrial output. They account for about 25% of the world’s petroleum consumption, while producing only 6% of the world’s annual petroleum supply.
Coal
The Energy Information Administration estimated that in 2007 the world’s primary sources of energy consisted of petroleum (36.0%), coal (27.4%), and natural gas (23.0%), amounting to an 86.4% share for fossil fuels in global primary energy consumption.
Cap-and-Trade
Cap-and-trade is a market-based approach used to control pollution by providing economic incentives for achieving reductions in the emissions of pollutants. A central authority (usually a governmental body) sets a limit or cap on the amount of a pollutant that may be emitted. The limit or cap is allocated or sold to firms in the form of emissions permits which represent the right to emit or discharge a specific volume of the specified pollutant. Firms are required to hold a number of permits (or allowances or carbon credits) equivalent to their emissions. The total number of permits cannot exceed the cap, limiting total emissions to that level. Firms that need to increase their volume of emissions must buy permits from other firms.
The transfer of permits is referred to as a trade. In effect, the buyer is paying a charge for polluting, while the seller is being rewarded for having reduced emissions. Thus, in theory, those who can reduce emissions most cheaply will do so, achieving the pollution reduction at the lowest cost to society. The 2010 United States federal budget proposes to support clean energy development with a 10-year investment of US $15 billion per year, generated from the sale of greenhouse gas (GHG) emissions credits. Under the proposed cap-and-trade program, all GHG emissions credits would be auctioned off, generating an estimated $78.7 billion in additional revenue in FY 2012, steadily increasing to $83 billion by FY 2019.
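To make the cost-saving logic concrete, here is a deliberately tiny numerical sketch in Python. The two firms, their abatement costs, and the permit price are invented for illustration; they are not drawn from the federal budget proposal or from any actual program.

# Schematic illustration of the cap-and-trade logic described above.
# Firms, costs, and the permit price are hypothetical.

cap = 200            # total permits issued (tons of pollutant allowed)
firms = {
    "A": {"emissions": 120, "permits": 100, "abatement_cost": 20.0},  # low-cost abater
    "B": {"emissions": 120, "permits": 100, "abatement_cost": 60.0},  # high-cost abater
}
required_abatement = sum(f["emissions"] for f in firms.values()) - cap   # 40 tons in total

# Without trading, each firm must abate its own shortfall at its own cost.
no_trade_cost = sum((f["emissions"] - f["permits"]) * f["abatement_cost"]
                    for f in firms.values())

# With trading, the cheapest abater (A) cuts all 40 tons and sells its
# 20 surplus permits to B at a price between the two abatement costs.
permit_price = 40.0
trade_cost_total = required_abatement * firms["A"]["abatement_cost"]

print(f"Abatement needed: {required_abatement} tons")
print(f"Cost without trading: ${no_trade_cost:,.0f}")
print(f"Cost with trading:    ${trade_cost_total:,.0f} (permits trade at ${permit_price:,.0f})")

In this toy case the low-cost firm does all 40 tons of abatement for $800, versus $1,600 if each firm had to abate its own shortfall; the permit payments between the firms are a transfer and do not change the total cost to society, which is the point of the mechanism.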
Renewable Energy
Renewable energy accounted for about 8% of total energy consumption in the United States in 2009. In the same year, approximately 10% of the electricity produced nationally came from renewable sources. The United States’ hydroelectric plants make the largest contribution to the country’s renewable energy, producing 248,100 MW of the 371,700 MW (67%) generated through all renewable energy. However, wind power in the United States is a growing industry. Increases in wind, solar, and geothermal power are expected to allow renewable energy production to double in the three-year period from 2009 to 2012, an increase from 8% to 14% of total consumption. Most of the increase is expected to come from wind power.
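As a quick arithmetic check on the hydroelectric share quoted above:

\[
\frac{248{,}100\ \mathrm{MW}}{371{,}700\ \mathrm{MW}} \approx 0.667 \approx 67\%.
\]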
Green Mountain Wind Farm, Fluvanna 2004
The Brazos Wind Farm, also known as the Green Mountain Energy Wind Farm, near Fluvanna, Texas. Note cattle grazing beneath the turbines.
15.3.2: Environmental Policy
Environmental policy has become highly contentious and political, with competing interests involved in any legislation concerning the environment.
Learning Objective
Describe the key conflict in environmental policy
Key Points
- The environment covers so many different aspects of life, from health and recreation to business and commerce, that there are always competing interests involved in any legislation over the environment.
- Many individuals and organizations are involved as stakeholders in the process of making and implementing environmental policy.
- One of the enduring conflicts in environmental policy is between environmental and business interests. There is often a sense that the regulations or limitations made for environmental protection will limit economic growth.
- However, some groups are attempting to incorporate concern for the environment with business and innovation.
- U.S. environmental policy is always international policy as well. For example, when the U.S. pulled out of its obligations under the Kyoto Protocol, there was a great deal of international criticism.
Key Terms
- Kyoto Protocol
-
The Kyoto Protocol is a protocol to the United Nations Framework Convention on Climate Change (UNFCCC or FCCC) that set binding obligations on the industrialized countries to reduce their emissions of greenhouse gases.
- environmental policy
-
any course of action deliberately taken or not taken to manage human activities with a view to prevent, reduce, or mitigate harmful effects on nature and natural resources
Environmental Policy
Environmental policy in the U.S. has become highly contentious, competitive, and political. The environment covers so many different aspects of life, from health and recreation to business and commerce, that there are always competing interests involved in any legislation focused on the environment.
Additionally, many individuals and organizations are involved as stakeholders in the process of making and implementing environmental policy. These include various members of the executive and legislative branches of government, state and municipal governments, as well as civil servants, external interest groups, and international governments and residents. Within the federal government alone there are the Environmental Protection Agency, the Department of Transportation, and the U.S. Fish and Wildlife Service.
Environmental Protection Agency
The EPA is just one of the various bureaus involved in U.S. environmental policy.
Various legislation governs environmental concerns in the U.S., including the National Environmental Policy Act (NEPA), first enacted in 1970. This act mandates the preparation of Environmental Assessments and Environmental Impact Statements to try to limit the environmental damage of development.
One of the enduring conflicts in environmental policy is between environmental and business interests. There is often a sense that the regulations or limitations made for environmental protection will limit economic growth. However, some groups are attempting to reconcile concern for the environment with business and innovation. For example, the Bright Green environmental movement focuses on developing technological fixes for environmental problems. The Green Jobs movement focuses on combining needed new employment opportunities in low-income neighborhoods and neighborhoods of color with environmental improvements in those same neighborhoods.
Finally, because the U.S. has to share the Earth with all of the other countries, U.S. environmental policy is always international policy as well. For example, when the U.S. pulled out of its obligations under the Kyoto Protocol there was a great deal of international criticism.
15.3.3: Oil
Oil remains a major energy source in the U.S., and changing this reliance requires political initiative.
Learning Objective
Describe the key impediments to severing a dependence on fossil fuels
Key Points
- In the 1970s, politicians began considering the importance of the U.S. becoming less dependent on foreign-produced oil.
- However, fossil fuels and petroleum remain a major energy source in the U.S. This is in part because of the strength of the oil and energy lobby.
- Two concerns over the influence of oil companies on energy policy are ongoing environmental consequences and political impact.
- Today the idea of energy independence has emerged as an important political buzzword. It is yet to be seen if there is enough political will to make a significant shift from oil to other energy sources.
Key Term
- energy independence
-
Energy independence involves reducing U.S. imports of oil and other foreign sources of energy. If total energy is considered, the U.S. is over 70% self-sufficient.
Oil
In the 1970s, the U.S. faced an oil crisis, with severe shortages leaving long lines at gas stations across the country. The crisis was set off at least in part by oil embargoes levied by OAPEC (Organization of Arab Petroleum Exporting Countries) and by conflicts in Libya. The oil crisis contributed to recessions in the country. At that time, politicians in the U.S. began considering the importance of becoming less dependent on foreign-produced oil.
1973 Oil Crisis
During the 1970s there were oil shortages in the US. This sign outside of a gas station let patrons know the state of their supply.
However, fossil fuels and petroleum remain a major energy source in the U.S. This is in part because of the strength of the oil and energy lobby in the U.S. In the 2006 election cycle, oil companies donated $19 million to campaigns, with over 80% of that going to Republican candidates. There have also been concerns that oil lobbyists have had direct influence through close relationships with politicians. For example, oil executives were invited to consult on issues such as the U.S. position on the Kyoto Protocol and the involvement in Iraq.
Two concerns over the influence of oil companies on energy policy are the ongoing environmental consequences and the political impacts of a reliance on oil. The continuing influence of oil companies has been implicated in limiting the development of new energy resources and technologies. The film “Who Killed the Electric Car?”, for example, looks at the various factors that limited the production and success of the electric car. The movie examines the role of oil companies, and particularly the Western States Petroleum Association, in limiting the production of public car charging stations.
Today the idea of energy independence has emerged as an important political buzzword. It is yet to be seen if there is enough political will to make a significant shift from oil to other energy sources.
15.3.4: Climate Change
Global warming policy can be quite contentious because competing interests get involved in the policy-making and implementation process.
Learning Objective
Analyze the difficulties confronting cooperative international action on climate
Key Points
- Global warming, or climate change, is the idea that the actions of human beings are drastically changing weather patterns on the planet, including the temperature.
- Most scientists agree that the Earth has warmed significantly in recent years, and they are quite confident about the human influence on this change. However, there is disagreement about what to do about global warming.
- While 191 countries had ratified the Kyoto Protocol by September 2011, the U.S. was not one of them.
- Another idea for slowing down carbon emissions is a cap-and-trade system. As a market-based system, it would see limits, or caps, set on the amount of greenhouse gases that could be emitted.
- As with all environmental policy, global warming policy can be quite contentious because competing interests get involved in the policy-making and implementation process.
Key Terms
- anthropogenic
-
Having its origin in the influence of human activity on nature.
- global warming
-
The sustained increase in the average temperature of the earth, sufficient to cause climate change.
- cap-and-trade
-
Cap-and-trade is a market-based approach used to control pollution by providing economic incentives for achieving reductions in the emissions of pollutants.
- Kyoto Protocol
-
The Kyoto Protocol is a protocol to the United Nations Framework Convention on Climate Change (UNFCCC or FCCC) that set binding obligations on the industrialized countries to reduce their emissions of greenhouse gases.
Global Warming
Global warming, or climate change, is the idea that the actions of human beings are drastically changing weather patterns on the planet, including the temperature. Most scientists agree that the Earth has warmed significantly in recent years, and that the warming is particularly pronounced around the poles, where it is causing the polar ice caps to melt. Most scientists are also confident about the anthropogenic drivers of climate change. However, there is a great deal of division over the question of how important these changes are and what should be done about them.
Arctic Wildlife
Global warming is disproportionately affecting the polar regions, and changing the landscape for arctic wildlife like the polar bear.
One of the key problems is that the immediate effects of climate change are likely to be felt by countries with less political and economic clout. These include many countries in the global south and island nations that will be impacted if sea levels rise.
One aspect that has been identified as important in slowing down climate change is the reduction in greenhouse gases, also referred to as carbon emissions.
Because of the global impact of these emissions, international treaties have tried to address the issue. The Kyoto Protocol was one such treaty, adopted in Kyoto, Japan, in 1997 and entering into force in 2005. While 191 countries had ratified the agreement by September 2011, the U.S. was not one of them.
Another idea for slowing down carbon emissions is a cap-and-trade system. This is a market-based system that would see limits, or caps, set on the amount of greenhouse gasses that could be emitted. Companies would purchase permits for a certain level of emissions. A market for these permits would be created so companies that produced lower levels could trade their permits with companies that wished to pollute at a higher rate.
As with all environmental policy, global warming policies can be quite contentious because competing interests get involved in the policy-making and implementation process. One difficulty is that the process has become highly politicized, with Republican politicians often questioning the science behind climate change. Add to that the difficulty caused by highly influential business lobby groups, and it becomes apparent why it is so difficult to pass legislation to try to slow climate change.
15.3.5: New Sources of Energy
There are many concerns about the environmental and political impact of continued dependence on nonrenewable, foreign-produced fossil fuels.
Learning Objective
Describe the challenges facing those attempting to shift the United States away from non-renewable sources of energy
Key Points
- There are many concerns about the environmental and political impact of a continued dependence on non-renewable fossil fuels, particularly fossil fuels from sources outside of the U.S.
- Energy independence has become an important buzzword in U.S. politics. Investment in new and alternative sources of energy has also increased in recent years.
- The cooperation of various stakeholders is necessary to make new sources of fuel successful. These stakeholders include interest groups, politicians and consumers.
Key Term
- energy independence
-
Energy independence involves reducing U.S. imports of oil and other foreign sources of energy. If total energy is considered, the U.S. is over 70% self-sufficient.
New Sources of Energy
Environmental and political stability in the U.S. has been threatened in recent years by a continued dependence on non-renewable fossil fuels, particularly those from outside sources. “Energy independence” has thus become an important buzzword in U.S. politics, leading to greater investment in new and alternative sources of energy.
However, the widespread use of alternative fuel requires more than just scientific research. Investment in new technology is also affected by the influence of various interest groups, including representatives of traditional fuel suppliers who may have a vested interest in slowing the development of alternatives. Shifting to alternative fuels would also require strong legislative and regulations-based support in order to jump-start and sustain production. Finally, it would require a willingness on the part of consumers to make the shift away from the energy sources they have grown accustomed to.
While there are many alternative energy sources to choose from, none of them is perfect, and each has its own supporters and detractors.
Two of the best known and least scientifically controversial new energy sources are solar power and wind power. Both involve harnessing the power of naturally occurring phenomena, sunlight and wind, and converting them into electric power through solar panels or wind turbines.
Wind Farm
These wind turbines exemplify one type of a new and alternative energy source.
Critics of these sources often cite aesthetic concerns, for example that solar panels or turbine farms block sightlines or alter landscape and architecture. Particularly in the case of wind farms, there is often extensive community consultation prior to construction in order to address potential aesthetic and environmental impacts.
Other alternatives include biofuels such as ethanol, as well as clean coal technologies. In the U.S., ethanol is produced mainly from corn. Ethanol fuels burn more cleanly than traditional fuels; however, there is some concern that producing large amounts of corn exclusively for biofuels could be detrimental to farmland. The idea of clean coal is still largely experimental, but the Department of Energy is investing large sums in research and technology development in this area. The incentive is that clean coal would release less carbon, and that other energy sources, such as hydrogen, could also be captured for use in the process.
This is not the first time the idea of greater energy independence has become popular in U.S. policy and politics. For example, during the Oil Embargo of the 1970s, U.S. politicians began to discuss alternatives to fossil fuels. It remains to be seen whether there is sufficient political will this time around to make a significant shift towards alternative fuels.
15.4: Education Policy
15.4.1: Education Policy
Education policy refers to the collection of laws and rules that govern the operation of education systems.
Learning Objective
Discuss the institutions and issues relevant to current education policy in the United States and the sources of education policy evaluation and analysis
Key Points
- Examples of areas subject to debate in education policy include school size, class size, school choice, school privatization, teaching methods, curricular content, and graduation requirements.
- Unlike the systems of most other countries, education in the United States is highly decentralized, and the federal government and Department of Education are not heavily involved in determining curricula or educational standards (with the recent exception of the No Child Left Behind Act).
Key Terms
- Department of Education
-
The Department of Education is a Cabinet-level department of the United States government. The primary functions of the Department of Education are to “establish policy for, administer, and coordinate most federal assistance to education, collect data on U.S. schools, and enforce federal educational laws regarding privacy and civil rights.”
- Education policy
-
the principles and government policy-making in the educational sphere, as well as the collection of laws and rules that govern the operation of education systems
Education Policy
Education policy refers to the collection of laws and rules that govern the operation of education systems.
Education occurs in many forms for many purposes. Examples include early childhood education, kindergarten through to 12th grade, two and four year colleges or universities, graduate and professional education, adult education and job training. Therefore, education policy can directly affect the education of people at all ages.
Examples of areas subject to debate in education policy include school size, class size, school choice, school privatization, tracking, teacher education and certification, teacher pay, teaching methods, curricular content, graduation requirements, school infrastructure investment, and the values that schools are expected to uphold and model.
Education policy analysis is the scholarly study of education policy. It seeks to answer questions about the purpose of education, the objectives (societal and personal) that it is designed to attain, the methods for attaining them, and the tools for measuring their success or failure. Research intended to inform education policy is carried out in a wide variety of institutions and in many academic disciplines. Important researchers are affiliated with departments of psychology, economics, sociology, and human development, in addition to schools and departments of education or public policy.
The Department of Education
The federal department responsible for education oversight is the Department of Education, a Cabinet-level department of the United States government. The primary functions of the Department of Education are to “establish policy for, administer, and coordinate most federal assistance to education, collect data on U.S. schools, and enforce federal educational laws regarding privacy and civil rights.” However, the Department of Education does not establish schools or colleges.
U.S. Department of Education
The Lyndon B. Johnson Department of Education Building
Unlike the systems of most other countries, education in the United States is highly decentralized, and the federal government and Department of Education are not heavily involved in determining curricula or educational standards (with the recent exception of the No Child Left Behind Act). This has been left to state and local school districts. The quality of educational institutions and their degrees is maintained through an informal private process known as accreditation, over which the Department of Education has no direct public jurisdictional control.
The Department’s mission is to promote student achievement and preparation for global competitiveness by fostering educational excellence and ensuring equal access. Aligned with this mission of ensuring equal access to education, the Department of Education is a member of the United States Interagency Council on Homelessness and works with federal partners to ensure proper education for homeless and runaway youth in the United States.
15.4.2: Current Challenges for Education
Some challenges in education include curriculum unification, racial achievement gap, and controversy over sex education and affirmative action.
Learning Objective
Identify the most pressing issues in education curriculum and control
Key Points
- There is no unified curriculum in the United States. Not only do schools offer a range of topics and quality, but private schools may include mandatory religious classes.
- In 2003 a Supreme Court decision concerning affirmative action in universities allowed educational institutions to consider race as a factor in admitting students, but ruled that strict point systems are unconstitutional.
- Almost all students in the U.S. receive some form of sex education between grades 7 and 12; many schools begin addressing some topics as early as grades 4 or 5. However, what students learn varies widely, because curriculum decisions are so decentralized.
Key Term
- Racial Achievement Gap
-
The Racial Achievement Gap in the United States refers to the educational disparities between minority students and Caucasian students.
Contemporary Education Issues
Major educational issues in the United States center on curriculum and control. One of the major controversies of the United States education policy is the No Child Left Behind Act which will be covered in its own section.
Curriculum issues
There is no unified curriculum in the United States. Not only do schools offer a range of topics and quality, but private schools may include mandatory religious classes. These religious aspects raise the question of whether government-funded school vouchers may be used in states with Blaine Amendments in their constitutions. This has produced debate over the standardization of curricula. Additionally, there is debate over which subjects should receive the most focus, with astronomy and geography among those cited as not being taught enough in schools.
Attainment
Drop-out rates are a concern at American four-year colleges. In New York, 54 percent of students entering four-year colleges in 1997 had a degree six years later, and the rate was even lower among Hispanics and African-Americans. Since the 1980s the number of educated Americans has continued to grow, but at a slower rate. Some have attributed this to an increase in the foreign-born portion of the workforce. However, the slowing growth of the educated workforce has instead been primarily due to a slowdown in the educational attainment of people schooled in the United States.
Educational Attainment Since 1947
This graph shows the educational attainment from 1947 to 2003 in the United States.
Racial Achievement Gap
The Racial Achievement Gap in the United States refers to the educational disparities between minority students and Caucasian students. This disparity manifests itself in a variety of ways: African-American and Hispanic students are more likely to receive lower grades, score lower on standardized tests, and drop out of high school, and they are less likely to enter and complete college.
Evolution in Kansas
In 1999 the School Board of the state of Kansas caused controversy when it decided to eliminate the teaching of evolution from its state assessment tests. Scientists from around the country objected. Many religious and family values groups, on the other hand, claimed that evolution is simply a theory in the colloquial sense, and that creationist ideas should therefore be taught alongside it as an alternative viewpoint. A majority of the Kansas population supported teaching intelligent design and/or creationism in public schools.
Sex education
Almost all students in the U.S. receive some form of sex education between grades 7 and 12; many schools begin addressing some topics as early as grades 4 or 5. However, what students learn varies widely, because curriculum decisions are so decentralized. Many states have laws governing what is taught in sex education classes or allowing parents to opt out. Some state laws leave curriculum decisions to individual school districts.
According to a 2004 survey, over 80% of polled parents agreed that school sex education was helpful, while fewer than 17% stated that it was inappropriate. Ten percent believed that their children’s sex education classes forced them to discuss sexual issues “too early.”
Textbook review and adoption
In many localities in the United States, the curriculum taught in public schools is influenced by the textbooks used by the teachers. In some states, textbooks are selected for all students at the state level. Since states such as California and Texas represent a considerable market for textbook publishers, these states can exert influence over the content of the books.
In 2010, the Texas Board of Education adopted new Social Studies standards that could potentially impact the content of textbooks purchased in other parts of the country. The deliberations that resulted in the new standards were partisan in nature and are said to reflect a conservative leaning in the view of United States history.
Affirmative action
In 2003 a Supreme Court decision concerning affirmative action in universities allowed educational institutions to consider race as a factor in admitting students, but ruled that strict point systems are unconstitutional. Opponents of racial affirmative action argue that the program actually benefits middle- and upper-class people of color at the expense of the lower class. Prominent African American academics Henry Louis Gates and Lani Guinier, while favoring affirmative action, have argued that in practice, it has led to recent black immigrants and their children being greatly overrepresented at elite institutions, at the expense of the historic African American community made up of descendants of slaves.
Control
There is some debate about where control for education actually lies. Education is not mentioned in the constitution of the United States. In the current situation, the state and national governments have a power-sharing arrangement, with the states exercising most of the control. The federal government uses the threat of decreased funding to enforce laws pertaining to education. Furthermore, within each state there are different types of control. Some states have a statewide school system, while others delegate power to county, city or township-level school boards.
15.4.3: The No Child Left Behind Act
The No Child Left Behind Act supports standards based education reform to set high standards and establish goals to improve education.
Learning Objective
Evaluate the arguments for and against the No Child Left Behind Act
Key Points
- The Act requires states to develop assessments in basic skills. States must give these assessments to all students at select grade levels in order to receive federal school funding. The standards in the act are set by each individual state.
- Schools receiving Title I funding must make Adequate Yearly Progress (AYP) in test scores; each year, its fifth graders must do better on standardized tests than the previous year’s fifth graders.
- Critics argue the focus on standardized testing as the means of assessment encourages teachers to teach a narrow subset of skills the teacher believes will increase test performance, rather than focus on acquiring deep understanding of the curriculum. This is referred to as teaching to the test.
Key Term
- No Child Left Behind Act
-
The No Child Left Behind Act of 2001 (NCLB) is a United States Act of Congress that is a reauthorization of the Elementary and Secondary Education Act, which included Title I, the government’s flagship aid program for disadvantaged students. NCLB supports standards-based education reform based on the premise that setting high standards and establishing measurable goals can improve individual outcomes in education.
No Child Left Behind Act
The No Child Left Behind Act of 2001 (NCLB) is a United States Act of Congress that is a reauthorization of the Elementary and Secondary Education Act, which included Title I, the government’s flagship aid program for disadvantaged students. NCLB supports standards-based education reform based on the premise that setting high standards and establishing measurable goals can improve individual outcomes in education. The Act requires states to develop assessments in basic skills. States must give these assessments to all students at select grade levels in order to receive federal school funding. The standards in the act are set by each individual state. NCLB expanded the federal role in public education through annual testing, annual academic progress, report cards, teacher qualifications, and funding changes. The bill passed in the U.S. Congress with bipartisan support. President Bush signed it into law on January 8, 2002.
The Signing of the No Child Left Behind Act
President Bush signs the No Child Left Behind Act on January 8, 2002.
Provisions of the Act
Schools receiving Title I funding through the Elementary and Secondary Education Act of 1965 must make Adequate Yearly Progress (AYP) in test scores (for example, each year a school’s fifth graders must do better on standardized tests than the previous year’s fifth graders). If the school’s results are repeatedly poor, then steps are taken to improve the school.
Schools that miss AYP for a second consecutive year are labeled as being “in need of improvement” and are required to develop a two-year improvement plan for the subject in which they are falling short. Students are given the option to transfer to a better school within the school district, if any exists. Missing AYP in the third year forces the school to offer free tutoring and other supplemental education services to struggling students. If a school misses its AYP target for a fourth consecutive year, the school is labeled as requiring “corrective action,” which may involve wholesale replacement of staff, introduction of a new curriculum, or extending the amount of time students spend in class. A fifth year of failure results in planning to restructure the school; the plan is implemented if the school fails to hit its AYP targets for the sixth year in a row. Common options include closing the school, turning the school into a charter school, hiring a private company to run the school, or asking the state office of education to run the school directly.
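The escalating sanctions described in this paragraph form a fixed ladder keyed to how many consecutive years a school has missed AYP. The short Python sketch below restates that ladder as a lookup table; the wording is paraphrased from the paragraph above, not quoted from the statute.

# Restatement of the AYP consequence ladder described above, keyed to the
# number of consecutive years a Title I school has missed its targets.
# Descriptions are paraphrased from the text, not taken from the statute.

AYP_CONSEQUENCES = {
    2: "Labeled 'in need of improvement'; two-year improvement plan; "
       "students may transfer to a better school in the district",
    3: "Must offer free tutoring and other supplemental services to struggling students",
    4: "Labeled as requiring 'corrective action' (staff replacement, new curriculum, longer class time)",
    5: "Must plan to restructure the school",
    6: "Restructuring plan implemented (closure, charter conversion, private operator, or state takeover)",
}

def consequence(years_missed: int) -> str:
    """Return the consequence for a school missing AYP `years_missed` years in a row."""
    if years_missed < 2:
        return "No sanction yet; the school must still report its AYP results"
    return AYP_CONSEQUENCES[min(years_missed, 6)]

print(consequence(4))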
The act also requires schools to let military recruiters have students’ contact information and other access to the student, if the school provides that information to universities or employers, unless the students opt out of giving military recruiters access.
Increased accountability
Supporters of the NCLB claim one of the strong positive points of the bill is the increased accountability that is required of schools and teachers. The yearly standardized tests are the main means of determining whether schools are living up to the standards that they are required to meet. If the required improvements are not made, the schools face decreased funding and other punishments that contribute to the increased accountability. According to supporters, these goals help teachers and schools realize the significance and importance of the educational system and how it affects the nation. Opponents of this law say that the punishments hurt the schools and do not contribute to the improvement of student education.
Additionally, the Act provides information for parents by requiring states and school districts to give parents detailed report cards on schools and districts explaining the school’s AYP performance. Schools must also inform parents when their child is being taught by a teacher or para-professional who does not meet “highly qualified” requirements.
Criticisms of standardized testing under NCLB
Critics have argued that the focus on standardized testing as the means of assessment encourages teachers to teach a narrow subset of skills that the teacher believes will increase test performance, rather than focus on acquiring deep understanding of the full, broad curriculum. This is colloquially referred to as “teaching to the test.”
Under No Child Left Behind, schools were held almost exclusively accountable for absolute levels of student performance. This means even schools that were making great strides with students were still labeled as “failing” just because the students had not yet made it to a “proficient” level of achievement.
The incentives for improvement also may cause states to lower their official standards. A 2007 study by the U.S. Department of Education indicates that the observed differences in states’ reported scores are largely due to differences in the stringency of their standards.
“Gaming” the system
The system of incentives and penalties sets up a strong motivation for schools, districts and states to manipulate test results. For example, schools have been shown to employ “creative reclassification” of drop-outs to reduce unfavorable statistics. Critics argue that these and other strategies create an inflated perception of NCLB’s successes, particularly in states with high minority populations.
15.5: Immigration Policy
15.5.1: Immigration Policy
Immigration reform refers to changes in government policies that attempt to either promote or curb immigration.
Learning Objective
Identify key pieces of legislation that shaped immigration policy in the U.S.
Key Points
- Proponents of greater immigration enforcement argue that illegal immigrants cost taxpayers an estimated $338.3 billion and jeopardize the safety of law enforcement officials and citizens, especially along the Mexican border.
- The Immigration Reform and Control Act of 1986 made it illegal to hire or recruit illegal immigrants. In 2005 the House of Representatives passed the Border Protection, Anti-terrorism and Illegal Immigration Control Act, and in 2006 the Senate passed the Comprehensive Immigration Reform Act.
- In 2010, Governor of Arizona Jan Brewer signed the Support Our Law Enforcement and Safe Neighborhoods Act. The law directs law enforcement officials to ask for immigration papers on a reasonable suspicion that a person might be an illegal immigrant and make arrests for not carrying ID papers.
Key Terms
- amnesty
-
An act of the sovereign power granting oblivion, or a general pardon, for a past offense, as to subjects concerned in an insurrection.
- bipartisan
-
relating to, or supported by two groups, especially by two political parties
Background
In 1924 Congress passed the Immigration Act of 1924, which favored source countries that already had many immigrants in the U.S. and excluded immigrants from unpopular countries. Immigration patterns of the 1930s were dominated by the Great Depression, and in the early 1930s, more people emigrated from the United States than immigrated to it. Immigration continued to fall throughout the 1940s and 1950s, but it increased again afterwards.
“Immigration reform” is also widely used to describe proposals to increase legal immigration while decreasing illegal immigration, such as the guest worker proposal supported by George W. Bush. Proponents of greater immigration enforcement argue that illegal immigrants cost taxpayers an estimated $338.3 billion and jeopardize the safety of law enforcement officials and citizens, especially along the Mexican border.
The Immigration Reform and Control Act of 1986 made it illegal to hire or recruit illegal immigrants. In December 2005, the House of Representatives passed the Border Protection, Anti-terrorism and Illegal Immigration Control Act of 2005, and in 2006 the U.S. Senate passed the Comprehensive Immigration Reform Act of 2006. Neither bill became law because their differences could not be reconciled.
The Immigration and Nationality Act Amendments of 1965 (the Hart-Celler Act) abolished the national origins quota system that had been put in place by the 1924 Immigration Act. In 2006, the number of immigrants totaled a record 37.5 million. After 2000, immigration to the United States numbered approximately 1,000,000 per year. Despite tougher border security after 9/11, nearly 8 million immigrants came to the United States from 2000 to 2005, more than in any other five-year period in the nation’s history. Almost half entered illegally. In 2006, 1.27 million immigrants were granted legal residence.
Recent Immigration Reform Hot Topics
In 2009 immigration reform became a hot topic after the Obama administration signaled interest in beginning a discussion on comprehensive immigration reform before the year’s end. The proposed reform plan had bipartisan support as its goal and included six sections designed to appeal to both parties. These six sections are: (1) fixing border enforcement; (2) fixing interior enforcement, such as preventing visa overstays; (3) preventing people from working without a work permit; (4) creating a committee to adapt the number of visas available to changing economic times; (5) an amnesty-type program to legalize undocumented immigrants; and (6) programs to help immigrants adjust to life in the United States.
Citing Congress’ failure to enforce U.S. immigration laws, the state of Arizona confronted reform, and on April 23, 2010, Republican Governor Jan Brewer signed the Support Our Law Enforcement and Safe Neighborhoods Act, the broadest and strictest immigration law imposed in the United States. The Arizona immigration law directs law enforcement officials to ask for immigration papers on a reasonable suspicion that a person might be an illegal immigrant and to make arrests for not carrying ID papers. On July 6, 2010, the U.S. Department of Justice filed suit against Arizona with the intent of preventing Arizona from enforcing the law. In June 2012, the U.S. Supreme Court ruled on the case Arizona v. United States, upholding the provision requiring immigration status checks during law enforcement stops but striking down three other provisions as violations of the Supremacy Clause of the United States Constitution.
Jan Brewer
Jan Brewer, Governor of Arizona, who signed the controversial Support Our Law Enforcement and Safe Neighborhoods Act, which makes it a state misdemeanor crime for an immigrant to be in Arizona without carrying the registration documents required by federal law.
As the first state to pass such legislation, Arizona has set a precedent for other states. Although the response has cost the state between $7 million and $52 million, some in the state still feel that the benefits will outweigh the initial cost. Due to conflict and protest, Governor Brewer signed Arizona House Bill 2162 (HB 2162), amending the text of the original document. HB 2162 specifies that race, color, and national origin may not play a role in prosecution, and that in order for an individual’s immigration status to be investigated, he or she must be “lawfully stopped, arrested, or detained.”
In the absence of comprehensive immigration reform at the federal level, many advocacy groups have focused on improving the fairness and efficiency of the immigration court system. They propose incremental steps the executive branch can take to stop an assembly line approach to deportation proceedings. These groups have identified several issues that threaten the due process rights of immigrants, including reliance on low quality videoconferencing to conduct hearings, inadequate language interpretation services for non-English speakers, and limited access to court records.
15.5.2: Illegal Immigration
Unauthorized immigration occurs when a non-citizen enters the country without government permission and in violation of the law.
Learning Objective
Describe the nature and scope of illegal immigration in the United States
Key Points
- Between 7 million and 20 million illegal immigrants are estimated to be living in the United States. The majority of illegal immigrants are from Mexico.
- About 8 percent of children born in the United States in 2008 — about 340,000 — were offspring of unauthorized immigrants. These infants are, according to the Fourteenth Amendment to the Constitution, American citizens from birth.
- According to USA Today in 2006, about 4% work in farming; 21% have jobs in service industries. Substantial numbers can be found in construction and related occupations (19%), production, installation, and repair (15%), sales (12%), management (10%) and transportation (8%).
- A common means of border crossing is to hire professionals who smuggle illegal immigrants across the border for pay. Those operating on the US-Mexico border are known informally as coyotes.
Key Term
- coyote
-
A smuggler of illegal immigrants across the land border from Mexico into the United States of America.
An illegal immigrant in the United States is a non-citizen who has either entered the country without government permission and in violation of United States nationality law, or stayed beyond the termination date of a visa. Unauthorized immigration raises many political, economic, and social issues and has become a source of major controversy. Illegal immigrants continue to outpace the number of legal immigrants, a trend that has held steady since the 1990s. While the majority of illegal immigrants continue to concentrate in places with existing large Hispanic communities, illegal immigrants are increasingly settling throughout the rest of the country.
Between 7 million and 20 million illegal immigrants are estimated to be living in the United States. The majority of these illegal immigrants are from Mexico. An estimated 14 million people live in families in which the head of household or the spouse is an illegal immigrant. A quarter of all immigrants who have arrived in recent years have at least some college education. Nonetheless, illegal immigrants as a group tend to be less educated than other sections of the U.S. population: 49% haven’t completed high school, compared with 9% of native-born Americans and 25% of legal immigrants.
North America
Map of Mexico and United States border
The Pew Hispanic Center determined from an analysis of Census Bureau data that about 8 percent of children born in the United States in 2008 (about 340,000) were offspring of unauthorized immigrants. In total, 4 million U.S.-born children of unauthorized immigrant parents resided in this country in 2009. These infants are, according to the Fourteenth Amendment to the Constitution, American citizens from birth. These children are sometimes pejoratively referred to as “anchor babies” by those aggressively opposed to this method of attaining citizenship outside of the legal immigration process. The majority of children born to unauthorized immigrant parents fail to graduate from high school, averaging two fewer years of school than their peers. Once the parents gain citizenship, however, the children do much better in school.
Illegal immigrants work in many sectors of the U.S. economy. According to USA Today in 2006, about 4% work in farming; 21% have jobs in service industries; and substantial numbers can be found in construction and related occupations (19%), and in production, installation and repair (15%), with 12% in sales, 10% in management and 8% in transportation. Illegal immigrants have lower incomes than both legal immigrants and native-born Americans, but earnings do increase somewhat the longer an individual is in the country.
There are an estimated half million illegal entries into the United States each year. A common means of border crossing is to hire professionals who smuggle illegal immigrants across the border for pay. Those operating on the US-Mexico border are known informally as coyotes. According to Pew, between 4 and 5.5 million unauthorized migrants entered the United States with a legal visa, accounting for between 33% and 50% of the total unauthorized population. A tourist or traveler is considered a “visa overstay” once he or she remains in the United States after the time of admission has expired. Visa overstays tend to be somewhat more educated and better off financially than those who entered the country illegally. A smaller number of unauthorized migrants entered the United States legally using the Border Crossing Card, which authorizes border crossings into the U.S. for a set amount of time. Border Crossing Card entry accounts for the vast majority of all registered non-immigrant entry into the United States – 148 million out of 179 million total – but there is little hard data as to how much of the illegal immigrant population entered in this way.
15.5.3: Immigration Reform
Immigration reform refers to changes to current policy, including promoted or open immigration as well as reduced or eliminated immigration.
Learning Objective
Summarize recent legislative trends in immigration reform on the state and national level
Key Points
- Proponents of greater immigration enforcement argue that illegal immigrants cost taxpayers an estimated $338.3 billion and jeopardize the safety of law enforcement officials and citizens, especially at the Mexican border.
- The Immigration Reform and Control Act of 1986 made it illegal to hire or recruit illegal immigrants.
- In 2006, the House of Representatives passed the Border Protection, Anti-terrorism and Illegal Immigration Control Act of 2005 and the Senate passed the Comprehensive Immigration Reform Act of 2006. Neither bill became a law.
- President Obama proposed fixing border enforcement, increasing interior enforcement, preventing people from working without a work permit, creating a committee to adapt the number of visas available to changing times, an amnesty program to legalize undocumented immigrants, and programs to help immigrants adjust.
Key Terms
- deportation
-
The act of deporting or exiling, or the state of being deported; banishment; transportation.
- amnesty
-
An act of the sovereign power granting oblivion, or a general pardon, for a past offense, as to subjects concerned in an insurrection.
- bipartisan
-
relating to, or supported by two groups, especially by two political parties
Use of the Term
Immigration reform is a term used in political discussion regarding changes to current immigration policy. In its strict definition, reform means to change towards an improved form or condition by amending or removing faults or abuses. In the political sense, immigration reform may include promoted, expanded or open immigration. It may also include reduced or eliminated immigration. Immigration reform is also widely used to describe proposals to increase legal immigration while decreasing illegal immigration, such as the guest worker proposal supported by George W. Bush.
Legislative Trends
Illegal immigration is a controversial issue in the United States. Proponents of greater immigration enforcement argue that illegal immigrants cost taxpayers an estimated $338.3 billion. They also argue that these immigrants jeopardize the safety of law enforcement officials and citizens, especially along the Mexican border.
The Immigration Reform and Control Act of 1986 made it illegal to hire or recruit illegal immigrants. The U.S. House of Representatives passed the Border Protection, Anti-terrorism and Illegal Immigration Control Act of 2005, and, in 2006, the U.S. Senate passed the Comprehensive Immigration Reform Act of 2006. Neither bill became law because their differences could not be reconciled in conference committee.
In 2009, immigration reform became a hot topic as the Obama administration signaled interest in beginning a discussion on comprehensive immigration reform. The proposed plan aimed to win bipartisan support and included six sections designed to have something for everyone: (1) fixing border enforcement; (2) increasing interior enforcement, such as preventing visa overstays; (3) preventing people from working without a work permit; (4) creating a committee to adapt the number of visas available to changing economic times; (5) a type of amnesty program to legalize undocumented immigrants; and (6) programs to help immigrants adjust to life in the United States.
The Case of Arizona
Citing Congress’ failure to enforce U.S. immigration laws, the state of Arizona confronted reform on its own. In 2009, services provided to illegal immigrants, including incarceration, cost the state of Arizona an estimated $2.7 billion. On April 23, 2010, Republican Governor Jan Brewer signed the Support Our Law Enforcement and Safe Neighborhoods Act, the broadest and strictest immigration reform imposed in the United States. The Arizona immigration law directs law enforcement officials to ask for immigration papers upon reasonable suspicion that a person might be an illegal immigrant; officials can then arrest those not carrying ID papers. On July 6, 2010, the U.S. Department of Justice filed suit against Arizona, seeking to prevent the state from enforcing the law and asking the court to find certain sections of the legislation null and void. Congress has left the issue untouched, as many members feared such a vote could threaten their chances at reelection.
Jan Brewer
Jan Brewer, Governor of Arizona, who signed the controversial Support Our Law Enforcement and Safe Neighborhoods Act, which makes it a state misdemeanor for an immigrant to be in Arizona without carrying registration documents required by federal law.
Being the first state to pass such legislation, Arizona has set a precedent for other states. Nevertheless, the legislation has also imposed a heavy burden on Arizona. Although the response has cost the state between $7 million and $52 million, some in the state still feel that the benefits will outweigh the initial cost. Due to conflict and protest, Governor Brewer signed House Bill 2162 (HB 2162) a week later, amending text in the original document. HB 2162 specifies that race, color, and national origin may not play a role in prosecution, and that in order to investigate an individual’s immigration status, he or she must be “lawfully stopped, arrested or detained.”
Reform and Advocacy
In the absence of comprehensive immigration reform at the federal level, many advocacy groups have focused on improving the fairness and efficiency of the immigration court system. They propose incremental steps the executive branch can take to stop an assembly line approach to deportation proceedings. These groups have identified several issues that threaten the due process rights of immigrants, including reliance on low quality videoconferencing to conduct hearings, inadequate language interpretation services for non-English speakers, and limited access to court records. They also focus on problems arising out of the recent increase in immigration law enforcement without a commensurate boost in resources for adjudication. Other calls for reform include increased transparency at the Board of Immigration Appeals (BIA) and more diversity of experience among immigration judges, the majority of whom previously held positions adversarial to immigrants.
Chapter 14: The Judiciary
14.1: The American Legal System
14.1.1: Cases and the Law
In the US judicial system, cases are decided based on principles established in previous cases, a practice called common law.
Learning Objective
Explain the relationship between legal precedent and common law
Key Points
- Common law is created when a court decides on a case and sets precedent.
- The principle of common law involves precedent, which is a practice that uses previous court cases as a basis for making judgments in current cases.
- Justice Brandeis famously articulated stare decisis as the method of making case law into good law. The principle of stare decisis refers to the practice of letting past decisions stand, and abiding by those decisions in current matters.
Key Terms
- stare decisis
-
The principle of following judicial precedent.
- common law
-
A legal system that gives great precedential weight to prior judicial decisions, on the principle that it is unfair to treat similar facts differently on different occasions.
- precedent
-
a decided case which is cited or used as an example to justify a judgment in a subsequent case
Establishing Common Law
When a court decides a case and its decision stands as law, that decision typically is referred to as “good law,” and subsequent decisions must abide by it. This practice is called “common law,” and it is based on the principle that it is unfair to treat similar facts differently on different occasions. Essentially, the body of common law is based on the principles of case precedent and stare decisis.
Case Precedent
In the United States legal system, a precedent or authority is a principle or rule established in a previous legal case that is either binding on or persuasive for a court or other tribunal when deciding subsequent cases with similar issues or facts. The general principle in common law legal systems is that similar cases should be decided so as to give similar and predictable outcomes, and the principle of precedent is the mechanism by which this goal is attained. Black’s Law Dictionary defines “precedent” as a “rule of law established for the first time by a court for a particular type of case and thereafter referred to in deciding similar cases.”
Stare Decisis
Stare decisis is a legal principle by which judges are obliged to respect the precedent established by prior decisions. The words originated from the phrasing of the principle in the Latin maxim Stare decisis et non quieta movere: “to stand by decisions and not disturb the undisturbed.” In a legal context, this is understood to mean that courts should generally abide by precedent and not disturb settled matters.
In other words, stare decisis applies to the holding of a case (the legal rule necessary to the decision), not to dicta. As the United States Supreme Court has put it: “dicta may be followed if sufficiently persuasive but are not binding.”
In the United States Supreme Court, the principle of stare decisis is most flexible in constitutional cases:
Stare decisis is usually the wise policy, because in most matters it is more important that the applicable rule of law be settled than that it be settled right. … But in cases involving the Federal Constitution, where correction through legislative action is practically impossible, this Court has often overruled its earlier decisions. … This is strikingly true of cases under the due process clause.—Burnet v. Coronado Oil & Gas Co., 285 U.S. 393, 406–407, 410 (1932)
Louis Brandeis
Brandeis helped develop modern thinking on case law and the importance of stare decisis. His dissent in New State Ice Co. v. Liebmann set the stage for new federalism.
14.1.2: Types of Courts
The federal court system has three levels: district courts, courts of appeals, and the Supreme Court.
Learning Objective
Compare and contrast the different types of courts that exist in the U.S. federal court system
Key Points
- District courts and administrative courts were created to hear lower level cases.
- The courts of appeals are required to hear all federal appeals.
- The Supreme Court is not required to hear appeals and is considered the final court of appeals.
- The Supreme Court may exercise original jurisdiction in cases affecting ambassadors and other diplomats, and in cases in which a state is a party.
Key Term
- appeal
-
(a) An application for the removal of a cause or suit from an inferior to a superior judge or court for re-examination or review. (b) The mode of proceeding by which such removal is effected. (c) The right of appeal. (d) An accusation; a process which formerly might be instituted by one private person against another for some heinous crime demanding punishment for the particular injury suffered, rather than for the offense against the public. (e) An accusation of a felon at common law by one of his accomplices, which accomplice was then called an approver.
The federal court system is divided into three levels: the first and lowest level is the United States district courts; the second, intermediate level is the courts of appeals; and the Supreme Court is considered the highest court in the United States.
The United States district courts are the general federal trial courts, although in many cases Congress has passed statutes diverting original jurisdiction to specialized courts or to administrative law judges (ALJs). In such cases, the district courts have jurisdiction to hear appeals from these lower bodies.
The United States courts of appeals are the federal intermediate appellate courts. They operate under a system of mandatory review, which means they must hear all appeals of right from the lower courts. They can issue a ruling of their own on the case or accept the decision of the lower court; in the latter case, the losing party may then appeal to the Supreme Court.
The highest court is the Supreme Court of the United States, which is considered the court of last resort. It generally is an appellate court that operates under discretionary review. This means that the Court, through granting of writs of certiorari, can choose which cases to hear. There is generally no right of appeal to the Supreme Court. In a few situations, like lawsuits between state governments or some cases between the federal government and a state, the Supreme Court becomes the court of original jurisdiction. In addition, the Constitution specifies that the Supreme Court may exercise original jurisdiction in cases affecting ambassadors and other diplomats, in cases in which a state is a party, and in cases between a state and another country. In all other cases, however, the Court has only appellate jurisdiction. It considers cases based on its original jurisdiction very rarely; almost all cases are brought to the Supreme Court on appeal. In practice, the only original jurisdiction cases heard by the Court are disputes between two or more states. Such cases are generally referred to a designated individual, usually a sitting or retired judge, or a well-respected attorney, to sit as a special master and report to the Court with recommendations.
Supreme Court of the United States
The modern Supreme Court.
14.1.3: Federal Jurisdiction
The federal court system has limited, though important, jurisdiction.
Learning Objective
Discuss the different levels of jurisdiction by state and federal courts in the American legal system
Key Points
- In the justice system, state courts hear state law, and federal courts hear federal law and sometimes appeals.
- Federal courts only have the power granted to them by federal law and the Constitution.
- Courts render their decisions through opinions; the majority will write an opinion and the minority will write a dissent.
Key Terms
- federal system
-
a system of government based upon democratic rule in which sovereignty and the power to rule is constitutionally divided between a central governing authority and constituent political units (such as states or provinces)
- jurisdiction
-
the power, right, or authority to interpret and apply the law
Federal Jurisdiction
The American legal system includes both state courts and federal courts. Generally, state courts hear cases involving state law, although they may also hear cases involving federal law so long as the federal law in question does not grant exclusive jurisdiction to federal courts. Federal courts may only hear cases where federal jurisdiction can be established. Specifically, the court must have both subject-matter jurisdiction over the matter of the claim and personal jurisdiction over the parties. The federal courts are courts of limited jurisdiction, meaning that they can only exercise the powers granted to them by the Constitution and federal laws. For subject-matter jurisdiction, the claims in the case must either raise a federal question (such as a cause of action or defense arising under the Constitution, a federal statute, or the law of admiralty) or satisfy the requirements of diversity jurisdiction: all of the defendants are from different states than the plaintiff, and the amount in controversy exceeds a monetary threshold (which changes from time to time, but is $75,000 as of 2011).
Federal district courts
The federal district courts represent one of the ways federal jurisdiction is split.
If a Federal Court has subject matter jurisdiction over one or more of the claims in a case, it has discretion to exercise ancillary jurisdiction over other state law claims.
The Supreme Court has “cautioned that … Court[s] must take great care to ‘resist the temptation’ to express preferences about [certain types of cases] in the form of jurisdictional rules. Judges must strain to remove the influence of the merits from their jurisdictional rules. The law of jurisdiction must remain apart from the world upon which it operates.”
Generally, when a case has successfully overcome the hurdles of standing, Case or Controversy, and State Action, it will be heard by a trial court. The non-governmental party may raise claims or defenses relating to alleged constitutional violation(s) by the government. If the non-governmental party loses, the constitutional issue may form part of the appeal. Eventually, a petition for certiorari may be sent to the Supreme Court. If the Supreme Court grants certiorari and accepts the case, it will receive written briefs from each side (and from any amici curiae, or friends of the court, usually interested third parties with some expertise to bear on the subject) and schedule oral arguments. The Justices will closely question both parties. When the Court renders its decision, it will generally do so in a single majority opinion and one or more dissenting opinions. Each opinion sets forth the facts, prior decisions, and legal reasoning behind the position taken. The majority opinion constitutes binding precedent on all lower courts; when faced with very similar facts, they are bound to apply the same reasoning or face reversal of their decision by a higher court.
14.2: Origins of American Law
14.2.1: Common Law
Law of the United States was mainly derived from the common law system of English law.
Learning Objective
Identify the principles and institutions that comprise the common law tradition
Key Points
- The United States and most Commonwealth countries are heirs to the common law legal tradition of English law. Certain practices traditionally allowed under English common law were specifically outlawed by the Constitution, such as bills of attainder and general search warrants.
- All U.S. states except Louisiana have enacted “reception statutes” which generally state that the common law of England (particularly judge-made law) is the law of the state to the extent that it is not repugnant to domestic law or indigenous conditions.
- Unlike the states, there is no plenary reception statute at the federal level that continued the common law and thereby granted federal courts the power to formulate legal precedent like their English predecessors.
- The passage of time has led to state courts and legislatures expanding, overruling, or modifying the common law. As a result, the laws of any given state invariably differ from the laws of its sister states.
Key Terms
- stare decisis
-
The principle of following judicial precedent.
- heir
-
Someone who inherits, or is designated to inherit, the property of another.
Background
At both the federal and state levels, the law of the United States was mainly derived from the common law system of English law, which was in force at the time of the Revolutionary War. However, U.S. law has diverged greatly from its English ancestor both in terms of substance and procedure. It has incorporated a number of civil law innovations.
Royal Courts of Justice
The neo-medieval pile of the Royal Courts of Justice, designed by G. E. Street, on the Strand, London.
American Common Law
The United States and most Commonwealth countries are heirs to the common law legal tradition of English law. Certain practices traditionally allowed under English common law were specifically outlawed by the Constitution, such as bills of attainder and general search warrants.
As common law courts, U.S. courts have inherited the principle of stare decisis. American judges, like common law judges elsewhere, not only apply the law, they also make the law; their decisions in the cases before them become the precedent for decisions in future cases.
The actual substance of English law was formally received into the United States in several ways. First, all U.S. states except Louisiana have enacted “reception statutes” which generally state that the common law of England (particularly judge-made law) is the law of the state to the extent that it is not repugnant to domestic law or indigenous conditions. Some reception statutes impose a specific cutoff date for reception, such as the date of a colony’s founding, while others are deliberately vague. Therefore, contemporary U.S. courts often cite pre-Revolution cases when discussing the evolution of an ancient judge-made common law principle into its modern form. An example is the heightened duty of care that was traditionally imposed upon common carriers.
Federal courts lack the plenary power possessed by state courts to simply make up law. State courts are able to do this in the absence of constitutional or statutory provisions replacing the common law. Only in a few limited areas, like maritime law, has the Constitution expressly authorized the continuation of English common law at the federal level (meaning that in those areas federal courts can continue to make law as they see fit, subject to the limitations of stare decisis).
Federal Precedent
Unlike the states, there is no plenary reception statute at the federal level that continued the common law and thereby granted federal courts the power to formulate legal precedent like their English predecessors. Federal courts are solely creatures of the federal Constitution and the federal Judiciary Acts. However, it is universally accepted that the Founding Fathers of the United States, by vesting judicial power into the Supreme Court and the inferior federal courts in Article Three of the United States Constitution, vested in them the implied judicial power of common law courts to formulate persuasive precedent. This power was widely accepted, understood, and recognized by the Founding Fathers at the time the Constitution was ratified. Several legal scholars have argued that the federal judicial power to decide “cases or controversies” necessarily includes the power to decide the precedential effect of those cases and controversies.
State Law
The passage of time has led to state courts and legislatures expanding, overruling, or modifying the common law. As a result, the laws of any given state invariably differ from the laws of its sister states. Therefore, with regard to the vast majority of areas of the law that are traditionally managed by the states, the United States cannot be classified as having one legal system. Instead, it must be regarded as 50 separate systems of tort law, family law, property law, contract law, criminal law, and so on. Naturally, the laws of different states frequently come into conflict with each other. In response, a very large body of law was developed to regulate the conflict of laws in the United States.
All states have a legislative branch which enacts state statutes, an executive branch that promulgates state regulations pursuant to statutory authorization, and a judicial branch that applies, interprets, and occasionally overturns state statutes, regulations, and local ordinances.
In some states, codification is often treated as a mere restatement of the common law. This occurs to the extent that the subject matter of the particular statute at issue was covered by some judge-made principle at common law. Judges are free to liberally interpret the codes unless and until their interpretations are specifically overridden by the legislature. In other states, there is a tradition of strict adherence to the plain text of the codes.
14.2.2: Primary Sources of American Law
The primary sources of American Law are: constitutional law, statutory law, treaties, administrative regulations, and the common law.
Learning Objective
Identify the sources of American federal and state law
Key Points
- Where Congress enacts a statute that conflicts with the Constitution, the Supreme Court may find that law unconstitutional and declare it invalid. A statute does not disappear automatically merely because it has been found unconstitutional; it must be repealed by a subsequent statute.
- The United States and most Commonwealth countries are heirs to the common law legal tradition of English law. Certain practices traditionally allowed under English common law were expressly outlawed by the Constitution, such as bills of attainder and general search warrants.
- Early American courts, even after the Revolution, often did cite contemporary English cases. This was because appellate decisions from many American courts were not regularly reported until the mid-19th century; lawyers and judges used English legal materials to fill the gap.
- Foreign law has never been cited as binding precedent, but merely as a reflection of the shared values of Anglo-American civilization or even Western civilization in general.
- Most U.S. law consists primarily of state law, which can and does vary greatly from one state to the next.
Key Term
- commonwealth
-
A form of government, named for the concept that everything that is not owned by specific individuals or groups is owned collectively by everyone in the governmental unit, as opposed to a state, where the state itself owns such things.
Background
In the United States, the law is derived from various sources. These sources are constitutional law, statutory law, treaties, administrative regulations, and the common law. At both the federal and state levels, the law of the United States was originally largely derived from the common law system of English law, which was in force at the time of the Revolutionary War. However, U.S. law has diverged greatly from its English ancestor both in terms of substance and procedure, and has incorporated a number of civil law innovations. Thus, most U.S. law consists primarily of state law, which can and does vary greatly from one state to the next.
Constitutionality
Where Congress enacts a statute that conflicts with the Constitution, the Supreme Court may find that law unconstitutional and declare it invalid. A statute does not disappear automatically merely because it has been found unconstitutional; it must be repealed by a subsequent statute. Many federal and state statutes have remained on the books for decades after they were ruled to be unconstitutional. However, under the principle of stare decisis, no sensible lower court will enforce an unconstitutional statute, and the Supreme Court will reverse any court that does so. Conversely, any court that refuses to enforce a constitutional statute (where such constitutionality has been expressly established in prior cases) will risk reversal by the Supreme Court.
American Common Law
As common law courts, U.S. courts have inherited the principle of stare decisis. American judges, like common law judges elsewhere, not only apply the law, they also make the law, to the extent that their decisions in the cases before them become precedent for decisions in future cases.
The actual substance of English law was formally “received” into the United States in several ways. First, all U.S. states except Louisiana have enacted “reception statutes” which generally state that the common law of England (particularly judge-made law) is the law of the state to the extent that it is not repugnant to domestic law or indigenous conditions. Some reception statutes impose a specific cutoff date for reception, such as the date of a colony’s founding, while others are deliberately vague. Thus, contemporary U.S. courts often cite pre-Revolution cases when discussing the evolution of an ancient judge-made common law principle into its modern form, such as the heightened duty of care traditionally imposed upon common carriers.
Second, a small number of important British statutes in effect at the time of the Revolution have been independently reenacted by U.S. states. Two examples that many lawyers will recognize are the Statute of Frauds (still widely known in the U.S. by that name) and the Statute of 13 Elizabeth (the ancestor of the Uniform Fraudulent Transfers Act). Such English statutes are still regularly cited in contemporary American cases interpreting their modern American descendants.
However, it is important to understand that despite the presence of reception statutes, much of contemporary American common law has diverged significantly from English common law. The reason is that although the courts of the various Commonwealth nations are often influenced by each other’s rulings, American courts rarely follow post-Revolution Commonwealth rulings unless there is no American ruling on point, the facts and law at issue are nearly identical, and the reasoning is strongly persuasive.
Early on, American courts, even after the Revolution, often did cite contemporary English cases. This was because appellate decisions from many American courts were not regularly reported until the mid-19th century; lawyers and judges, as creatures of habit, used English legal materials to fill the gap. But citations to English decisions gradually disappeared during the 19th century as American courts developed their own principles to resolve the legal problems of the American people. The number of published volumes of American reports soared from eighteen in 1810 to over 8,000 by 1910. By 1879, one of the delegates to the California constitutional convention was already complaining: “Now, when we require them to state the reasons for a decision, we do not mean they shall write a hundred pages of detail. We [do] not mean that they shall include the small cases, and impose on the country all this fine judicial literature, for the Lord knows we have got enough of that already.”
Today, in the words of Stanford law professor Lawrence Friedman, American cases rarely cite foreign materials. Courts occasionally cite a British classic or two, a famous old case, or a nod to Blackstone; but current British law almost never gets any mention. Foreign law has never been cited as binding precedent, but merely as a reflection of the shared values of Anglo-American civilization or even Western civilization in general.
14.2.3: Civil Law and Criminal Law
Criminal law is the body of law that relates to crime and civil law deals with disputes between organizations and individuals.
Learning Objective
Compare and contrast civil law with common law
Key Points
- The objectives of civil law are different from other types of law. In civil law there is the attempt to right a wrong, honor an agreement, or settle a dispute.
- Criminal law is the body of law that relates to crime. It is the body of rules that defines conduct that is not allowed because it is held to threaten, harm or endanger the safety and welfare of people.
- In civil law there is the attempt to right a wrong, honor an agreement, or settle a dispute. If there is a victim, they get compensation, and the person who is the cause of the wrong pays, this being a civilized form of, or legal alternative to, revenge.
- For public welfare offenses where the state is punishing merely risky (as opposed to injurious) behavior, there is significant diversity across the various states.
Key Terms
- equity
-
A legal tradition that deals with remedies other than monetary relief, such as injunctions, divorces and similar actions.
- criminal law
-
the area of law that regulates social conduct, prohibits threatening, harming, or otherwise endangering the health, safety, and moral welfare of people, and punishes people who violate these laws
- incarceration
-
The act of confining, or the state of being confined; imprisonment.
Background
Criminal law is the body of law that relates to crime. It is the body of rules that defines conduct that is not allowed because it is held to threaten, harm or endanger the safety and welfare of people. Criminal law also sets out the punishment to be imposed on people who do not obey these laws. Criminal law differs from civil law, whose emphasis is more on dispute resolution than in punishment.
Civil law is the branch of law dealing with disputes between individuals or organizations, in which compensation may be awarded to the victim. For instance, if a car crash victim claims damages against the driver for loss or injury sustained in an accident, this will be a civil law case. Civil law differs from criminal law, which emphasizes punishment rather than dispute resolution. The law relating to civil wrongs and quasi-contract is part of civil law.
Civil Law Versus Criminal Law
The objectives of civil law are different from other types of law. In civil law there is the attempt to right a wrong, honor an agreement, or settle a dispute. If there is a victim, they get compensation, and the person who is the cause of the wrong pays, this being a civilized form of, or legal alternative to, revenge. If it is a matter of equity, there often exists a pie for division and a process of civil law which allocates it. In public law the objective is usually deterrence and retribution.
An action in criminal law does not necessarily preclude an action in civil law in common law countries, and may provide a mechanism for compensation to the victims of crime. Such a situation occurred when O.J. Simpson was ordered to pay damages for wrongful death after being acquitted of the criminal charge of murder.
Criminal law involves the prosecution by the state of wrongful acts, which are considered to be so serious that they are a breach of the sovereign’s peace (and cannot be deterred or remedied by mere lawsuits between private parties). Generally, crimes can result in incarceration, but torts (see below) cannot. The majority of the crimes committed in the United States are prosecuted and punished at the state level. Federal criminal law focuses on areas specifically relevant to the federal government like evading payment of federal income tax, mail theft, or physical attacks on federal officials, as well as interstate crimes like drug trafficking and wire fraud.
All states have somewhat similar laws in regard to “higher crimes” (or felonies), such as murder and rape, although penalties for these crimes may vary from state to state. Capital punishment is permitted in some states but not others. Three strikes laws in certain states impose harsh penalties on repeat offenders.
Some states distinguish between two levels: felonies and misdemeanors (minor crimes). Generally, most felony convictions result in lengthy prison sentences as well as subsequent probation, large fines, and orders to pay restitution directly to victims, while misdemeanors may lead to a year or less in jail and a substantial fine. To simplify the prosecution of traffic violations and other relatively minor crimes, some states have added a third level, infractions. These may result in fines and sometimes the loss of one’s driver’s license, but no jail time.
For public welfare offenses where the state is punishing merely risky (as opposed to injurious) behavior, there is significant diversity across the various states. For example, punishments for drunk driving varied greatly prior to 1990. State laws dealing with drug crimes still vary widely, with some states treating possession of small amounts of drugs as a misdemeanor offense or as a medical issue and others categorizing the same offense as a serious felony.
The law of most of the states is based on the common law of England; the notable exception is Louisiana. Much of Louisiana law is derived from French and Spanish civil law, which stems from its history as a colony of both France and Spain. Puerto Rico, a former Spanish colony, is also a civil law jurisdiction of the United States. However, the criminal law of both jurisdictions has been necessarily modified by common law influences and the supremacy of the federal Constitution. Many states in the southwest that were originally Mexican territory have inherited several unique features from the civil law that governed when they were part of Mexico. These states include Arizona, California, Nevada, New Mexico, and Texas.
California Penal Code
The California Penal Code, the codification of criminal law and procedure in the U.S. state of California.
14.2.4: Basic Judicial Requirements
In the judiciary system each position within the federal, state and local government has different types of requirements.
Learning Objective
Identify the type and structure of courts that make up the U.S. federal court system
Key Points
- In federal legislation, regulations governing the “courts of the United States” only refer to the courts of the United States government, and not the courts of the individual states.
- State courts may have different names and organization; trial courts may be called courts of common pleas and appellate courts “superior courts” or commonwealth courts.
- The U.S. federal court system hears cases involving litigants from two or more states, violations of federal laws, treaties, and the Constitution, admiralty, bankruptcy, and related issues. In practice, about 80% of the cases are civil and 20% are criminal.
- Federal courts may not decide every case that happens to come before them. In order for a district court to entertain a lawsuit, Congress must first grant the court subject matter jurisdiction over the type of dispute in question.
- In addition to their original jurisdiction, the district courts have appellate jurisdiction over a very limited class of judgments, orders, and decrees.
- A final ruling by a district court in either a civil or a criminal case can be appealed to the United States court of appeals in the federal judicial circuit in which the district court is located, except that some district court rulings involving patents and certain other specialized matters must be appealed instead to the United States Court of Appeals for the Federal Circuit.
Key Terms
- appeal
-
(a) An application for the removal of a cause or suit from an inferior to a superior judge or court for re-examination or review. (b) The mode of proceeding by which such removal is effected. (c) The right of appeal. (d) An accusation; a process which formerly might be instituted by one private person against another for some heinous crime demanding punishment for the particular injury suffered, rather than for the offense against the public. (e) An accusation of a felon at common law by one of his accomplices, which accomplice was then called an approver.
- original jurisdiction
-
the power of a court to hear a case for the first time
Background
In federal legislation, regulations governing the “courts of the United States” only refer to the courts of the United States government, and not the courts of the individual states. Because of the federalist underpinnings of the division between federal and state governments, the various state court systems are free to operate in ways that vary widely from those of the federal government, and from one another. In practice, however, every state has adopted a division of its judiciary into at least two levels, and almost every state has three levels, with trial courts hearing cases which may be reviewed by appellate courts, and finally by a state supreme court. A few states have two separate supreme courts, with one having authority over civil matters and the other reviewing criminal cases. State courts may have different names and organization; trial courts may be called “courts of common pleas” and appellate courts “superior courts” or “commonwealth courts.” State courts hear about 98% of litigation; most states have special jurisdiction courts, which typically handle minor disputes such as traffic citations, and general jurisdiction courts, which handle more serious disputes.
The U.S. federal court system hears cases involving litigants from two or more states, violations of federal laws, treaties, and the Constitution, admiralty, bankruptcy, and related issues. In practice, about 80% of the cases are civil and 20% are criminal. The civil cases often involve civil rights, patents, and Social Security while the criminal cases involve tax fraud, robbery, counterfeiting, and drug crimes. The trial courts are U.S. district courts, followed by United States courts of appeals and then the Supreme Court of the United States. The judicial system, whether state or federal, begins with a court of first instance, whose work may be reviewed by an appellate court, and then ends at the court of last resort, which may review the work of the lower courts.
Jurisdiction
Unlike some state courts, the power of federal courts to hear cases and controversies is strictly limited. Federal courts may not decide every case that happens to come before them. In order for a district court to entertain a lawsuit, Congress must first grant the court subject matter jurisdiction over the type of dispute in question. Though Congress may theoretically extend the federal courts’ subject matter jurisdiction to the outer limits described in Article III of the Constitution, it has always chosen to give the courts a somewhat narrower power.
For most of these cases, the jurisdiction of the federal district courts is concurrent with that of the state courts. In other words, a plaintiff can choose to bring these cases in either a federal district court or a state court. Congress has established a procedure whereby a party, typically the defendant, can remove a case from state court to federal court, provided that the federal court also has original jurisdiction over the matter. For patent and copyright infringement disputes and prosecutions for federal crimes, however, the jurisdiction of the district courts is exclusive of that of the state courts.
US Court of Appeals and District Court Map
Courts of Appeals, with the exception of one, are divided into geographic regions known as circuits that hear appeals from district courts within the region.
Attorneys
In order to represent a party in a case in a district court, a person must be an attorney at law and generally must be admitted to the bar of that particular court. The United States usually does not have a separate bar examination for federal practice (except with respect to patent practice before the United States Patent and Trademark Office). Admission to the bar of a district court is generally granted as a matter of course to any attorney who is admitted to practice law in the state where the district court sits. Many district courts also allow an attorney who has been admitted and remains an active member in good standing of any state, territory or the District of Columbia bar to become a member. The attorney submits his or her application with a fee and takes the oath of admission. Local practice varies as to whether the oath is given in writing or in open court before a judge of the district.
Several district courts require attorneys seeking admission to their bars to take an additional bar examination on federal law, including the following: the Southern District of Ohio, the Northern District of Florida, and the District of Puerto Rico.
Appeals
Generally, a final ruling by a district court in either a civil or a criminal case can be appealed to the United States court of appeals in the federal judicial circuit in which the district court is located, except that some district court rulings involving patents and certain other specialized matters must be appealed instead to the United States Court of Appeals for the Federal Circuit, and in a very few cases the appeal may be taken directly to the United States Supreme Court.
14.3: The Federal Court System
14.3.1: U.S. District Courts
The 94 U.S. district courts oversee civil and criminal cases within certain geographic or subject areas.
Learning Objective
Identify the function and characteristics of U.S. district courts
Key Points
- The U.S. district courts are the trial courts within the U.S. federal system that primarily address civil and criminal cases.
- Each state and territory has at least one district court responsible for hearing cases that arise within a given geographic area.
- Civil cases are legal disputes between two or more parties while criminal cases involve prosecution by a U.S. attorney. Civil and criminal cases also differ in how they are conducted.
- Every district court is associated with a bankruptcy court that usually provides relief for honest debtors and ensures that creditors are paid back in a timely manner.
- Two special trial courts lie outside of the district court system: the Court of International Trade and the United States Court of Federal Claims.
Key Terms
- prosecutor
-
A lawyer who decides whether to charge a person with a crime and tries to prove in court that the person is guilty.
- plaintiff
-
A party bringing a suit in civil law against a defendant; accusers.
- defendant
-
In civil proceedings, the party responding to the complaint; one who is sued and called upon to make satisfaction for a wrong complained of by another.
Introduction
The United States district courts are the trial courts within the U.S. federal court system. There are a total of ninety-four district courts throughout the U.S. states and territories. Every state and territory has at least one district court that is responsible for hearing cases that arise within that geographic area. With the exception of the territorial courts in Guam, the Northern Mariana Islands, and the Virgin Islands, federal district judges are appointed for life.
District Court of Rhode Island Seal
Each state has at least one district court that is responsible for overseeing civil and criminal cases in a given region.
Criminal and Civil Cases
The U.S. district courts are responsible for holding general trials for civil and criminal cases. Civil cases are legal disputes between two or more parties; they officially begin when a plaintiff files a complaint with the court. The complaint explains the plaintiff’s injury, how the defendant caused the injury, and requests the court’s assistance in addressing the injury. A copy of the complaint is “served” to the defendant who must, subsequently, appear for a trial. Plaintiffs may ask the court to order the defendant to stop the conduct that is causing the injury or may seek monetary compensation for the injury.
Meanwhile, criminal cases involve a U.S. attorney (the prosecutor), a grand jury, and a defendant. Defendants may also have their own private attorney to represent them or a court-appointed attorney if they are unable to afford counsel. The purpose of the grand jury is to review evidence presented by the prosecutor to decide whether a defendant should stand trial. Criminal cases involve an arraignment hearing in which defendants enter a plea to the charges brought against them by the U.S. attorney. Most defendants will plead guilty at this point instead of going to trial. A judge may issue a sentence at this time or will schedule a hearing at a later point to determine the sentence. Those defendants who plead not guilty will be scheduled to receive a later trial.
Bankruptcy Court
A bankruptcy court is associated with each U.S. district court. Bankruptcy cases primarily address two concerns. First, they may provide an honest debtor with a “fresh start” by relieving the debtor of most debts. Second, bankruptcy cases ensure that creditors are repaid in a timely manner commensurate with what property the debtor has available for payment.
Other Trial Courts
While district courts are the primary trial courts within the U.S., two special trial courts exist outside of the district court system. The Court of International Trade has jurisdiction over cases involving international trade and customs issues. Meanwhile, the United States Court of Federal Claims oversees claims against the United States. These claims include money damages against the U.S., unlawful takings of private property by the federal government, and disputes over federal contracts. Both courts exercise nationwide jurisdiction, in contrast to the geographically limited jurisdiction of the district courts.
14.3.2: U.S. Court of Appeals
The U.S. courts of appeals review the decisions made in trial courts and often serve as the final arbiter in federal cases.
Learning Objective
Discuss the role of the U.S. federal courts of appeals in the judiciary
Key Points
- The U.S. federal courts of appeals hear appeals from district courts as well as appeals from decisions of federal administrative agencies.
- Courts of appeals are divided into thirteen circuits, twelve of which serve specific geographic regions. The thirteenth circuit hears appeals from the Court of International Trade, the U.S. Court of Federal Claims, the Patent and Trademark Office, and others.
- In contrast to trial courts, appellate court decisions are made by a panel of three judges who do not consider any additional evidence beyond what was presented in trial court or any additional witness testimony.
- A litigant who files an appeal, known as an appellant, and a litigant defending against an appeal, known as an appellee, present their legal arguments in documents called briefs. Oral arguments may also be made to argue for or against an appeal.
- The U.S. courts of appeals are among the most powerful courts since they establish legal precedent and serve as the final arbiter in more cases than the Supreme Court.
Key Terms
- trial court
-
a tribunal established for the administration of justice, in which disputing parties come together to present information before a jury or judge that will decide the outcome of the case
- ruling
-
An order or a decision on a point of law from someone in authority.
- appeal
-
(a) An application for the removal of a cause or suit from an inferior to a superior judge or court for re-examination or review. (b) The mode of proceeding by which such removal is effected. (c) The right of appeal. (d) An accusation; a process which formerly might be instituted by one private person against another for some heinous crime demanding punishment for the particular injury suffered, rather than for the offense against the public. (e) An accusation of a felon at common law by one of his accomplices, which accomplice was then called an approver.
- litigant
-
A party suing or being sued in a lawsuit, or otherwise calling upon the judicial process to determine the outcome of a suit.
- brief
-
memorandum of points of fact or of law for use in conducting a case
The U.S. federal courts of appeals, also known as appellate courts or circuit courts, hear appeals from district courts as well as appeals from decisions of federal administrative agencies. There are thirteen courts of appeals, twelve of which are based on geographic districts called circuits. These twelve circuit courts decide whether or not the district courts within their geographic jurisdiction have made an error in conducting a trial. The thirteenth court of appeals hears appeals from the Court of International Trade, the U.S. Court of Federal Claims, the Patent and Trademark Office, and others.
US Court of Appeals and District Court Map
Courts of Appeals, with the exception of one, are divided into geographic regions known as circuits that hear appeals from district courts within the region.
Every federal court litigant has the right to appeal an unfavorable ruling from the district court by requesting a hearing in a circuit court. However, only about 17% of eligible litigants do so because of the expense of appealing. In addition, few appealed cases are heard in the higher courts. Those that are, are rarely reversed.
Procedure
The procedure within appellate courts diverges widely from that within district courts. First, a litigant who files an appeal, known as an appellant, must show that the trial court or an administrative agency made a legal error that affected the decision in the case. Appeals are then passed to a panel of three judges working together to make a decision. These judges base their decision on the record of the case established by the trial court or agency. The appellant presents a document called a brief, which lays out the legal arguments to persuade the judges that the trial court made an error. Meanwhile, the party defending against the appeal, known as the appellee, also presents a brief presenting reasons the trial court decision is correct or why an error made by the trial court is not significant enough to reverse the decision. The appellate judges do not receive any additional evidence or hear witnesses.
While some cases are decided on the basis of written briefs alone, other cases move on to an oral argument stage. Oral argument consists of a structured discussion between the appellate lawyers and the panel of judges on the legal principles in dispute. Each side is given a short time to present their arguments to the judges. The court of appeals decision is usually the final word in the case unless it sends the case back to the trial court for additional proceedings. A litigant who loses in the federal courts of appeals may also ask the Supreme Court to review the case.
Legal Precedent
The U.S. courts of appeals are among the most powerful and influential courts in the United States. Decisions made within courts of appeals, unlike those of the lower district courts, establish binding precedents. After a ruling has been made, other federal courts in the circuit must follow the appellate court’s guidance in similar cases, even if the trial judge thinks that the case should be handled differently. In addition, the courts of appeals often serve as the final arbiter in federal cases, since the Supreme Court hears less than 100 of the over 10,000 cases sent to it annually.
14.3.3: The Supreme Court
The U.S. Supreme Court is the highest tribunal within the U.S. and most often hears cases concerning the Constitution or federal law.
Learning Objective
Explain the composition and significance of the Supreme Court, as well as its methodology for deciding which cases to hear
Key Points
- The Supreme Court is currently composed of a Chief Justice and eight associate justices nominated by the President and confirmed by the U.S. Senate.
- The justices are appointed for life and do not officially represent any political party, although they are often informally categorized as conservative, moderate, or liberal in their judicial outlook.
- Article III and the Eleventh Amendment of the Constitution establish the Supreme Court as the first court to hear certain kinds of cases.
- Most cases that reach the Supreme Court are appeals from lower courts that begin as a writ of certiorari.
- Some reasons cases are granted cert are that they present a conflict in the interpretation of federal law or the Constitution, they represent an extreme departure from the normal course of judicial proceedings, or if a decision conflicts with a previous decision of the Supreme Court.
- Cases are decided by majority rule in which at least five of the nine justices have to agree.
Key Terms
- appeal
-
(a) An application for the removal of a cause or suit from an inferior to a superior judge or court for re-examination or review. (b) The mode of proceeding by which such removal is effected. (c) The right of appeal. (d) An accusation; a process which formerly might be instituted by one private person against another for some heinous crime demanding punishment for the particular injury suffered, rather than for the offense against the public. (e) An accusation of a felon at common law by one of his accomplices, which accomplice was then called an approver.
- tribunal
-
An assembly including one or more judges to conduct judicial business; a court of law.
- amicus curiae
-
someone who is not a party to a case who offers information that bears on the case but that has not been solicited by any of the parties to assist a court
Definition and Composition
The U.S. Supreme Court is the highest tribunal within the U.S. and hears a limited number of cases per year associated with the Constitution or laws of the United States. It is currently composed of the Chief Justice of the United States and eight associate justices. The current Chief Justice is John Roberts; the eight associate justices are Anthony Kennedy, Clarence Thomas, Ruth Bader Ginsburg, Stephen Breyer, Samuel Alito, Sonia Sotomayor, and Elena Kagan. Justice Antonin Scalia died on Feb. 13, 2016. President Obama nominated Merrick Garland as his replacement on March 16, 2016, but the U.S. Senate did not initiate the process for Garland’s approval. Majority Leader Mitch McConnell stated that the Senate would not approve a Scalia replacement until after a new president took office in January 2017.
The U.S. Supreme Court
The United States Supreme Court, the highest court in the United States, in 2010. Top row (left to right): Associate Justice Sonia Sotomayor, Associate Justice Stephen G. Breyer, Associate Justice Samuel A. Alito, and Associate Justice Elena Kagan. Bottom row (left to right): Associate Justice Clarence Thomas, Associate Justice Antonin Scalia, Chief Justice John G. Roberts, Associate Justice Anthony Kennedy, and Associate Justice Ruth Bader Ginsburg.
The Justices
The Supreme Court justices are nominated by the President and confirmed by the U.S. Senate. Justices have life tenure unless they resign, retire, or are removed after impeachment. Although the justices do not represent or receive official endorsements from political parties, they are usually informally labeled as judicial conservatives, moderates, or liberals in their legal outlook.
Article III of the United States Constitution leaves it to Congress to fix the number of justices. Initially, the Judiciary Act of 1789 called for the appointment of six justices. In 1866, at the behest of Chief Justice Chase, Congress passed an act providing that the next three justices to retire would not be replaced, which would thin the bench to seven justices by attrition; consequently, one seat was removed in 1866 and a second in 1867. In 1869, however, the Circuit Judges Act returned the number of justices to nine, where it has since remained.
How Cases Reach the Supreme Court
The Supreme Court is the first court to hear certain kinds of cases in accordance with both Article III and the Eleventh Amendment of the Constitution. These cases include those between the United States and one of the states, those between two or more states, those brought by one state against citizens of another state or a foreign country, and those involving foreign ambassadors or other ministers. However, such cases make up only about 1% of the Supreme Court’s docket. Most cases reach the Supreme Court as appeals from civil and criminal cases that have been decided by state and lower federal courts.
Cases that come to the Supreme Court as appeals begin with a petition for a writ of certiorari, a request that the Supreme Court review the case. The Supreme Court will review the case if four of the nine justices agree to “grant cert.” This standard for acceptance is known as the Rule of Four. Cases that are granted cert must be chosen for “compelling reasons,” as outlined in the Court’s Rule 10. These reasons include that the case presents a conflict in the interpretation of federal law or the Constitution, that it raises an important question about federal law, or that it represents an extreme departure from the normal course of judicial proceedings. The Supreme Court can also grant cert if the decision made in a lower court conflicts with a previous decision of the Supreme Court. If the Supreme Court does not grant cert, this simply means that it has decided not to review the case.
To manage the high volume of cert petitions received by the Court each year, the Court employs an internal case management tool known as the “cert pool.” Each year, the Supreme Court receives thousands of petitions for certiorari; in 2001 the number stood at approximately 7,500, and it had risen to 8,241 by October Term 2007. The Court will ultimately grant approximately 80 to 100 of these petitions, in accordance with the Rule of Four.
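These figures give a rough sense of how selective certiorari review is. Taking the roughly 80 to 100 grants against the 8,241 petitions of October Term 2007 as representative, the grant rate works out to about 80 ÷ 8,241 ≈ 1.0% to 100 ÷ 8,241 ≈ 1.2%, or roughly one petition in a hundred.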
Procedure
Once a case is granted cert, the lawyers on each side file a brief presenting their arguments. With the permission of the Court, others with a stake in the outcome of the case may also file an amicus curiae brief in support of one of the parties. After the briefs are reviewed, the justices hear oral arguments from both parties. During oral argument the justices may ask questions, raise new issues, or probe arguments made in the briefs.
The justices meet in conference some time after oral arguments to vote on a decision. The justices vote in order of seniority, beginning with the Chief Justice. Cases are decided by majority rule, meaning at least five of the nine justices must agree; bargaining and compromise are often needed to create a majority coalition. Once a decision has been made, one of the justices writes the majority opinion of the Court. This justice is chosen by the Chief Justice if the Chief Justice is in the majority, or otherwise by the justice in the majority who has served longest on the Court. The Supreme Court cannot directly enforce its rulings; instead, it relies on respect for the Constitution and the law for adherence to its decisions.
14.4: Judicial Review and Policy Making
14.4.1: The Impact of Court Decisions
Court decisions can have a very strong influence on current and future laws, policies, and practices.
Learning Objective
Identify the impacts of court decisions on current policies and practices.
Key Points
- Court decisions can have an important impact on policy, law, and legislative or executive action; different courts can also have an influence on each other.
- In the U.S. legal system, a precedent is a principle or rule established in a previous court decision that is either binding on, or persuasive for, a court or other tribunal deciding subsequent cases with similar issues or facts.
- Common law precedent is a third kind of law, on equal footing with statutory law (statutes and codes enacted by legislative bodies), and regulatory law (regulations promulgated by executive branch agencies).
- Stare decisis is a legal principle by which judges are obliged to respect the precedent established by prior court decisions.
- Vertical precedent is the application of the doctrine of stare decisis from a superior court to an inferior court; horizontal precedent, on the other hand, is the application of the doctrine across courts of similar or coordinate level.
Key Term
- privatization
-
the government outsourcing of services or functions to private firms
Privatization is the government outsourcing of services or functions to private firms. These services often include revenue collection, law enforcement, and prison management.
In competitive industries with well-informed consumers, privatization has consistently improved efficiency; the more competitive the industry, the greater the improvement in output, profitability, and efficiency. Such efficiency gains mean a one-off increase in GDP, but improved incentives to innovate and reduce costs also tend to raise the rate of economic growth. Although there are typically costs associated with these efficiency gains, many economists argue that these can be addressed by appropriate government support through redistribution and perhaps retraining.
Capitol Hill
Capitol Hill, where bills become laws.
Studies show that private market actors can deliver many goods or services more efficiently than governments because of free market competition. Over time this tends to lead to lower prices, improved quality, more choices, less corruption, less red tape, and quicker delivery. Many proponents do not argue that everything should be privatized; market failures and natural monopolies could be problematic.
Opponents of certain privatizations believe that certain public goods and services should remain primarily in the hands of government in order to ensure that everyone in society has access to them. There is a positive externality when the government provides society at large with public goods and services such as defense and disease control. Some national constitutions in effect define their governments’ core businesses as the provision of such things as justice, tranquility, defense, and general welfare; these governments’ direct provision of security, stability, and safety is intended to serve the common good with a long-term perspective. As for natural monopolies, opponents of privatization claim that they are not subject to fair competition and are better administered by the state. Likewise, private goods and services should remain in the hands of the private sector.
14.4.2: The Power of Judicial Review
Judicial review is the doctrine under which legislative and executive actions are subject to review by the judiciary.
Learning Objective
Explain the significance of judicial review in the history of the Supreme Court
Key Points
- Judicial review is an example of the separation of powers in a modern governmental system.
- Common law judges are seen as sources of law, capable of creating new legal rules and rejecting legal rules that are no longer valid. In the civil law tradition, judges are seen as those who apply the law, with no power to create or destroy legal rules.
- In the United States, judicial review is considered a key check on the powers of the other two branches of government by the judiciary.
Key Term
- doctrine
-
A belief or tenet, especially about philosophical or theological matters.
Judicial review is the doctrine under which legislative and executive actions are subject to review by the judiciary. Courts with judicial review power must annul acts of the state when they find them incompatible with a higher authority. Judicial review is an example of the separation of powers in a modern governmental system. This principle is interpreted differently in different jurisdictions, so the procedure and scope of judicial review differ from state to state.
Judicial review can be understood in the context of two distinct—but parallel—legal systems, civil law and common law, and also by two distinct theories on democracy and how a government should be set up, legislative supremacy and separation of powers. Common law judges are seen as sources of law, capable of creating new legal rules and rejecting legal rules that are no longer valid. In the civil law tradition, judges are seen as those who apply the law, with no power to create or destroy legal rules.
The separation of powers is another theory about how a democratic society’s government should be organized. First articulated by the French philosopher Charles de Secondat, Baron de Montesquieu, separation of powers was later institutionalized in the United States by the Supreme Court ruling in Marbury v. Madison. It is based on the idea that no branch of government should be more powerful than any other and that each branch should have a check on the powers of the others, thus creating a balance of power among all branches of government. The key to this idea is checks and balances. In the United States, judicial review is considered a key check by the judiciary on the powers of the other two branches of government.
14.4.3: Judicial Activism and Restraint
Judicial activism describes rulings suspected of being based on personal or political considerations; judicial restraint encourages judges to limit the exercise of their own power.
Learning Objective
Compare and contrast judicial activism and judicial restraint
Key Points
- Judicial activism describes judicial rulings suspected of being based on personal or political considerations rather than on existing law.
- Judicial restraint encourages judges to limit the exercise of their own power. It asserts that judges should hesitate to strike down laws unless they are obviously unconstitutional, though what counts as obviously unconstitutional is itself a matter of some debate.
- Detractors of judicial activism argue that it usurps the power of elected branches of government or appointed agencies, damaging the rule of law and democracy. Defenders say that in many cases it is a legitimate form of judicial review and that interpretations of the law must change with the times.
Key Term
- statutory
-
Of, relating to, enacted or regulated by a statute.
Judicial activism describes judicial rulings suspected of being based on personal or political considerations rather than on existing law. The definition of judicial activism, and which specific decisions count as activist, is a controversial political issue. The phrase is generally traced back to a comment by Thomas Jefferson referring to the despotic behavior of Federalist federal judges, in particular John Marshall. The question of judicial activism is closely related to constitutional interpretation, statutory construction, and the separation of powers.
Detractors of judicial activism argue that it usurps the power of elected branches of government or appointed agencies, damaging the rule of law and democracy. Defenders say that in many cases it is a legitimate form of judicial review and that interpretations of the law must change with the times.
Judicial restraint is a theory of judicial interpretation that encourages judges to limit the exercise of their own power. It asserts that judges should hesitate to strike down laws unless they are obviously unconstitutional, though what counts as obviously unconstitutional is itself a matter of some debate.
In deciding questions of constitutional law, judicially restrained jurists go to great lengths to defer to the legislature. Former Associate Justice Oliver Wendell Holmes Jr. is considered one of the first major advocates of the philosophy. Former Associate Justice Felix Frankfurter, a Democrat appointed by Franklin Roosevelt, is generally seen as the model of judicial restraint.
Judicially restrained judges respect stare decisis, the principle of upholding established precedent handed down by past judges. When Chief Justice Rehnquist overturned some of the precedents of the Warren Court, Time magazine said he was not following the theory of judicial restraint, even though Rehnquist was acknowledged as a more conservative advocate of the philosophy.
Felix Frankfurter
Former Associate Justice Felix Frankfurter, one of the first major advocates of deference to the legislature.
14.4.4: The Supreme Court as Policy Makers
The Constitution does not explicitly grant the Supreme Court the power of judicial review, but the Court’s power to overturn laws and executive actions it deems unlawful or unconstitutional is well established.
Learning Objective
Discuss the constitutional powers and authority of the Supreme Court and its role in developing policies
Key Points
- The Supreme Court first established its power to declare laws unconstitutional in Marbury v. Madison (1803), consummating the system of checks and balances, allowing judges to have the last word on allocation of authority among the three branches of the federal government.
- The Supreme Court cannot directly enforce its rulings, but it relies on respect for the Constitution and for the law for adherence to its judgments.
- Through its power of judicial review, the Supreme Court has defined the scope and nature of the powers and separation between the legislative and executive branches of the federal government.
Key Term
- impeachment
-
the act of impeaching a public official, either elected or appointed, before a tribunal charged with determining the facts of the matter.
A policy is a principle or rule used to guide decisions and achieve rational outcomes. The policy cycle is a tool for analyzing the development of a policy item; a standardized version includes agenda setting, policy formulation, adoption, implementation, and evaluation.
The Constitution does not explicitly grant the Supreme Court the power of judicial review, but the power of the Court to overturn laws and executive actions it deems unlawful or unconstitutional is well established. Many of the Founding Fathers accepted the notion of judicial review. The Supreme Court first established its power to declare laws unconstitutional in Marbury v. Madison (1803), consummating the system of checks and balances. This power gives judges the last word on the allocation of authority among the three branches of the federal government, and with it the ability to set bounds on their own authority as well as on their immunity from outside checks and balances.
Supreme Court
The Supreme Court holds the power to overturn laws and executive actions it deems unlawful or unconstitutional.
The Supreme Court cannot directly enforce its rulings, but it relies on respect for the Constitution and for the law for adherence to its judgments. One notable instance came in 1832, when the state of Georgia ignored the Supreme Court’s decision in Worcester v. Georgia. Some state governments in the south also resisted the desegregation of public schools after the 1954 judgment Brown v. Board of Education. More recently, many feared that President Nixon would refuse to comply with the Court’s order in United States v. Nixon (1974) to surrender the Watergate tapes. Nixon ultimately complied with the Supreme Court’s ruling.
Some argue that the Supreme Court is the most separated and least checked of all branches of government. Justices are not required to stand for election, by virtue of their tenure during good behavior, and their pay may not be diminished while they hold their position. Though justices are subject to impeachment, only one has ever been impeached and no Supreme Court Justice has been removed from office. Supreme Court decisions have been purposefully overridden by constitutional amendment in only four instances: the Eleventh Amendment overturned Chisholm v. Georgia (1793), the 13th and 14th Amendments in effect overturned Dred Scott v. Sandford (1857), the 16th Amendment reversed Pollock v. Farmers’ Loan and Trust Co. (1895), and the 26th Amendment overturned some portions of Oregon v. Mitchell (1970). When the Court rules on matters involving the interpretation of laws rather than of the Constitution, simple legislative action can reverse its decisions. The Supreme Court is not immune from political and institutional restraints: lower federal courts and state courts sometimes resist doctrinal innovations, as do law enforcement officials.
On the other hand, through its power of judicial review, the Supreme Court has defined the scope and nature of the powers and separation between the legislative and executive branches of the federal government. The Court’s decisions can also impose limitations on the scope of Executive authority, as in Humphrey’s Executor v. United States (1935), the Steel Seizure Case (1952) and United States v. Nixon (1974).
14.4.5: Two Judicial Revolutions: The Rehnquist Court and the Roberts Court
The Rehnquist Court favored a federalism that emphasized the powers of the states while issuing some socially liberal rulings; the Roberts Court is generally considered more conservative.
Learning Objective
Compare and contrast the Rehnquist Court and the Roberts Court
Key Points
- Rehnquist favored a conception of federalism that emphasized the Tenth Amendment’s reservation of powers to the states. Under this view of federalism, the Supreme Court, for the first time since the 1930s, struck down an Act of Congress as exceeding federal power under the Commerce Clause.
- In 1999, Rehnquist became the second Chief Justice to preside over a presidential impeachment trial, during the proceedings against President Bill Clinton.
- One of the Court’s major developments involved reinforcing and extending the doctrine of sovereign immunity, which limits the ability of Congress to subject non-consenting states to lawsuits by individual citizens seeking money damages.
- The Roberts Court refers to the Supreme Court of the United States since 2005, under the leadership of Chief Justice John G. Roberts. It is generally considered more conservative than the preceding Rehnquist Court, as a result of the retirement of moderate Justice Sandra Day O’Connor.
- In its first five years, the Roberts Court issued major rulings on gun control, affirmative action, campaign finance regulation, abortion, capital punishment, and criminal sentencing.
Key Term
- certiorari
-
A grant of the right of an appeal to be heard by an appellate court where that court has discretion to choose which appeals it will hear.
William Rehnquist served as an Associate Justice on the Supreme Court of the United States, and later as the 16th Chief Justice of the United States. When Chief Justice Warren Burger retired in 1986, President Ronald Reagan nominated Rehnquist to fill the position. The Senate confirmed his appointment by a 65-33 vote and he assumed office on September 26, 1986.
William Rehnquist
Former Chief Justice William Rehnquist
Considered a conservative, Rehnquist favored a conception of federalism that emphasized the Tenth Amendment’s reservation of powers to the states. Under this view of federalism, the Supreme Court, for the first time since the 1930s, struck down an Act of Congress as exceeding federal power under the Commerce Clause. He won over his fellow justices with his easygoing, humorous, and unpretentious personality. Rehnquist also tightened up the justices’ conferences, keeping them from running too long or going off track. He also successfully lobbied Congress in 1988 to give the Court control of its own docket, cutting back mandatory appeals and certiorari grants in general.
In 1999, Rehnquist became the second Chief Justice to preside over a presidential impeachment trial, during the proceedings against President Bill Clinton. In 2000, Rehnquist wrote a concurring opinion in Bush v. Gore, the case that effectively ended the presidential election controversy in Florida, in which the Court held that the Equal Protection Clause barred a standardless manual recount of the votes as ordered by the Florida Supreme Court.
The Rehnquist Court’s congruence and proportionality standard made it easier to revive older precedents preventing Congress from going too far in enforcing equal protection of the laws. One of the Court’s major developments involved reinforcing and extending the doctrine of sovereign immunity, which limits the ability of Congress to subject non-consenting states to lawsuits by individual citizens seeking money damages.
Rehnquist presided as Chief Justice for nearly 19 years, making him the fourth-longest-serving Chief Justice after John Marshall, Roger Taney and Melville Fuller. He is the eighth longest-serving justice in Supreme Court history.
The Roberts Court refers to the Supreme Court of the United States since 2005, under the leadership of Chief Justice John G. Roberts. It is generally considered more conservative than the preceding Rehnquist Court, as a result of the retirement of moderate Justice Sandra Day O’Connor and the subsequent confirmation of the more conservative Justice Samuel Alito in her place.
After the death of Chief Justice Rehnquist, Roberts was nominated by President George W. Bush, who had previously nominated him to replace Sandra Day O’Connor. The Senate confirmed his nomination by a vote of 78-22. Roberts took the Constitutional oath of office, administered by senior Associate Justice John Paul Stevens at the White House, on September 29, 2005, almost immediately after his confirmation. On October 3, he took the judicial oath provided for by the Judiciary Act of 1789, prior to the first oral arguments of the 2005 term.
In its first five years, the Roberts Court issued major rulings on gun control, affirmative action, campaign finance regulation, abortion, capital punishment, and criminal sentencing.
14.5: Federal Judicial Appointments
14.5.1: The Nomination Process
It is the president’s responsibility to nominate federal judges and the Senate’s responsibility to approve or reject the nomination.
Learning Objective
Explain how the nomination process reflects the system of checks and balances in the Constitution
Key Points
- The U.S. Constitution establishes “checks and balances” among the powers of the executive, legislative and judiciary branches. The nomination process of federal judges is an important part of this system.
- The Appointments Clause of the United States Constitution empowers the president to appoint certain public officials with the “advice and consent” of the U.S. Senate.
- Certain factors influence who the president chooses to nominate for the Supreme Court: composition of the Senate, timing of the election cycle, public approval rate of the president, and the strength of interest groups.
- After the president makes a nomination, the Senate Judiciary Committee studies the nomination and makes a recommendation to the Senate.
Key Terms
- veto
-
A political right to disapprove of (and thereby stop) the process of a decision, a law, etc.
- judiciary
-
The court system and judges considered collectively, the judicial branch of government.
- Senate Judiciary Committee
-
A standing committee of the US Senate, the 18-member committee is charged with conducting hearings prior to the Senate votes on confirmation of federal judges (including Supreme Court justices) nominated by the President.
Checks and Balances
One of the theoretical pillars of the United States Constitution is the idea of checks and balances among the powers of the executive, legislative, and judiciary branches. For example, while the legislature (Congress) has the power to create law, the executive (the president) can veto any legislation, an act that Congress can in turn override. The president nominates judges to the nation’s highest judiciary authority (the Supreme Court), but the Senate must approve those nominees. The Supreme Court, meanwhile, has the power to invalidate as unconstitutional any law passed by Congress. Thus, the nomination and appointment process for federal judges serves as an important component of the system of checks and balances.
The Appointments Clause of the Constitution
The president has the power to nominate candidates for the Supreme Court and other federal judgeships under the Appointments Clause of the United States Constitution, which empowers the president to appoint certain public officials with the “advice and consent” of the U.S. Senate. Acts of Congress have established 13 courts of appeals (also called “circuit courts”) with appellate jurisdiction over different regions of the country. Judges appointed to these courts are federal judges and require approval from the Senate.
The Nomination Process
The president nominates all federal judges, who must then be approved by the Senate. The appointment of judges to the lower federal courts is important because almost all federal cases end there. Through lower federal judicial appointments, a president “has the opportunity to influence the course of national affairs for a quarter of a century after he leaves office.” Once in office, federal judges can be removed only by impeachment and conviction. Judges may time their departures so that their replacements are appointed by a president who shares their views; for example, Supreme Court Justice Souter retired in 2009 and Justice Stevens in 2010, enabling President Obama to nominate, and the Democratic-controlled Senate to confirm, their successors. A recess appointment is the appointment, by the President of the United States, of a senior federal official while the U.S. Senate is in recess. To remain in effect, a recess appointment must be approved by the Senate by the end of the next session of Congress, or the position becomes vacant again; in current practice this means that a recess appointment must be approved by roughly the end of the next calendar year.
Chief Justice Roberts
John G. Roberts, Jr., Chief Justice of the United States. Federal judges, such as Supreme Court justices, must be nominated by the president and confirmed by the Senate.
Choosing Supreme Court Justices
In nominating Supreme Court justices, presidents seek to satisfy their political, policy and personal goals. They do not always succeed, as justices sometimes change their views over time or may surprise the president from the start. The following are some other factors that can influence a president’s choice of Supreme Court nominee:
- Senate composition: Whether the president’s party has a majority or a minority in the Senate is a factor. In 1990, when the Democrats had a majority, Republican President George H. W. Bush nominated the judicially experienced and reputedly ideologically moderate David H. Souter, who was easily approved.
- Timing: The closer to an upcoming presidential election the appointment occurs, the more necessary it is to appoint a highly qualified, noncontroversial figure acceptable to the Senate. Otherwise, senators have an incentive to stall until after the election.
- Public approval of the president: The higher the president’s approval ratings, the more nominating leeway the president possesses. However, even presidents riding a wave of popularity can fail to get their nominees past the Senate, as was the case with Richard Nixon and his failed nominations of Clement Haynsworth and G. Harrold Carswell in 1969 and 1970. So lacking were Carswell’s qualifications that a senator defended him by saying, “Even if he were mediocre, there are a lot of mediocre judges and people and lawyers. They are entitled to a little representation…and a little chance.”
- Interest groups: Nominees must be acceptable to interest groups that support the president. They also must be invulnerable to being depicted in ways that would significantly reduce their chances of Senate approval.
Nominations go to the Senate Judiciary Committee, which usually holds hearings. Whether senators should concern themselves with anything more than the nominee’s professional qualifications is often debated. Arguably, “nothing in the Constitution, historical experience, political practice, ethical norms, or statutory enactments prohibits senators from asking questions that reveal judicial nominees’ views on political and ideological issues.” The next step for the Judiciary Committee is to vote on whether or not to send the nomination to the Senate floor. If it reaches the floor, senators then can vote to confirm or reject the nomination, or filibuster so that a vote is delayed or does not take place. Fewer than half of recent nominees to the federal appeals courts have been confirmed.
14.5.2: The Confirmation Process
To be appointed as a federal judge, a nominee must be confirmed by the Senate after being questioned by the Senate Judiciary Committee.
Learning Objective
Explain the confirmation process for nominees to the U.S. Supreme Court
Key Points
- In modern times, the confirmation process has attracted considerable attention from special-interest groups, many of which lobby senators to confirm or to reject a nominee depending on whether the nominee’s track record aligns with the group’s views.
- The modern practice of the Committee questioning nominees on their judicial views began with the nomination of John Marshall Harlan II in 1955. The nomination came shortly after the Court handed down the landmark Brown v. Board of Education decision.
- A simple majority vote is required to confirm or to reject a nominee. Once the Committee reports out the nomination, the whole Senate considers it. Rejections are relatively uncommon; the Senate has explicitly rejected twelve Supreme Court nominees in its history.
- Once the Senate confirms the nomination by an affirmative vote, the President must prepare and sign a commission and have the Seal of the United States Department of Justice affixed to the document before the new Justice can take office.
Key Terms
- Senate Judiciary Committee
-
A standing committee of the US Senate, the 18-member committee is charged with conducting hearings prior to the Senate votes on confirmation of federal judges (including Supreme Court justices) nominated by the President.
- interest groups
-
The term interest group refers to virtually any voluntary association that seeks to publicly promote and create advantages for its cause. It applies to a vast array of diverse organizations. This includes corporations, charitable organizations, civil rights groups, neighborhood associations, and professional and trade associations.
- John Marshall Harlan II
-
An American jurist who served as an Associate Justice of the Supreme Court from 1955 to 1971.
Background
Federal judicial appointments must go through a confirmation process before they are approved. During this process, a committee called the Senate Judiciary Committee conducts hearings, questioning nominees to determine their suitability. At the close of confirmation hearings, the Committee votes on whether the nomination should go to the full Senate with a positive, negative, or neutral report.
In modern times, the confirmation process has attracted considerable attention from special-interest groups, many of which lobby senators to confirm or to reject a nominee depending on whether the nominee’s track record aligns with the group’s views.
The Committees and Confirmation Process
The Senate Judiciary Committee personally interviews nominees, a relatively recent practice that began in 1925. The modern practice of the Committee questioning every nominee on his or her judicial views began with the nomination of John Marshall Harlan II in 1955. The nomination came shortly after the Court handed down the landmark Brown v. Board of Education decision, and several Southern senators attempted to block Harlan’s confirmation, hence the decision to have him testify.
A simple majority vote is required to confirm or to reject a nominee. Once the Committee reports out the nomination, the whole Senate considers it. Rejections are relatively uncommon; the Senate has explicitly rejected only twelve Supreme Court nominees in its history.
It is also possible for the President to withdraw a nominee’s name before the actual confirmation vote occurs; this usually happens when the President feels that the nominee has little chance of being confirmed. Supreme Court nominations have prompted media speculation about whether the judge leans to the left, middle, or right. One indication of the politicized selection process is how much time each nominee spends being questioned under the glare of media coverage. Before 1925, nominees were never questioned; since 1955, every nominee has been required to appear before the Senate Judiciary Committee and answer questions. The number of hours spent being questioned has increased from single digits (before 1980) to double digits today.
The U.S. Supreme Court
The United States Supreme Court, the highest court in the United States, in 2010. Top row (left to right): Associate Justice Sonia Sotomayor, Associate Justice Stephen G. Breyer, Associate Justice Samuel A. Alito, and Associate Justice Elena Kagan. Bottom row (left to right): Associate Justice Clarence Thomas, Associate Justice Antonin Scalia, Chief Justice John G. Roberts, Associate Justice Anthony Kennedy, and Associate Justice Ruth Bader Ginsburg.
Once the Senate confirms the nomination by an affirmative vote, the President must prepare and sign a commission and have the Seal of the United States Department of Justice affixed to the document before the new Justice can take office. It is this act of the President that officially commences an individual Justice’s tenure.
Chapter 13: Bureaucracy
13.1: Bureaucracy
13.1.1: Bureaucracy
Bureaucracy may be defined as a form of government: government by many bureaus, administrators, and petty officials.
Learning Objective
Define bureaucracies and their distinctive features
Key Points
- A bureaucracy is a group of specifically non-elected officials within a government or other institution that implements the rules, laws, ideas, and functions of their institution through “a system of administration marked by officials, red tape, and proliferation”.
- For Weber, bureaucratization was the most efficient and rational way of organizing, a key part of rational-legal authority, and the key process in the ongoing rationalization of Western society.
- Weber also saw bureaucracy as a threat to individual freedoms: increasing rationalization of human life could trap individuals in an “iron cage” of bureaucratic, rule-based, rational control. To counteract this possibility, the system needs entrepreneurs and politicians.
Key Terms
- bureaucratization
-
The formation of, or the conversion of something into, a bureaucracy.
- counteract
-
To act in opposition to; to hinder, defeat, or frustrate, by contrary agency or influence; as, to counteract the effect of medicines; to counteract good advice.
Example
- An example of bureaucracy is what is called a civil service job, which can be in a governmental service agency such as the Department of Labor or the Department of Defense.
Background
A bureaucracy is a group of specifically non-elected officials within a government or other institution that implements the rules, laws, ideas, and functions of their institution through “a system of administration marked by officials, red tape, and proliferation.” In other words, a government administration should carry out the decisions of the legislature or democratically elected representation of a state.
Bureaucracy may also be defined as a form of government: government by many bureaus, administrators, and petty officials. A government is the political direction and control exercised over the actions of its citizens. Democracy, by contrast, is government by the people: supreme power is vested in the people and exercised directly by them or by their elected agents under a free electoral system, not by non-elected bureaucrats.
Weberian bureaucracy
Weberian bureaucracy has its origin in the works of Max Weber (1864-1920), a notable German sociologist, political economist, and administrative scholar who contributed to the study of bureaucracy and administrative discourses and literatures during the late 1800s and early 1900s. Weber belonged to the Scientific School of Thought, which discussed such topics as specialization of job scope, the merit system, uniform principles, structure, and hierarchy.
Weber described many ideal types of public administration and government in his magnum opus Economy and Society (1922). His critical study of the bureaucratization of society became one of the most enduring parts of his work. It was Weber who began the study of bureaucracy and whose works led to the popularization of the term. Many aspects of modern public administration go back to him, and a classic, hierarchically organized civil service of the Continental type is called a Weberian civil service. For Weber, bureaucratization was the most efficient and rational way of organizing, a key part of rational-legal authority, and the key process in the ongoing rationalization of Western society.
Weber’s ideal bureaucracy is characterized by hierarchical organization, delineated lines of authority in a fixed area of activity, action taken on the basis of (and recorded in) written rules, bureaucratic officials with expert training, rules implemented by neutral officials, and career advancement that depends on technical qualifications judged by the organization rather than by individuals. The decisive reason for the advancement of bureaucratic organization has always been its purely technical superiority over any other form of organization.
While recognizing bureaucracy as the most efficient form of organization, and even indispensable for the modern state, Weber also saw it as a threat to individual freedoms. In his view, ongoing bureaucratization could lead to a polar night of icy darkness, in which individuals are trapped in an iron cage of bureaucratic, rule-based, rational control. To counteract this bureaucratic possibility, the system needs entrepreneurs and politicians.
The Cabinet and the Bureaucracy
The Cabinet of the United States is composed of the most senior appointed officers of the executive branch of the federal government of the United States, who are generally the heads of the federal executive departments. All Cabinet members are nominated by the president and then presented to the Senate for confirmation or rejection by a simple majority. If they are approved, they are sworn in and then begin their duties. Aside from the Attorney General, and the Postmaster General when it was a Cabinet office, they all receive the title of Secretary. Members of the Cabinet serve at the pleasure of the President, which means that the President may dismiss them or reappoint them (to other posts) at will.
U.S. Department of Labor headquarters
The Frances Perkins Building located at 200 Constitution Avenue, N.W., in the Capitol Hill neighborhood of Washington, D.C. Built in 1975, the modernist office building serves as headquarters of the United States Department of Labor.
13.1.2: Size of the Federal Bureaucracy
The size of the federal bureaucracy has remained steady despite politicians’ pledges to shrink the role of government.
Learning Objective
Illustrate the factors that affect the size of bureaucracies
Key Points
- Political officials often pledge to shrink the size of federal bureaucracy while at the same time promising to enhance its efficiency. The number of civilian federal employees, at least, has not increased since the 1960s.
- State and local government workers spend, on average, about one-fourth of their time carrying out federal mandates; with 16.2 million such workers, the federal government avoids hiring approximately 4.05 million additional workers to carry out its policies.
- From the 1960s to the 1990s, the number of senior executives and political appointees in federal bureaucracy quintupled.
- The average number of layers between the president and street-level bureaucrats swelled from 17 in 1960 to 32 in 1992.
- To manage the growing federal bureaucracy, Presidents have gradually surrounded themselves with many layers of staff, who were eventually organized into the Executive Office of the President of the United States.
Key Term
- mandate
-
An official or authoritative command; a judicial precept.
Example
- The fact that the Defense Department contracted out for military interrogators and security officers in war zones did not become public knowledge until the Abu Ghraib prison abuse scandal broke in April 2004. The federal government directly supports 5.6 million jobs through contracts and 2.4 million jobs through grants.
Background
Political officials often pledge to shrink the size of the federal bureaucracy while at the same time promising to enhance its efficiency. By one measure, they have succeeded: the number of civilian federal employees has not increased since the 1960s. How, then, can politicians proclaim that the era of big government is over while providing the increase in government services that people expect? They have done so by vastly expanding the number of workers who owe their jobs to federal money. Over 16 million full-time workers now administer federal policy, including 1.9 million federal civilian workers, 1.5 million uniformed military personnel, and 850,000 postal workers.
State and local government workers are also subject to federal mandates, and on average they devote one-fourth of their work to carrying out federal directives. With 16.2 million state and local government workers, this spares the federal government from hiring approximately 4.05 million workers to carry out its policies. The government also contracts with private companies to provide goods and services; the fact that the Defense Department contracted out for military interrogators and security officers in war zones did not become public knowledge until the Abu Ghraib prison abuse scandal broke in April 2004. The federal government directly supports 5.6 million jobs through contracts and 2.4 million jobs through grants.
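The 4.05 million figure follows directly from the two numbers above, assuming the one-fourth share of work applies uniformly across the state and local workforce: 16.2 million workers × 1/4 ≈ 4.05 million full-time-equivalent workers devoted to federal directives, labor the federal government would otherwise have to hire for itself.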
The Thickening of Government
The reliance on mandates and contracts has meant that fewer civil servants interact directly with the public as street-level bureaucrats; instead, federal employees have increasingly become professionals and managers. From the 1960s to the 1990s, even as the size of the civil service stayed constant, the number of senior executives and political appointees quintupled. This proliferation of managers creates a thickening of government: the number of layers between the president and street-level bureaucrats swelled from 17 in 1960 to 32 in 1992. New administrative titles like “assistant,” “associate,” and “deputy” were created to streamline and supervise state and local workers as well as other bureaucrats. As a result, much of the federal bureaucracy now consists of “managers managing managers.”
The Congress and President of the United States delegate specific authority to government agencies to regulate the complex facets of the modern American federal state. The majority of the independent agencies of the United States government are also classified as executive agencies. To manage the growing federal bureaucracy, Presidents have gradually surrounded themselves with many layers of staff, who were eventually organized into the Executive Office of the President of the United States. Within the Executive Office, the President’s innermost layer of aides (and their assistants) are located in the White House Office.
Throughout the 20th century, presidents have changed the size of bureaucracies at the federal level. Starting with the Reagan administration, conservatives have sought to downsize bureaucracies in pursuit of the “small government” tenet of the conservative movement. Small government is government that minimizes its own activities, particularly bureaucratic “red tape.” Red tape is excessive regulation, or rigid conformity to formal rules, that is considered redundant or bureaucratic and that hinders or prevents action or decision-making; the term is usually applied to governments, corporations, and other large organizations. The “cutting of red tape” is a popular electoral and policy promise, and in the United States a number of committees have discussed and debated Red Tape Reduction Acts. Reducing red tape essentially means reducing petty government (and occasionally business) bureaucracy. Such processes are often very slow, because they usually mean that a government employee who was fulfilling a petty function either loses some administrative power (and any indirect benefits it may bestow) or a lower-level office worker loses a job. Though the functions performed by that office worker are at that point deemed unproductive, government job losses are often resisted by unions, so red tape continues to keep the unproductive worker in a job.
13.1.3: The Growth of Bureaucracy
As the Western hemisphere modernized, bureaucratization grew along with it.
Learning Objective
Identify the causes for the growth of bureaucracies over time
Key Points
- As Weber understood, particularly during the industrial revolution of the late 19th century, society was being driven by the passage of rational ideas into culture, which in turn transformed society into an increasingly bureaucratic entity.
- Bureaucracy is a complex means of managing life in social institutions; it includes rules and regulations, patterns, and procedures designed to simplify the functioning of complex organizations.
- Weber did believe bureaucracy was the most rational form of institutional governance, but because Weber viewed rationalization as the driving force of society, he believed bureaucracy would increase until it ruled society. Society, for Weber, would become almost synonymous with bureaucracy.
Key Term
- rational
-
Healthy or balanced intellectually; exhibiting the ability to think with reason.
Background
Weber listed several preconditions for the emergence of bureaucracy: growth in the space and population being administered, growth in the complexity of the administrative tasks being carried out, and the existence of a monetary economy, all of which created the need for a more efficient administrative system. The development of communication and transportation technologies made more efficient administration possible (and popularly demanded), while the democratization and rationalization of culture resulted in demands that the new system treat everybody equally.
The growth of bureaucratization developed out of the rapid industrialization the United States experienced during the 19th century. As Weber understood, particularly during the industrial revolution of the late 19th century, society was being driven by the passage of rational ideas into culture, which in turn transformed society into an increasingly bureaucratic entity.
Black Student Welders Work in a Machine Shop Course Taught at The Chicago Opportunities Industrialization Center
Black student welders work in a machine shop course taught at the Chicago Opportunities Industrialization Center at a former grade school in the heart of the Cabrini-Green Housing Project on Chicago’s Near North Side.
The Growth of Bureaucratization
Bureaucracy is a type of organizational or institutional management that is, as Weber understood it, rooted in legal-rational authority. Bureaucracy is a complex means of managing life in social institutions; it includes rules and regulations, patterns, and procedures designed to simplify the functioning of complex organizations. An example of bureaucracy would be the forms used to pay one’s income taxes: they require specific information and procedures to fill out, yet bound up in those forms are countless rules and laws that dictate what can and cannot be tied to one’s taxes. Thus, bureaucracy simplifies the process of paying one’s taxes by putting the process into a formulaic structure, but simultaneously complicates it by adding rules and regulations that govern the procedure. Weber did believe bureaucracy was the most rational form of institutional governance, but because he viewed rationalization as the driving force of society, he believed bureaucracy would increase until it ruled society. Society, for Weber, would become almost synonymous with bureaucracy.
Governing Bureaucratic Institutions
As society became more populated and industrialized, departments and federal agencies developed to regulate the flow and integration of people in growing cities. For example, one well-known bureaucratic agency that people deal with regularly is the Department of Motor Vehicles, which issues driver’s licenses and vehicle registrations. In some states this function is handled by an actual Department of Motor Vehicles (or a similar agency with a different name), while in other states it is handled by subdivisions of the state’s transportation department; in Hawaii it is done at the county level. Other agencies you may be familiar with include Fish & Game, Forestry, and Transportation.
13.1.4: The Cost of Maintaining the Government
Maintaining the United States government requires a lengthy budgetary process and approval from many congressional committees and offices.
Learning Objective
Identify the institutions and offices responsible for maintaining the federal government
Key Points
- The Office of Management and Budget (OMB) is a cabinet-level office, the largest within the Executive Office of the President of the United States (EOP).
- The OMB ensures that agency reports, rules, testimony and proposed legislation are consistent with the President’s Budget and with Administration Policies.
- CBO computes a current law baseline budget projection that is intended to estimate what federal spending and revenues would be in the absence of new legislation for the current fiscal year, as well as for the coming 10 fiscal years.
- The budget resolution serves as a blueprint for the actual appropriation process and provides Congress with some control over the appropriations process.
- Authorizations for many programs have long lapsed, yet still receive appropriated amounts. Other programs that are authorized receive no funds at all. In addition, policy language, that is, legislative text changing permanent law, is included in appropriation measures.
Key Terms
- jurisdiction
-
the power, right, or authority to interpret and apply the law
- appropriation
-
Public funds set aside for a specific purpose.
The Cost of Maintaining the Government
Background
The Office of Management and Budget (OMB) is a cabinet-level office, the largest within the Executive Office of the President of the United States (EOP). The current OMB Acting Director is Jeffrey Zients.
U.S. Office of Management and Budget Seal
The Office of Management and Budget plays a key role in preparing the president’s budget request to Congress.
The Budget and Accounting Act of 1921, signed into law by President Warren G. Harding, established the Bureau of the Budget, OMB’s predecessor, as a part of the Department of the Treasury. The Bureau was moved to the EOP in 1939 and then reorganized into OMB in 1970 during the Nixon administration.
The first OMB included Roy Ash (Head), Paul O’Neill (Assistant Director), Fred Malek (Deputy Director), Frank Zarb (Associate Director), and about two dozen others. In the 1990s, OMB was reorganized to remove the distinction between management and budgetary staff by combining those dual roles within the Resource Management Offices.
The OMB’s predominant mission is to assist the President in overseeing the preparation of the federal budget and to supervise its administration in Executive Branch agencies. The OMB ensures that agency reports, rules, testimony and proposed legislation are consistent with the President’s Budget and with Administration Policies.
In addition, the OMB oversees and coordinates the Administration’s procurement, financial management, information, and regulatory policies. In each of these areas, the OMB’s role is to help improve administrative management, to develop better performance measures and coordinating mechanisms, and to reduce any unnecessary burdens on the public.
United States’ Budget Process
Each year in March, the Congressional Budget Office (CBO) publishes an analysis of the President’s budget proposals. (The CBO budget report and other publications can be found at the CBO’s website.)
CBO computes a current-law baseline budget projection that estimates what federal spending and revenues would be in the absence of new legislation for the current fiscal year and for the coming 10 fiscal years. The CBO also computes a current-policy baseline, which makes assumptions about, for instance, votes on tax-cut sunset provisions. The current CBO 10-year budget baseline projection grows from $3.7 trillion in 2011 to $5.7 trillion in 2021.
The House and Senate Budget Committees begin consideration of the President’s budget proposals in February and March. Other committees with budgetary responsibilities submit requests and estimates to the Budget Committees during this time. The Budget Committees each submit a budget resolution by April 1. The House and Senate each consider those budget resolutions and are expected to pass them, possibly with amendments, by April 15. Budget resolutions specify funding levels for appropriations committees and subcommittees.
Appropriations Committees, starting with allocations in the budget resolution, put together appropriations bills, which may be considered in the House after May 15. Once appropriations committees pass their bills, the House and Senate consider them; a conference committee is typically required to resolve differences between the House and Senate bills. Once a conference bill has passed both chambers of Congress, it is sent to the President, who may sign it or veto it. If the President signs, the bill becomes law; otherwise, Congress must pass another bill to avoid a shutdown of at least part of the federal government.
In recent years, Congress has not passed all of the appropriations bills before the start of the fiscal year. Congress has then enacted continuing resolutions that provide for the temporary funding of government operations.
Budget Resolution
The next step is the drafting of a budget resolution. The United States House Committee on the Budget and the United States Senate Committee on the Budget are responsible for drafting budget resolutions. Following the traditional calendar, by early April both committees finalize their drafts and submit them to their respective floors for consideration and adoption.
A budget resolution, which is one form of a concurrent resolution, binds Congress, but is not a law, and so does not require the President’s signature. The budget resolution serves as a blueprint for the actual appropriation process and provides Congress with some control over the appropriations process.
In general, an authorizing committee must, through enactment of legislation, authorize funds for federal government programs. Then, through subsequent acts of Congress, the Appropriations Committee of the House appropriates budget authority. In principle, committees with jurisdiction to authorize programs make policy decisions, while the Appropriations Committees decide on funding levels, limited to a program’s authorized funding level, though the amount appropriated may be any amount below that limit.
In practice, the separation between policy-making and funding, and the division between appropriations and authorization activities are imperfect. Authorizations for many programs have long lapsed, yet still receive appropriated amounts. Other programs that are authorized receive no funds at all. In addition, policy language, that is, legislative text changing permanent law, is included in appropriation measures.
13.1.5: Public and Private Bureaucracies
Public and private bureaucracies influence each other in terms of laws and regulations because they are mutually dependent.
Learning Objective
Discuss the interaction between public and private bureaucracies
Key Points
- In the United States during the 1930s, typical company laws (e.g., in Delaware) did not clearly mandate shareholder rights. Berle argued that unaccountable company directors were therefore likely to funnel the fruits of enterprise profits into their own pockets and to manage in their own interests.
- In The New Industrial State, Galbraith argued that economic decisions were planned by a private bureaucracy: a techno-structure of experts who manipulated marketing and public relations channels.
- Private bureaucracies still have to comply with public regulations imposed by the government. In addition, private enterprises continue to influence governmental structures. Therefore, the relationship is reciprocal.
Key Term
- monetary policy
-
The process by which the government, central bank, or monetary authority manages the supply of money or trading in foreign exchange markets.
Background
The Great Depression was a time of significant upheaval in the United States. One of the most original contributions to understanding what had gone wrong came from a Harvard University lawyer named Adolf Berle (1895–1971). Berle, who like John Maynard Keynes had resigned from his diplomatic job at the Paris Peace Conference of 1919, was deeply disillusioned by the Versailles Treaty. In his book with Gardiner C. Means, The Modern Corporation and Private Property (1932), he detailed the evolution of big business in the contemporary economy. Berle argued that the individuals who controlled big firms should be held accountable. Whether directors are in fact held accountable to shareholders depends on the rules found in company law statutes, such as the right to elect and fire the management, requirements for regular general meetings, accounting standards, and so on.
In the United States during the 1930s, the typical company laws (e.g., in Delaware) did not clearly mandate such rights. Berle argued that the unaccountable directors of companies were therefore apt to funnel the fruits of enterprise profits into their own pockets, as well as manage in their own interests. They were able to do this because the majority of shareholders in big public companies were dispersed single individuals with scant means of communication; quite simply, the directors divided and conquered. Berle served in President Franklin Delano Roosevelt’s administration through the Depression and was a key member of the so-called “Brain Trust” that developed many of the New Deal policies. In 1967, Berle and Means issued a revised edition of their work, in which the preface added a new dimension: at stake was not only the separation of company directors from the shareholders who owned the firms. They posed the question of what the corporate structure was really meant to achieve.
Adolf Augustus Berle
Adolf Berle, in The Modern Corporation and Private Property, argued that the separation of control of companies from the investors who were meant to own them endangered the American economy and led to an unequal distribution of wealth.
The Interaction between Public and Private Bureaucracies
After World War II, John Kenneth Galbraith (1908–2006) became one of the standard bearers for pro-active government and liberal-democratic politics. In The Affluent Society (1958), Galbraith argued that voters who reach a certain level of material wealth begin to vote against the common good. He argued that the “conventional wisdom” of the conservative consensus was not enough to solve the problems of social inequality. In an age of big business, he argued, it is unrealistic to think of markets of the classical kind. Big firms set prices and use advertising to create artificial demand for their own products, which distorts people’s real preferences. Consumer preferences actually come to reflect those of corporations (a “dependence effect”), and the economy as a whole is geared toward irrational goals.
In The New Industrial State, Galbraith argued that economic decisions are planned by a private bureaucracy, a techno-structure of experts who manipulate marketing and public relations channels. This hierarchy is self-serving, profits are no longer the prime motivator, and even managers are not in control. Since they are the new planners, corporations detest risk; they require a steady economy and stable markets. They recruit governments to serve their interests with fiscal and monetary policy, for example by adhering to monetarist policies that enrich moneylenders through increases in interest rates. While the goals of an affluent society and a complicit government serve the irrational techno-structure, public space is simultaneously impoverished. Galbraith paints the picture of stepping from penthouse villas onto unpaved streets, from landscaped gardens to unkempt public parks. In Economics and the Public Purpose (1973), Galbraith advocates a “new socialism” as the solution: nationalizing military production and public services such as health care, and introducing disciplined salary and price controls to reduce inequality.
Today, private corporate entities have formed their own bureaucracies, with their own regulations and practices. Their organizational structure can be compared to that of a public bureaucracy. However, private bureaucracies still have to comply with public regulations imposed by the government, and private enterprises continue to influence governmental structures. The relationship is therefore reciprocal.
13.1.6: Models of Bureaucracy
Bureaucracies can be described by different models, depending upon their governmental organizational structure.
Learning Objective
Compare and contrast the different types of authority according to Max Weber and how these relate to bureaucracy
Key Points
- By rationalization, Weber meant both the individual cost-benefit calculation and the wider bureaucratic structure of organizations; rationalization was, in general, the opposite of understanding reality through mystery and magic (disenchantment).
- Weber viewed bureaucratization as the most efficient and rational way of organizing and as the key part of rational-legal authority.
- The Weberian characteristics of bureaucracy are clear, defined roles and responsibilities, a hierarchical structure and respect for merit.
- The acquisition model of bureaucracy describes bureaucratic departments competing with one another for roles, resources, and power. Monopolistic bureaucracies do not provide room for competition within each bureaucratic department.
Key Term
- acquisition
-
The act or process of acquiring.
Example
- The Department of Defense has historically competed with other federal departments for funding. This is what is called the acquisition model of bureaucracy.
Background
Many scholars have described rationalization and the question of individual freedom as the main theme of Weber’s work. By rationalization, Weber meant both the individual cost-benefit calculation and the wider bureaucratic structure of organizations, which generally stood in opposition to understanding reality through mystery and magic (disenchantment). For Weber, the fate of our time is characterized by rationalization, intellectualization and, above all, the “disenchantment of the world.”
What Weber depicted was not only the secularization of Western culture, but also the development of modern societies from the viewpoint of rationalization. New structures of society were marked by two intermeshing systems that had taken shape around the organizational cores of capitalist enterprise and bureaucratic state apparatus. Weber understood this process as the institutionalization of purposive-rational economic and administrative action. To the degree that everyday life was affected by cultural and societal rationalization, traditional forms of life differentiated primarily according to one’s trade were dissolved.
Models of Bureaucracy
Many aspects of modern public administration go back to Weber. Weberian civil service is hierarchically organized and viewed as the most efficient and rational way of organizing. Bureaucratization for Weber was the key part of the rational-legal authority. He saw it as the key process in the ongoing rationalization of Western society.
Weberian characteristics of bureaucracy are clear, defined roles and responsibilities, a hierarchical structure, and respect for merit. The acquisition model of bureaucracy, meanwhile, describes bureaucratic departments competing with one another for roles, resources, and power. At the same time, a monopolistic bureaucracy does not provide room for competition within each bureaucratic department.
Weberian Bureaucracy
Weber described many ideal types of public administration and government in his masterpiece Economy and Society (1922). His critical study of the bureaucratization of society became one of the most enduring parts of his work. It was Weber who began the study of bureaucracy and whose works led to the popularization of the term. Many aspects of modern public administration go back to him, and a classic, hierarchically organized civil service of the Continental type is called a “Weberian civil service.” As noted above, Weber saw bureaucratization as the most efficient and rational way of organizing, as the key part of rational-legal authority, and as the key process in the ongoing rationalization of Western society.
Weber listed several preconditions for the emergence of bureaucracy: the growth in the space and population being administered, the growth in the complexity of the administrative tasks being carried out, and the existence of a monetary economy, all of which resulted in a need for a more efficient administrative system. The development of communication and transportation technologies made more efficient administration possible (and popularly requested), and the democratization and rationalization of culture resulted in demands that the new system treat everybody equally.
Weber’s ideal bureaucracy is characterized by hierarchical organization, by delineated lines of authority in a fixed area of activity, by action taken (and recorded) on the basis of written rules, by bureaucratic officials needing expert training, by rules being implemented neutrally, and by career advancement depending on technical qualifications judged by organizations, not by individuals.
United States Defense Attaché System Seal
The Department of Defense is allocated the highest level of budgetary resources among all Federal agencies, amounting to more than one half of the annual Federal discretionary budget.
13.2: The Organization of Bureaucracy
13.2.1: Cabinet Departments
The cabinet is the collection of top-ranking advisors in the executive branch of government, particularly executive department secretaries.
Learning Objective
Describe the constitutional origin of the Cabinet and the shape of its growth since Washington’s presidency
Key Points
- Cabinet members are appointed by the president and serve as the president’s primary advisors.
- Cabinet members carry out numerous domestic and foreign policy responsibilities.
- George Washington established the first cabinet by appointing four departmental leaders as his main advisors.
Key Terms
- executive department
-
An executive organ that serves at the disposal of the president and normally acts as an advisory body to the presidency.
- Cabinet
-
A governmental group composed of the most senior appointed officers of the executive branch of the federal government of the United States who are generally the heads of the federal executive departments.
- line of succession
-
An ordered sequence of named people who would succeed to a particular office upon the death, resignation, or removal of the current occupant.
Example
- The attorney general is an example of a cabinet member, and oversees the executive Department of Justice.
The Cabinet of the United States consists of the highest-ranking appointed officers in the executive branch of the federal government: the secretaries of each of the 15 executive departments. These Cabinet members preside over bureaucratic operations and serve as advisors to the president. All secretaries are directly accountable to the president, and they do not have the power to enforce any policy recommendations outside of their department.
The president nominates secretaries to their offices, and the Senate votes to confirm them. Secretaries are not subject to elections or term limits, but most turn over when a new political party wins the presidency.
Structure of the Cabinet Departments
Each of the Cabinet departments is organized with a similar hierarchical structure. At the top of each department is the secretary (in the Department of Justice, the highest office is called the “attorney general,” but the role is parallel to that of the secretary of state, defense, etc.).
Beneath the secretary, each executive department has a deputy secretary. The deputy secretary advises and assists the secretary, and fills the Office of Secretary if it becomes vacant. The deputy secretary is nominated by the president, just as the secretary is.
Below the level of deputy secretary, departmental organization varies. Most departments have several under secretaries, who preside over specific branches of the organization rather than being accountable for the functioning of the entire department, as the secretary and deputy secretary are. Under secretaries are appointed by the president, but range in prestige depending on the size of the department they are employed in and the breadth of affairs they oversee.
Finally, each department has its own staff. Departmental staffs are not appointed by the president, but instead are hired by internal supervisors (such as under secretaries). Staff qualifications and duties range widely by department. For example, National Park Service employees are considered staff of the Department of the Interior, but some may work on policy in Washington, while others tend to conservation in Yellowstone. Likewise, military staff includes soldiers on active duty who are not administrative employees but are nonetheless under the purview of the Department of Defense.
Taken as a group, the executive departments employ over 4 million people and have an operating budget of over $2.3 trillion.
History of the Cabinet
The first president of the United States, George Washington, established the tradition of having a cabinet of advisors. The U.S. Constitution specifically calls for the creation of executive departments, but it only addresses the leaders of executive departments to specify that as unelected officials they must answer to the president and do not have the power to enforce their recommendations. George Washington thus began the practice of having a formal cabinet of advisors when he appointed Secretary of State Thomas Jefferson, Secretary of the Treasury Alexander Hamilton, Secretary of War Henry Knox, and Attorney General Edmund Randolph.
The three oldest executive departments are the Department of State, the Department of War, and the Treasury, all of which were established in 1789. The Department of War has since been subsumed by the Department of Defense, and many other executive departments have been formed.
The Line of Succession
The secretaries are formally in the line of presidential succession, after the vice president, the speaker of the House, and the president pro tempore of the Senate. In other words, if the president, vice president, speaker, and president pro tempore were all incapacitated by death, resignation, or impeachment, the Cabinet members would ascend to the Office of President in a predetermined order. The order of the departments, and the roles of the secretaries of each department, are as follows (a brief illustrative sketch of the complete ordering appears after the list):
- State: The Secretary of State oversees international relations.
- Treasury: The Secretary of the Treasury is concerned with financial and monetary issues.
- Defense: The Secretary of Defense supervises national defense and the armed forces.
- Justice: The Attorney General is responsible for law enforcement.
- Interior: The Secretary of the Interior oversees federal land and natural resource use.
- Agriculture: The Secretary of Agriculture advises policy on food, farming, and agricultural trade.
- Commerce: The Secretary of Commerce is concerned with economic growth.
- Labor: The Secretary of Labor is responsible for occupational safety and workplace regulation.
- Health and Human Services: The Secretary of Health and Human Services is charged with ensuring the health and well being of Americans.
- Housing and Urban Development: The Secretary of Housing and Urban Development administers affordable housing and city planning.
- Transportation: The Secretary of Transportation oversees transportation infrastructure and policies.
- Energy: The Secretary of Energy is responsible for research into energy sources and the handling of nuclear material.
- Education: The Secretary of Education oversees public schools.
- Veterans Affairs: The Secretary of Veterans Affairs coordinates programs and benefits for veterans.
- Homeland Security: The Secretary of Homeland Security is responsible for domestic security measures.
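Because the succession described above is simply a predetermined ordering of offices, it can be represented as an ordered list. The following minimal Python sketch is purely illustrative, built from the offices named in the text; the variable names are invented for this example.

```python
# Illustrative sketch of the presidential line of succession described above,
# modeled as an ordered list. The Cabinet entries follow the department
# order given in the list, which tracks the order in which the departments
# were created.
line_of_succession = [
    "Vice President",
    "Speaker of the House",
    "President pro tempore of the Senate",
    "Secretary of State",
    "Secretary of the Treasury",
    "Secretary of Defense",
    "Attorney General",
    "Secretary of the Interior",
    "Secretary of Agriculture",
    "Secretary of Commerce",
    "Secretary of Labor",
    "Secretary of Health and Human Services",
    "Secretary of Housing and Urban Development",
    "Secretary of Transportation",
    "Secretary of Energy",
    "Secretary of Education",
    "Secretary of Veterans Affairs",
    "Secretary of Homeland Security",
]

# The first listed officer who is eligible and able to serve would succeed
# to the presidency.
for rank, officer in enumerate(line_of_succession, start=1):
    print(f"{rank:2d}. {officer}")
```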
In addition to the secretaries of the established executive departments, there are some cabinet-level officers who are the heads of independent executive agencies. These agencies do not answer to the president directly and, therefore, are not executive departments strictly speaking. Still, their heads are considered high-ranking advisors to the president. These cabinet-level officers include the vice president, the chief of staff, the director of the Office of Management and Budget, the administrator of the Environmental Protection Agency, the trade representative, the ambassador to the United Nations, the chairman of the Council of Economic Advisors, and the administrator of the Small Business Administration.
Department of Justice Seal
The attorney general is the head of the Department of Justice, and is a prominent cabinet member.
13.2.2: Independent Agencies
Independent executive agencies operate as regulatory and service agencies to oversee federal government functions.
Learning Objective
Differentiate between executive agencies and executive departments
Key Points
- Executive agencies operate as services and/or regulatory agencies and are distinct because they exist independently from other executive departments.
- In independent executive agencies, the president can remove leaders only for cause, according to statutory provisions.
- Most executive agencies are required to have bipartisan membership, and presidents cannot simply remove and replace their leaders.
Key Terms
- executive department
-
An executive organ that serves at the disposal of the president and normally acts as an advisory body to the presidency.
- executive agency
-
A permanent or semi-permanent organization in the machinery of government that is responsible for the oversight and administration of specific functions.
- enabling act
-
A piece of legislation by which a legislative body grants an entity which depends on it for authorization or legitimacy the power to take certain actions.
Example
- The Federal Communications Commission (FCC) is an example of an executive agency; it acts as an outpost of the executive government to regulate communications technology and media in the U.S.
In the United States federal government, Congress and the President have the ability to delegate authority to independent executive agencies, sometimes called federal agencies or administrative agencies. These agencies are distinct from executive departments because they have some degree of independence from the President. In executive departments, department heads are nominated by the President and confirmed by the Senate, and can be removed from their posts for political reasons. Department heads, who comprise the Cabinet, therefore often turn over when a new president is elected. For example, the Secretary of State is a high-status position usually filled by a high-ranking diplomat in the leading political party. Unlike department heads, the leaders of independent agencies can be removed from office only for cause, such as corruption, under statutory provisions. Even though the president appoints them, agency leadership is nonpartisan, or independent from presidential politics and election turnover.
The leaders of agencies often participate as members of commissions, boards, or councils with internal structures resembling tripartite government. That is, a single agency may “legislate” by producing regulations; “adjudicate” by resolving disputes between parties; and “enforce” by penalizing regulation violations. To illustrate this point, consider one independent agency, the Federal Communications Commission (FCC). The FCC oversees media in the United States. One well-known function of the FCC is to regulate decency on television. To carry out this function, the FCC sets regulations defining what television programming is decent and what is indecent; if a station is accused of violating these regulations, the complaint is brought to the FCC; if the FCC finds that the programming violated the decency regulations, it may fine the station.
Other independent executive agencies include the Central Intelligence Agency (CIA), the National Aeronautics and Space Administration (NASA), and the Environmental Protection Agency (EPA). The CIA gathers intelligence and provides national security assessments to policymakers in the United States; it acts as the primary human intelligence provider for the federal government. NASA is the government agency responsible for the civilian space program as well as aeronautics and aerospace research. The EPA was created to protect human health and the environment by writing and enforcing regulations based on laws passed by Congress. EPA enforcement powers include fines, sanctions, and other measures.
The U.S. Constitution does not explicitly reference federal agencies. Instead, these agencies are generally justified by acts of Congress designed to manage delineated government functions, such as the maintenance of infrastructure and the regulation of commerce. Congress passes statutes called enabling acts that define the scope of an agency’s authority. Once created, agencies are considered part of the executive branch of government and are subject to some political oversight, but they are expected to remain nonpartisan.
Federal Communications Commission
The Federal Communications Commission (FCC) is one of many independent executive agencies.
13.2.3: Regulatory Commissions
Independent regulatory agencies create and enforce regulations to protect the public at large.
Learning Objective
Use the work of the FDA as an example to describe the activity and mission of regulatory agencies more broadly
Key Points
- Independent regulatory agencies are situated in the executive branch of the government but are not directly under the control of the President.
- Regulatory agencies conduct investigations and audits to ensure that industries and organizations do not pose threats to public safety or well-being.
- Regulatory agencies are intended to be transparent, such that they are accountable to public oversight and legal review.
Key Terms
- regulatory agency
-
A public authority or government agency responsible for exercising autonomous authority over some area of human activity in a regulatory or supervisory capacity.
- Food and Drug Administration
-
An agency of the United States Department of Health and Human Services, one of the United States federal executive departments, responsible for protecting and promoting public health.
Example
- The Food and Drug Administration (FDA) is an independent regulatory agency intended to promote public health by overseeing food and drug safety.
A regulatory agency is a body in the U.S. government with the authority to supervise and regulate some area of human activity. An independent regulatory agency is separate from the other branches of the federal government. These agencies are within the purview of the executive branch of government, but are internally regulated rather than subject to the direct control of the President.
Regulatory agencies exist to supervise the administrative functions of organizations for the benefit of the public at large. To carry out this function, regulatory agencies are composed of experts in a specific policy area of administrative law, such as tax or health codes. Agencies may carry out investigations or audits to determine if organizations are adhering to federal regulations.
To better understand how independent regulatory agencies function, let us consider the U.S. Food and Drug Administration (FDA). The FDA’s mission is to promote public health by regulating the production, distribution, and consumption of food and drugs. When a pharmaceutical company produces a new drug, the manufacturers must submit it to the FDA for approval. The FDA employs experts in pharmaceuticals and drug safety, who evaluate the potential benefits and consequences of the drug. Following reports on the safety of the drug, the FDA determines whether it can be distributed, to whom it can be distributed, and under what conditions it can be safely consumed. The FDA thus uses internal expertise to regulate the pharmaceutical industry.
Regulatory agencies are authorized to produce and enforce regulations by Congress, and are subject to Congressional and legal review as they carry out their functions. Congress may determine that regulatory agencies are obsolete, for example, and may therefore discontinue funding them. Similarly, Congress may choose to expand the authority of a regulatory agency in response to a perceived threat to public safety. Additionally, regulatory agencies are designed to be transparent, such that their decisions and activities are able to be evaluated by the public and by legal review boards.
Food and Drug Administration Regulations
The FDA sets regulations governing which drugs can be distributed over the counter and which require a prescription based on an expert evaluation of the drug’s effects.
13.2.4: Government Corporations
Government corporations are revenue-generating enterprises that are legally distinct from, but operated by, the federal government.
Learning Objective
Differentiate between a government-owned corporation, a government-sponsored enterprise, and organizations chartered by the government that provide public services
Key Points
- In many cases, the government owns a controlling share of a corporation’s stock but does not directly operate the corporation.
- Government-sponsored enterprises are financial services corporations that the government backs in order to provide low cost loans for economic development.
- Government-acquired corporations are those companies that come under government control as a result of unpaid debts or unfulfilled contracts, which are usually returned to private sector control after a time.
Key Terms
- government corporation
-
A legal entity created by a government to undertake commercial activities on behalf of an owner government.
- government-sponsored enterprise
-
A group of financial services corporations created by Congress that are structured and regulated by the US government to enhance the availability and reduce the cost of credit to targeted borrowing sectors.
- government-owned corporation
-
A legal entity created by a government to undertake commercial activities on behalf of an owner government; their legal status varies from being a part of government to being a private enterprise in which the government holds a majority of stock.
Example
- Fannie Mae and Freddie Mac are examples of government-sponsored enterprises that provide loans for mortgages and real estate investment.
A government-owned corporation, also known as a state-owned company, state enterprise, publicly owned corporation, or commercial government agency, is a legal entity created by a government to undertake commercial activities on behalf of the government. In some cases, government-owned corporations are considered part of the government, and are directly controlled by it. In other instances, government-owned corporations are similar to private enterprises except that the government is the majority stockholder. Government-owned agencies sometimes have public policy functions, but unlike other executive agencies, are primarily intended to bring in revenue.
In the United States, there is a specific subset of government-owned corporations known as government-sponsored enterprises (GSEs). GSEs are financial services corporations created by Congress to increase the availability of low cost credit to particular borrowing sectors. The first GSE in the United States was the Farm Credit System in 1916, which made loans available for agricultural expansion and development. Currently, the largest segment of GSEs operates in the mortgage borrowing segment. Fannie Mae, Freddie Mac, and the twelve Federal Home Loan Banks operate as independent corporations and provide loans for mortgages and real estate development. However, the government possesses sufficient stock to claim 79.9% ownership of the corporations, should it choose to do so.
In addition to the financial sector GSEs, the U.S. government has chartered corporations that are legally distinct from the government (unlike federal agencies) but that provide public services. These chartered corporations sometimes receive money from the federal government, but are largely responsible for generating their own revenue. Corporations in this category include the Corporation for Public Broadcasting, the National Fish and Wildlife Foundation, The National Park Foundation, and many others.
Lastly, the government sometimes controls government-acquired corporations: corporations that were not chartered or created by the government, but which it comes to possess and operate. These corporations are usually controlled by the government only temporarily, often as the result of government seizure due to unpaid debts. For example, a delinquent taxpayer’s property may be repossessed by the government. Government-acquired corporations are generally sold at auction or returned to the original controller once debts are repaid.
13.3: Functions of Bureaucracy
13.3.1: Promoting Public Welfare and Income Redistribution
Social welfare programs seek to provide basic social protections for all Americans.
Learning Objective
Identify key legislative milestones designed to promote public welfare
Key Points
- President Roosevelt implemented public welfare programs, such as Social Security, to mitigate the devastating effects of the Great Depression.
- President Johnson continued the effort to promote public welfare in the 1960s by implementing programs such as Medicaid and Medicare.
- Contemporary politicians commit themselves to promoting the public welfare through the passage of laws such as the Affordable Care Act.
Key Terms
- Great Society
-
A set of domestic programs in the United States announced by President Lyndon B. Johnson at Ohio University and subsequently promoted by him and fellow Democrats in Congress in the 1960s. Two main goals of the Great Society social reforms were the elimination of poverty and racial injustice. New major spending programs that addressed education, medical care, urban problems, and transportation were launched during this period.
- New Deal
-
The New Deal was a series of economic programs enacted in the United States between 1933 and 1936. They involved presidential executive orders or laws passed by Congress during the first term of President Franklin D. Roosevelt. The programs were in response to the Great Depression, and focused on what historians call the “3 Rs”: Relief, Recovery, and Reform.
- Affordable Care Act
-
A law promoted by President Obama and passed by Congress in 2010 to improve access to health care for Americans.
The American government is charged with keeping Americans safe and promoting their wellbeing. Americans vote for candidates whom they believe have their best interests in mind; American political candidates (and the bureaucracy they marshal) seek to implement policies that will support the welfare of the American public. This is achieved primarily through social service programming.
The United States has a long political history of seeking to implement policy to promote public welfare. One of the most well-known initiatives to improve public welfare in times of need was President Franklin D. Roosevelt’s response to the Great Depression. Following the stock market crash of 1929, President Roosevelt invested unprecedented governmental funds into the expansion of the executive bureaucracy in order to employ Americans and mitigate the extreme financial decline of the era. President Roosevelt’s program was called the New Deal and is partially credited with lifting America out of the Great Depression. Under the auspices of the New Deal, President Roosevelt implemented programs that have lasted to the present day, such as Social Security. Thirty years later, President Lyndon B. Johnson launched his Great Society initiative. President Johnson’s programs were not a response to economic decline; rather, they sought to improve the welfare of the American public, in particular by advancing racial and economic equality. He did so through the establishment of programs such as Medicare and Medicaid, federal programs that exist to the present day and ensure certain levels of health care coverage for America’s poor and elderly. The Great Society initiative also established cultural and educational programs such as the National Endowment for the Arts and generally deployed the executive bureaucracy to improve welfare programs for the American public at large.
More Security for the American Family
This image from the 1930s depicts a poster promoting the new Social Security program. Social Security exists to this day as a federal program to promote public welfare.
Current American politicians also attempt to ensure that programs exist to promote public welfare. Given the downturn in the American economy since 2008, many public welfare programs have been cut due to a lack of public resources. However, the federal government has, in some areas, reorganized funding to promote programs for public wellbeing. The effort to improve the wellbeing of the public writ large is exemplified by President Obama’s 2010 law to increase public access to health insurance. This law is called the Affordable Care Act, but is more commonly known as Obamacare. Liberals and conservatives are divided on the merits of the law, but regardless of one’s political assessment, it speaks to the government’s attempts to improve the wellbeing of the public.
13.3.2: Providing National Security
National security is the protection of the state through a variety of means that include military might, economic power, and diplomacy.
Learning Objective
Name the government departments and agencies responsible for national defense
Key Points
- Specific measures taken to ensure national security include using diplomacy to rally allies, using economic power to facilitate cooperation, maintaining effective armed forces, and using intelligence and counterintelligence services to detect and defeat internal and external threats.
- Within the United States, there are a variety of governmental departments and agencies responsible for developing policies to ensure national security.
- These organizations include the Department of Defense, the Department of Homeland Security, the Central Intelligence Agency, and the White House National Security Council.
Key Terms
- counterintelligence
-
counterespionage
- infrastructure
-
The basic facilities, services and installations needed for the functioning of a community or society
- espionage
-
The act or process of learning secret information through clandestine means.
Example
- These organizations include the Department of Defense, the Department of Homeland Security, the Central Intelligence Agency, and the White House National Security Council.
Providing National Security
National security, a concept which developed mainly in the United States after World War II, is the protection of the state and its citizens through a variety of means, including military might, economic power, diplomacy, and power projection.
Specific measures taken to ensure national security include:
- using diplomacy to rally allies and isolate threats;
- marshaling economic power to facilitate or compel cooperation;
- maintaining effective armed forces;
- implementing civil defense and emergency preparedness measures (including anti-terrorism legislation);
- ensuring the resilience and redundancy of critical infrastructure;
- using intelligence services to detect and defeat or avoid threats and espionage, and to protect classified information;
- using counterintelligence services or secret police to protect the nation from internal threats.
There are a variety of governmental departments and agencies within the United States that are responsible for developing policies to ensure national security. The Department of Defense is responsible for coordinating and supervising all agencies and functions of the government concerned directly with the U.S. Armed Forces. The Department—headed by the Secretary of Defense—has three subordinate military departments: the Department of the Army, the Department of the Navy, and the Department of the Air Force. The Department of Homeland Security, established after the September 11, 2001 attacks, is responsible for working within the civilian sphere to protect the country from and respond to terrorist attacks, man-made accidents, and natural disasters.
The Central Intelligence Agency is an independent civilian intelligence agency of the United States government. It is responsible for providing national security intelligence assessments, produced by civilian (non-military) intelligence officers, to senior U.S. policymakers. The White House National Security Council is the principal forum used by the President for considering national security and foreign policy matters with his senior national security advisers and Cabinet officials.
The Central Intelligence Agency
The Central Intelligence Agency is responsible for providing national security intelligence assessments.
13.3.3: Maintaining a Strong Economy
Within the United States, there are numerous government departments and agencies responsible for maintaining a strong economy.
Learning Objective
Differentiate between the various departments and agencies responsible for the health of the economy
Key Points
- The Department of Commerce is the Cabinet department of the U.S. government concerned with promoting economic growth.
- The Federal Reserve is the central banking system of the United States, conducting the nation’s monetary policy, supervising and regulating banking institutions, maintaining the stability of the financial system, and providing financial services to U.S. and global institutions.
- The Federal Trade Commission promotes consumer protection and the elimination and prevention of anti-competitive business practices, such as coercive monopoly.
Key Terms
- monopoly
-
Exclusive control over the trade or production of a commodity or service.
- currency
-
Money or other items used to facilitate transactions.
Within the United States, there are numerous government departments and agencies responsible for maintaining a strong economy.
The Department of Commerce is the Cabinet department of the U.S. government concerned with promoting economic growth. Among its tasks are gathering economic and demographic data for business and government decision-making, issuing patents and trademarks, and helping to set industrial standards. Organizations within the Department of Commerce include the Census Bureau, the Bureau of Economic Analysis, and the International Trade Administration.
The Department of the Treasury is an executive department and the treasury of the U.S. government. It prints and mints all paper currency and coins in circulation through the Bureau of Engraving and Printing and the United States Mint. The Department also collects all federal taxes through the Internal Revenue Service and manages U.S. government debt instruments. The Federal Reserve is the central banking system of the United States, which conducts the nation’s monetary policy, supervises and regulates banking institutions, maintains the stability of the financial system, and provides financial services to depository institutions, the U.S. government, and foreign official institutions.
The Federal Reserve
The seal of the Federal Reserve.
The Office of the United States Trade Representative is the government agency responsible for developing and recommending U.S. trade policy to the President, conducting trade negotiations at bilateral and multilateral levels, and coordinating trade policy within the government through the interagency Trade Policy Staff Committee (TPSC) and Trade Policy Review Group (TPRG). The Federal Trade Commission promotes consumer protection and the elimination and prevention of anti-competitive business practices, such as coercive monopoly. The Small Business Administration provides support to entrepreneurs and small businesses by providing loans, contracts, and counseling.
13.3.4: Making Policy
The actual development and implementation of policies are under the purview of different bureaucratic institutions.
Learning Objective
Differentiate between cabinet departments, independent executive agencies, government corporation, and regulatory agencies in making policy
Key Points
- Fifteen agencies are designated by law as cabinet departments, which are major administrative units responsible for specified areas of government operations.
- The remaining government organizations within the executive branch outside of the presidency are independent executive agencies, such as NASA, the EPA, and the SSA.
- Some agencies, such as the U.S. Postal Service and the national rail passenger system, Amtrak, are government corporations, which charge fees for services too far-reaching or too unprofitable for private corporations to handle.
- Another type of bureaucratic institution is a regulatory commission, an agency charged with writing rules and arbitrating disputes in a specific part of the economy.
Key Terms
- regulatory
-
Of or pertaining to regulation.
- bureaucratic
-
Of or pertaining to bureaucracy or the actions of bureaucrats.
Making Policy
The executive and legislative branches of the United States pass and enforce laws. However, the actual development and implementation of policies are under the purview of different bureaucratic institutions, mainly comprising cabinet departments, independent executive agencies, government corporations, and regulatory agencies.
Fifteen agencies are designated by law as cabinet departments, which are major administrative units responsible for specified areas of government operations. Examples of cabinet departments include the Departments of Defense, State, and Justice. Each department controls a detailed budget appropriated by Congress and has a designated staff. Each is headed by a department secretary appointed by the President and confirmed by the Senate. Many departments subsume distinct offices directed by an assistant secretary. For instance, the Interior Department includes the National Park Service, the Bureau of Indian Affairs, and the U.S. Geological Survey.
The remaining government organizations within the executive branch outside of the presidency are independent executive agencies. The best known include the National Aeronautics and Space Administration (NASA), the Environmental Protection Agency (EPA), and the Social Security Administration (SSA). Apart from a smaller jurisdiction, such agencies resemble cabinet departments. Their heads are appointed by the President, confirmed by the Senate, and report directly to the President.
NASA
NASA is an example of an independent executive agency.
Some agencies, such as the U.S. Postal Service and Amtrak (the national rail passenger system) are government corporations. They charge fees for services too far-reaching or too unprofitable for private corporations to handle. Ideally, they bring in enough funds to be self-sustaining. To help them make ends meet, Congress may give government corporations a legal monopoly over given services, provide subsidies, or both. Government corporations are more autonomous in policymaking than most agencies. For instance, the Postal Rate Commission sets rates for postage on the basis of revenues and expenditures.
Another type of bureaucratic institution is a regulatory commission, an agency charged with writing rules and arbitrating disputes in a specific part of the economy. Chairs and members of regulatory commissions are named by the president and confirmed by the Senate to terms of fixed length from which they cannot be summarily dismissed. Probably the most prominent regulatory commission currently in the news is the Federal Reserve Board.
13.3.5: Making Agencies Accountable
The institution responsible for ensuring that government agencies are held accountable is the Government Accountability Office (GAO).
Learning Objective
Describe the role the GAO plays in holding government agencies accountable
Key Points
- The GAO is the audit, evaluation, and investigative arm of the United States Congress and conducts financial and performance audits.
- Over the years, GAO has been referred to as “The Congressional Watchdog” and “The Taxpayers’ Best Friend” for its frequent audits and investigative reports that have uncovered waste and inefficiency in government.
- The GAO also establishes standards for audits of government organizations, programs, activities, and functions, and of government assistance received by contractors, nonprofit organizations, and other nongovernmental organizations.
- The GAO is headed by the Comptroller General of the United States, a professional and non-partisan position in the U.S. government.
Key Term
- audit
-
An independent review and examination of records and activities to assess the adequacy of system controls, to ensure compliance with established policies and operational procedures, and to recommend necessary changes in controls, policies, or procedures
The Government Accountability Office (GAO) is the audit, evaluation, and investigative arm of the United States Congress. It is responsible for ensuring that government agencies are held accountable. The GAO’s auditors conduct not only financial audits, but also engage in a wide assortment of performance audits.
The Government Accountability Office
The GAO is the audit, evaluation, and investigative arm of the United States Congress.
Over the years, GAO has been referred to as “The Congressional Watchdog” and “The Taxpayers’ Best Friend” for its frequent audits and investigative reports that have uncovered waste and inefficiency in government. The news, media, and television often draw attention to GAO’s work by covering stories on the findings, conclusions, and recommendations in GAO reports. In addition, members of Congress frequently cite GAO’s work in statements to the press, congressional hearings, and floor debates on proposed legislation.
The GAO also establishes standards for audits of government organizations, programs, activities, and functions, and of government assistance received by contractors, nonprofit organizations, and other nongovernmental organizations. These standards, often referred to as Generally Accepted Government Auditing Standards (GAGAS), must be followed by auditors and audit organizations when required by law, regulation, agreement, contract, or policy. These standards pertain to auditors’ professional qualifications, the quality of audit effort, and the characteristics of professional and meaningful audit reports.
The GAO is headed by the Comptroller General of the United States, a professional and non-partisan position in the U.S. government. The Comptroller General is appointed by the President, with the advice and consent of the Senate, for a 15-year, non-renewable term. Since 1921, there have been only seven Comptrollers General, and no formal attempt has ever been made to remove a Comptroller General.
13.4: Bureaucratic Reform
13.4.1: Bureaucratic Reform
Bureaucratic reform in the U.S. was a major issue in the late 19th century and the early 20th century.
Learning Objective
Describe the key moments in the history of bureaucratic reform, including the Tenure of Office Acts, the Pendleton Act, the Hatch Acts, and the Civil Service Reform Acts.
Key Points
- The five important civil service reforms were the two Tenure of Office Acts of 1820 and 1867, the Pendleton Act of 1883, the Hatch Acts (1939 and 1940) and the CSRA of 1978.
- The Civil Service Reform Act (the Pendleton Act) is an 1883 federal law that established the United States Civil Service Commission, placing most federal employees on the merit system and marking the end of the so-called “spoils system”.
- The CSRA was an attempt to reconcile the need for improved performance within bureaucratic organizations with the need for protection of employees.
Key Terms
- patronage
-
granting favors, giving contracts, or making appointments to office in return for political support
- spoils system
-
The systematic replacement of office holders every time the government changed party hands.
- merit
-
Something deserving good recognition.
- merit system
-
the process of promoting and hiring government employees based on their ability to perform a job, rather than on their political connections
Bureaucratic reform in the United States was a major issue in the late nineteenth century at the national level and in the early twentieth century at the state level. Proponents denounced the distribution of office by the winners of elections to their supporters as corrupt and inefficient. They demanded nonpartisan scientific methods and credentials be used to select civil servants. The five important civil service reforms were the two Tenure of Office Acts of 1820 and 1867, the Pendleton Act of 1883, the Hatch Acts (1939 and 1940), and the Civil Service Reform Act (CSRA) of 1978.
In 1801, President Thomas Jefferson, alarmed that Federalists dominated the civil service and the army, identified the party affiliation of office holders, and systematically appointed Republicans. President Andrew Jackson in 1829 began the systematic rotation of office holders after four years, replacing them with his own partisans. By the 1830s, the “spoils system” referred to the systematic replacement of office holders every time the government changed party hands.
The Civil Service Reform Act (the Pendleton Act) is an 1883 federal law that established the United States Civil Service Commission. It eventually placed most federal employees on the merit system and marked the end of the so-called “spoils system.” Drafted during the Chester A. Arthur administration, the Pendleton Act served as a response to President James Garfield’s assassination by a disappointed office seeker.
Chester A. Arthur
The Pendleton Act was passed under Chester A. Arthur’s administration.
The new law prohibited mandatory campaign contributions, or “assessments,” which amounted to 50-75% of party financing during the Gilded Age. Second, the Pendleton Act required entrance exams for aspiring bureaucrats. One result of this reform was more expertise and less politics among members of the civil service. An unintended result was political parties’ increasing reliance on funding from business, since they could no longer depend on patronage hopefuls.
The CSRA became law in 1978. Civil service laws have consistently protected federal employees from political influence, and critics of the system complained that it was impossible for managers to improve performance and implement changes recommended by political leaders. The CSRA was an attempt to reconcile the need for improved performance with the need for protection of employees.
13.4.2: Termination
Bureaucratic reform includes the history of civil service reform and efforts to curb or eliminate excessive bureaucratic red tape.
Learning Objective
Describe the efforts undertaken to reform the civil service
Key Points
- A bureaucracy is a group of specifically non-elected officials within a government or other institution that implements the rules, laws, ideas, and functions of their institution. A bureaucrat is a member of a bureaucracy and can comprise the administration of any organization of any size.
- Civil service reform is a deliberate action to improve the efficiency, effectiveness, professionalism, representation and democratic character of a bureaucracy, with a view to promoting better delivery of public goods and services, with increased accountability.
- Red tape is excessive regulation or rigid conformity to formal rules that is considered redundant or bureaucratic and hinders or prevents action or decision-making.
- The “cutting of red tape” is a popular electoral and policy promise. In the United States, a number of committees have discussed and debated Red Tape Reduction Acts.
Key Terms
- civil service reform
-
Civil service reform is a deliberate action to improve the efficiency, effectiveness, professionalism, representation and democratic character of a bureaucracy, with a view to promoting better delivery of public goods and services, with increased accountability.
- bureaucracy
-
The structure and regulations in place to control activity, usually in large organizations and government operations.
- red tape
-
A derisive term for regulations or bureaucratic procedures that are considered excessive or excessively time- and effort-consuming.
Example
- The Pendleton Civil Service Reform Act of the United States is a federal law established in 1883 that stipulated that government jobs should be awarded on the basis of merit. The act provided for the selection of government employees by competitive exams, rather than by ties to politicians or political affiliation. It also made it illegal to fire or demote government employees for political reasons and prohibited soliciting campaign donations on federal government property.
Introduction
A bureaucracy is a group of specifically non-elected officials within a government or other institution that implements the rules, laws, ideas, and functions of their institution; in other words, it is a government administrative unit that carries out the decisions of the legislature or the democratically elected representation of a state. Bureaucracy may also be defined as a form of government: “government by many bureaus, administrators, and petty officials.” A government is defined as “the political direction and control exercised over the actions of the members, citizens, or inhabitants of communities, societies, and states; direction of the affairs of a state, community, etc.” Democracy, on the other hand, is defined as “government by the people; a form of government in which the supreme power is vested in the people and exercised directly by them or by their elected agents under a free electoral system,” thus not by non-elected bureaucrats.
A bureaucrat is a member of a bureaucracy and can comprise the administration of any organization of any size, though the term usually connotes someone within an institution of government. Bureaucrat jobs were often “desk jobs” (the French for “desk” being bureau, though bureau can also be translated as “office”), though the modern bureaucrat may be found “in the field” as well as in an office.
Civil Service Reform
A civil servant is a person in the public sector employed for a government department or agency. The term explicitly excludes the armed services, although civilian officials can work at “Defence Ministry” headquarters. Civil service reform is a deliberate action to improve the efficiency, effectiveness, professionalism, representation and democratic character of a bureaucracy, with a view to promoting better delivery of public goods and services, with increased accountability. Such actions can include data gathering and analysis, organizational restructuring, improving human resource management and training, enhancing pay and benefits while assuring sustainability under overall fiscal constraints, and strengthening measures for public participation, transparency, and combating corruption. Important differences between developing countries and developed countries require that civil service and other reforms first rolled out in developed countries be carefully adapted to local conditions in developing countries.
The Problem of Bureaucratic Red Tape
Red tape is excessive regulation or rigid conformity to formal rules that is considered redundant or bureaucratic and that hinders or prevents action or decision-making. The term is usually applied to governments, corporations, and other large organizations. Red tape generally includes filling out paperwork, obtaining licenses, having multiple people or committees approve a decision, and various low-level rules that make conducting one’s affairs slower, more difficult, or both. Red tape can also include “filing and certification requirements, reporting, investigation, inspection and enforcement practices, and procedures.” The “cutting of red tape” is a popular electoral and policy promise. In the United States, a number of committees have discussed and debated Red Tape Reduction Acts.
Bureaucratic Red Tape
Bundle of U.S. pension documents from 1906 bound in red tape.
Examples of Bureaucratic Reform in the United States
The Pendleton Civil Service Reform Act is a United States federal law, established in 1883, that stipulated that government jobs should be awarded on the basis of merit. The act provided for the selection of government employees through competitive exams rather than through ties to politicians or political affiliation. It also made it illegal to fire or demote government employees for political reasons and prohibited soliciting campaign donations on federal government property. To enforce the merit system, the law also created the United States Civil Service Commission. A crucial result was the shift of the parties to reliance on funding from business, since they could no longer depend on patronage hopefuls.
The Paperwork Reduction Act of 1980 is a United States federal law that gave authority over the collection of certain information to the Office of Management and Budget (OMB). Within the OMB, the Office of Information and Regulatory Affairs (OIRA) was established with specific authority to regulate matters regarding federal information and to establish information policies. These information policies were intended to reduce the total amount of paperwork handled by the United States government and the general public. A byproduct is that it has become harder to track internal transfers to tax havens in Consolidated Corporate Income Tax returns.
13.4.3: Devolution
Devolution is the statutory granting of powers from central government to government at a regional, local, or state level.
Learning Objective
Describe the relationship between a central government and a subordinate entity in possession of certain “devolved” powers
Key Points
- Devolution differs from federalism in that the devolved powers of the subnational authority may be temporary and ultimately reside in central government.
- In the United States, the District of Columbia is a devolved government. The District is separate from any state and has its own elected government.
- Local governments like municipalities, counties, parishes, boroughs and school districts are devolved. They are established and regulated by the constitutions or laws of the state in which they reside.
Key Term
- statutory
-
Of, relating to, enacted or regulated by a statute.
Devolution is the statutory granting of powers from central government to government at a regional, local, or state level. The power to make legislation relevant to the area may also be granted. Devolution differs from federalism in that the devolved powers of the subnational authority may be temporary and ultimately reside in central government. Legislation creating devolved parliaments or assemblies can be repealed or amended by central government in the same way as any statute. Federal systems differ in that state or provincial government is guaranteed in the constitution.
The District of Columbia in the United States offers an illustration of devolved government. The District is separate from any state and has its own elected government, which operates much like that of a state, with its own laws and court system. However, while the broad range of powers reserved for the 50 states cannot be voided by any act of the U.S. federal government, the District of Columbia is constitutionally under the control of the United States Congress, which created the current District government. Any law passed by the District legislature can be nullified by Congressional action. Indeed, the District government itself could be significantly altered by a simple majority vote in Congress.
District of Columbia
The District of Columbia is an example of devolved government.
In the United States, local governments are subdivisions of states, while the federal government, state governments and federally recognized American Indian tribal nations are recognized by the United States Constitution. Theoretically, a state could abolish all local governments within its borders.
Local governmental entities like municipalities, counties, parishes, boroughs and school districts are devolved. This is because they are established and regulated by the constitutions or laws of the state in which they reside. In most cases, U.S. state legislatures have the power to change laws that affect local government. The governor of some states may also have power over local government affairs.
13.4.4: Privatization
Privatization is the process of transferring ownership of a business from the public sector to the private sector.
Learning Objective
Differentiate between two different senses of privatization
Key Points
- Privatization can also mean government outsourcing of services or functions to private firms, e.g. revenue collection, law enforcement, and prison management.
- Privatization has also been used to describe two unrelated transactions: the buying of all outstanding shares of a publicly traded company by a single entity, making the company private, or the demutualization of a mutual organization or cooperative to form a joint stock company.
- Outsourcing is the contracting out of a business process, which an organization may have previously performed internally or has a new need for, to an independent organization from which the process is purchased back as a service.
- Though the practice of purchasing a business function—instead of providing it internally—is a common feature of any modern economy, the term outsourcing became popular in America near the turn of the 21st century.
Key Terms
- outsourcing
-
The transfer of a business function to an external service provider
- privatization
-
The transfer of a company or organization from government to private ownership and control.
Example
- Privatization of government functions is evidenced in the administration of transportation, such as the 2008 sale of the proceeds from Chicago parking meters for 75 years.
Introduction
Privatization can have several meanings. Primarily, it is the process of transferring ownership of a business, enterprise, agency, public service, or public property from the public sector (a government) to the private sector, either to a business that operates for a profit or to a non-profit organization. The term can also mean government outsourcing of services or functions to private firms, e.g. revenue collection, law enforcement, and prison management. Privatization has also been used to describe two unrelated transactions. The first is the buying of all outstanding shares of a publicly traded company by a single entity, making the company private. This is often described as private equity. The second is a demutualization of a mutual organization or cooperative to form a joint stock company.
Outsourcing is the contracting out of a business process, which an organization may have previously performed internally or has a new need for, to an independent organization from which the process is purchased back as a service. Though the practice of purchasing a business function—instead of providing it internally—is a common feature of any modern economy, the term outsourcing became popular in America near the turn of the 21st century. An outsourcing deal may also involve transfer of the employees and assets involved to the outsourcing business partner. The definition of outsourcing includes both foreign and domestic contracting, which may include offshoring, described as “a company taking a function out of their business and relocating it to another country.”
Some privatization transactions can be interpreted as a form of secured loan and are criticized as a “particularly noxious form of governmental debt.” In this interpretation, the upfront payment from the privatization sale corresponds to the principal amount of the loan, while the proceeds from the underlying asset correspond to secured interest payments; the transaction can be considered substantively the same as a secured loan, though it is structured as a sale. This interpretation is particularly argued to apply to recent municipal transactions in the United States, particularly those for a fixed term, such as the 2008 sale of the proceeds from Chicago parking meters for 75 years. It is argued that this is motivated by “politicians’ desires to borrow money surreptitiously,” due to legal restrictions on and political resistance to alternative sources of revenue, namely raising taxes or issuing debt.
Outsourcing in the United States
“Outsourcing” became a popular political issue in the United States during the 2004 U.S. presidential election, having been conflated with offshoring. The political debate centered on outsourcing’s consequences for the domestic U.S. workforce. During his 2004 campaign, Democratic presidential candidate John Kerry criticized U.S. firms that outsource jobs abroad or that incorporate overseas in tax havens to avoid paying their “fair share” of U.S. taxes, calling such firms “Benedict Arnold corporations.”
Criticism of outsourcing, from the perspective of U.S. citizens, generally revolves around the costs associated with transferring control of the labor process to an external entity in another country. A Zogby International poll conducted in August 2004 found that 71% of American voters believed that “outsourcing jobs overseas” hurt the economy, while 62% believed that the U.S. government should impose some legislative action against companies that transfer domestic jobs overseas, possibly in the form of increased taxes on companies that outsource.
Union busting is one possible cause of outsourcing. As unions are weakened by union-busting legislation, workers lose bargaining power and it becomes easier for corporations to fire them and ship their jobs overseas. Another rationale given is the high corporate income tax rate in the U.S. relative to other OECD nations, together with the fact that revenues earned outside U.S. jurisdiction are rarely taxed. However, outsourcing is not solely a U.S. phenomenon: corporations in various nations with low tax rates outsource as well, which means that high taxation can only partially, if at all, explain U.S. outsourcing. For example, corporate outsourcing in 1950 was considerably lower than it is today, yet the tax rate was higher in 1950.
Union Busting
Pinkerton guards escort strikebreakers in Buchtel, Ohio, 1884.
It is argued that lowering the corporate income tax and ending the double taxation of foreign-derived revenue (taxed once in the nation where the revenue was raised, and again by the U.S.) would alleviate corporate outsourcing and make the U.S. more attractive to foreign companies. However, while the U.S. has a high official tax rate, the actual taxes paid by U.S. corporations may be considerably lower due to the use of tax loopholes, tax havens, and “gaming the system.” Rather than avoiding taxes, outsourcing may be driven mostly by the desire to lower labor costs. Sarbanes-Oxley has also been cited as a factor in corporate flight from U.S. jurisdiction.
13.4.5: Sunshine Laws
Sunshine laws embody the liberal democratic principle that governments are bound by a duty to publish information and promote openness.
Learning Objective
Summarize the obligations that the Freedom of Information Act places on executive branch government agencies
Key Points
- Sunshine laws establish a “right-to-know” legal process by which requests may be made for government-held information, to be received freely or at minimal cost, barring standard exceptions.
- In the United States the Freedom of Information Act was signed into law by President Lyndon B. Johnson on July 4, 1966. It went into effect the following year.
- The Freedom of Information Act (FOIA) is a federal freedom of information law that allows for the full or partial disclosure of previously unreleased information and documents controlled by the United States government.
- The Act applies only to federal agencies. However, all of the states, as well as the District of Columbia and some territories, have enacted similar statutes to require disclosures by agencies of the state and of local governments, although some are significantly broader than others.
- The Electronic Freedom of Information Act Amendments of 1996 (E-FOIA) stated that all agencies are required by statute to make certain types of records, created by the agency on or after November 1, 1996, available electronically.
- A major issue in released documentation is government “redaction” of certain passages deemed to fall under the exemption sections of the FOIA. The FBI’s extensive use of redaction has been considered highly controversial, given that it prevents further research and inquiry.
Key Term
- redaction
-
The process of editing or censoring.
Introduction
Freedom of information laws give the general public access to data held by national governments. They establish a “right-to-know” legal process by which requests may be made for government-held information, to be received freely or at minimal cost, barring standard exceptions. Variously referred to as open records or sunshine laws in the United States, such laws also typically bind governments to a duty to publish and promote openness. In many countries there are constitutional guarantees of the right of access to information, but these are usually unusable without specific supporting legislation.
Freedom of Information Act
In the United States, the Freedom of Information Act was signed into law by President Lyndon B. Johnson on July 4, 1966, and went into effect the following year. In essence, the Freedom of Information Act (FOIA) is a federal freedom of information law that allows for the full or partial disclosure of previously unreleased information and documents controlled by the United States government. The Act defines agency records subject to disclosure, outlines mandatory disclosure procedures, and grants nine exemptions to the statute.
The act explicitly applies only to executive branch government agencies. These agencies are under several mandates to comply with public solicitation of information. Along with making public and accessible all bureaucratic and technical procedures for applying for documents from that agency, agencies are also subject to penalties for hindering the process of a petition for information. If “agency personnel acted arbitrarily or capriciously with respect to the withholding, [a] Special Counsel shall promptly initiate a proceeding to determine whether disciplinary action is warranted against the officer or employee who was primarily responsible for the withholding.” In this way, there is recourse for one seeking information to go to a federal court if suspicion of illegal tampering or delayed sending of records exists. However, there are nine exemptions, ranging from a withholding “specifically authorized under criteria established by an Executive order to be kept secret in the interest of national defense or foreign policy” and “trade secrets” to “clearly unwarranted invasion of personal privacy.”
The Electronic Freedom of Information Act Amendments were signed by President Bill Clinton on October 2, 1996. The Electronic Freedom of Information Act Amendments of 1996 (E-FOIA) stated that all agencies are required by statute to make certain types of records, created by the agency on or after November 1, 1996, available electronically. Agencies must also provide electronic reading rooms for citizens to use to have access to records. Given the large volume of records and limited resources, the amendment also extended the agencies’ required response time to FOIA requests. Formerly, the response time was ten days and the amendment extended it to twenty days.
Notable Cases
A major issue in released documentation is government “redaction” of certain passages deemed to fall under the exemption sections of the FOIA. Federal Bureau of Investigation (FBI) officers in charge of responding to FOIA requests have prevented further research by making extensive use of redaction in documents, whether in print or electronic form. This practice has also raised the question of how a requester can verify that they have received complete records in response to a request.
FBI Redaction
In the early 1970s, the U.S. government conducted surveillance on ex-Beatle John Lennon. This is a letter from FBI director J. Edgar Hoover to the Attorney General. After a 25-year Freedom of Information Act request battle initiated by historian Jon Wiener, the files were released. Here is one page from the file. The first release received by Wiener had some information missing: it had been blacked out, presumably with a marker, that is, “redacted.” A subsequent version was released that showed almost all of the previously blacked-out text.
This unwillingness to release records was especially evident in the process of making public the FBI files on J. Edgar Hoover. Of the 164 files and roughly eighteen thousand pages collected by the FBI, two-thirds were withheld from Athan G. Theoharis and the plaintiffs, most notably one entire folder entitled the “White House Security Survey.”
J. Edgar Hoover
John Edgar Hoover (January 1, 1895 – May 2, 1972) was the first Director of the Federal Bureau of Investigation (FBI) of the United States. Appointed director of the Bureau of Investigation—predecessor to the FBI—in 1924, he was instrumental in founding the FBI in 1935, where he remained director until his death in 1972 at age 77. Hoover is credited with building the FBI into a large and efficient crime-fighting agency, and with instituting a number of modernizations to police technology, such as a centralized fingerprint file and forensic laboratories.
In the case of Scott Armstrong v. Executive Office of the President, et al., the White House used the PROFS computer communications software. With encryption designed for secure messaging, PROFS notes concerning the Iran-Contra (arms-for-hostages) affair under the Reagan Administration were insulated from disclosure. However, they were also backed up and transferred to paper memos. The National Security Council, on the eve of President George H.W. Bush’s inauguration, planned to destroy these records.
13.4.6: Sunset Laws
A sunset provision is a measure within a statute that provides that a law shall cease to be in effect after a specific date.
Learning Objective
Describe sunset clauses and several prominent examples of their use
Key Points
- In American federal law parlance, legislation that is meant to renew an expired mandate is known as a reauthorization act or extension act. Reauthorizations of controversial laws or agencies may be preceded by extensive political wrangling before final votes.
- The Sedition Act of 1798 was a political tool used by John Adams and the Federalist Party to suppress opposition. It contained a sunset provision ensuring that the law would cease at the end of Adams’ term so it could not be used against the Federalists.
- Several surveillance portions of the USA Patriot Act were originally set to expire on December 31, 2005. These were later renewed but expired again on March 10, 2006, and were renewed once more in 2010.
- The Congressional Budget Act governs the role of Congress in the budget process. Among other provisions, it affects Senate rules of debate during the budget reconciliation, not least by preventing the use of the filibuster against the budget resolutions.
- In the Economic Growth and Tax Relief Reconciliation Act of 2001 the US Congress enacted a phase-out of the federal estate tax over the following 10 years, so that the tax would be completely repealed in 2010.
Key Terms
- mandate
-
An official or authoritative command; a judicial precept.
- warrant
-
Authorization or certification; sanction, as given by a superior.
- sedition
-
The organized incitement of rebellion or civil disorder against authority or the state.
Sunset Laws
A sunset provision or clause in public policy is a measure within a statute, regulation, or other law that provides for the law to cease to have effect after a specific date, unless further legislative action is taken to extend the law. Most laws do not have sunset clauses and therefore remain in force indefinitely.
In American federal law parlance, legislation that is meant to renew an expired mandate is known as a reauthorization act or extension act. Extensive political wrangling before final votes may precede reauthorizations of controversial laws or agencies.
The Sedition Act of 1798, a political tool used by John Adams and the Federalist Party to suppress opposition, contained a sunset provision. Its authors ensured that the act would terminate at the end of Adams’s term so that the Democratic-Republicans could not later use it against the Federalists.
John Adams
John Adams and his Federalist Party used a sunset provision in the Sedition Act of 1798 to ensure that the Sedition Act would cease once Adams was out of office.
Several surveillance portions of the USA Patriot Act were originally set to expire on December 31, 2005. These were later renewed, but expired again on March 10, 2006, and were renewed once more in 2010. The Patriot Act contains sunset provisions covering wiretapping in terrorism cases, wiretapping in computer fraud and abuse cases, sharing of wiretap and foreign intelligence information, warranted seizure of voicemail messages, computer trespasser communications, nationwide service of warrants for electronic evidence, and civil liability for privacy violations.
The Congressional Budget Act governs the role of Congress in the budget process. Among other provisions, it affects Senate rules of debate during the budget reconciliation, not least by preventing the use of the filibuster against the budget resolutions. The Byrd rule was adopted in 1985 and amended in 1990 to modify the Budget Act and is contained in section 313. The rule allows Senators to raise a point of order against any provision held to be extraneous, where extraneous is defined according to one of several criteria. The definition of extraneous includes provisions that are outside the jurisdiction of the committee or that do not affect revenues or outlays.
In the Economic Growth and Tax Relief Reconciliation Act of 2001, the US Congress enacted a phase-out of the federal estate tax over the following 10 years, so that the tax would be completely repealed in 2010. However, while a majority of the Senate favored the repeal, there was not a three-fifths supermajority in favor. Therefore, a sunset provision in the Act reinstated the tax to its original levels on January 1, 2011 in order to comply with the Byrd Rule. As of April 2011, Republicans in Congress had tried to repeal the sunset provision, but their efforts had been unsuccessful. Uncertainty over the prolonged existence of the sunset provision has made estate planning more complicated.
13.4.7: Incentives for Efficiency and Productivity
Efficiency is the extent to which effort is used for a task and productivity is the measure of the efficiency of production.
Learning Objective
Summarize the Efficiency Movement and the institutions it helped establish
Key Points
- The Efficiency Movement played a central role in the Progressive Era (1890-1932) in the United States.
- The result was strong support for building research universities and schools of business and engineering, municipal research agencies, as well as reform of hospitals and medical schools and the practice of farming.
- At the national level, productivity growth raises living standards because more real income improves people’s ability to purchase goods and services, enjoy leisure, improve housing and education, and contribute to social and environmental programs.
Key Terms
- operationalization
-
The act or process of operationalizing.
- manifold
-
Exhibited at diverse times or in various ways.
Efficiency describes the extent to which time or effort is well used for an intended task or purpose. It refers to the capability of a specific application of effort to produce a specific outcome with a minimum amount of waste, expense, or unnecessary effort.
The Efficiency Movement was a major movement in the United States, Britain and other industrial nations in the early 20th century that sought to identify and eliminate waste in all areas of the economy and society and to develop and implement best practices. The concept covered mechanical, economic, social and personal improvement. The quest for efficiency promised effective, dynamic management rewarded by growth.
The movement played a central role in the Progressive Era in the United States, where it flourished from 1890 to 1932. Adherents argued that all aspects of the economy, society, and government were riddled with waste and inefficiency, and that everything would be better if experts identified the problems and fixed them. The result was strong support for building research universities and schools of business and engineering, for municipal research agencies, and for the reform of hospitals, medical schools, and the practice of farming. Perhaps the best known leaders were the engineers Frederick Taylor (1856–1915), who used a stopwatch to identify the smallest inefficiencies, and Frank Gilbreth (1868–1924), who proclaimed there was always one best way to fix a problem.
Frederick Winslow Taylor
Frederick Winslow Taylor, a mechanical engineer by training, is often credited with inventing scientific management and improving industrial efficiency.
Productivity is a measure of the efficiency of production: a ratio of production output to what is required to produce it. The measure of productivity is defined as total output per unit of total input. In order to obtain a measurable form of productivity, operationalization of the concept is necessary. A production model is a numerical expression of the production process, based on production data.
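To make the ratio concrete, it can be written as a simple formula. The worked numbers below are an illustrative example, not data from the text:

\[
\text{Productivity} = \frac{\text{total output}}{\text{total input}},
\qquad \text{e.g.,} \quad \frac{200 \text{ units}}{40 \text{ labor-hours}} = 5 \text{ units per labor-hour}.
\]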
The benefits of high productivity are manifold. At the national level, productivity growth raises living standards because more real income improves people’s ability to purchase goods and services, enjoy leisure, improve housing and education and contribute to social and environmental programs. Productivity growth is important to the firm because more real income means that the firm can meet its obligations to customers, suppliers, workers, shareholders and governments and still remain competitive or even improve its competitiveness in the market place.
13.4.8: Protecting Whistleblowers
There exist several U.S. laws protecting whistleblowers, people who inform authorities of alleged dishonest or illegal activities.
Learning Objective
Describe whistleblowers and the protections afforded them under various laws
Key Points
- Alleged misconduct may be classified as a violation of a law, rule, regulation or a direct threat to public interest in the realms of fraud, health/safety violations and corruption.
- The 1863 United States False Claims Act encourages whistleblowers by promising them a percentage of damages won by the government. It also protects them from wrongful dismissal.
- Investigation of retaliation against whistleblowers falls under the jurisdiction of the Office of the Whistleblower Protection Program (OWPP) of the Department of Labor’s Occupational Safety and Health Administration (OSHA).
- Whistleblowers frequently face reprisal at the hands of the accused organization, related organizations, or under law.
Key Term
- qui tam
-
A writ whereby a private individual who assists a prosecution can receive all or part of any penalty imposed.
A whistleblower is a person who tells the public or someone in authority about alleged dishonest or illegal activities occurring in a government department, private company or organization. The misconduct may be classified as a violation of a law, rule, regulation or a direct threat to public interest in the realms of fraud, health/safety violations and corruption. Whistleblowers may make their allegations internally or externally to regulators, law enforcement agencies or the media.
One of the first laws to protect whistleblowers was the 1863 United States False Claims Act, which sought to combat fraud by suppliers of the United States government during the Civil War. The act promises whistleblowers a percentage of damages won by the government and protects them from wrongful dismissal.
The Lloyd-La Follette Act of 1912 guaranteed the right of federal employees to furnish information to Congress. The first U.S. environmental law to include employee protection was the Clean Water Act of 1972. Similar protections were included in subsequent federal environmental laws, including the Safe Drinking Water Act (1974), the Energy Reorganization Act of 1974, and the Clean Air Act (1990). In passing the 2002 Sarbanes-Oxley Act, the Senate Judiciary Committee found that whistleblower protections were dependent on the vagaries of varying state statutes.
Clean Air Act
The signing of the Clean Air Act, one of several U.S. environmental laws offering employee protection as a result of whistleblower action.
Whistleblowers frequently face reprisal at the hands of the accused organization, related organizations, or under law. Investigation of retaliation against whistleblowers falls under the jurisdiction of the Office of the Whistleblower Protection Program (OWPP) of the Department of Labor’s Occupational Safety and Health Administration (OSHA). The patchwork of laws means that victims of retaliation must determine the deadlines and means for making proper complaints.
Those who report a false claim against the federal government and as a result suffer adverse employment actions may have up to six years to file a civil suit for remedies under the US False Claims Act. Under a qui tam provision, the original source for the report may be entitled to a percentage of what the government recovers from the offenders. The original source must be the first to file a federal civil complaint for recovery of the funds fraudulently obtained, while also avoiding publicizing the fraud claim until the Justice Department decides whether to prosecute the claim.
Federal employees could benefit from the Whistleblower Protection Act as well as the No-Fear Act, which made individual agencies directly responsible for the economic sanctions of unlawful retaliation.