Chapter 2: Strategy and Technology: Concepts and Frameworks for Understanding What Separates Winners from Losers

2.1 Introduction

Learning Objectives

After studying this section you should be able to do the following:

  1. Define operational effectiveness and understand the limitations of technology-based competition that relies solely on this principle.
  2. Define strategic positioning and the importance of grounding competitive advantage in this concept.
  3. Understand the resource-based view of competitive advantage.
  4. List the four characteristics of a resource that may yield sustainable competitive advantage.

Managers are confused, and for good reason. Management theorists, consultants, and practitioners often vehemently disagree on how firms should craft tech-enabled strategy, and many widely read articles contradict one another. Headlines such as “Move First or Die” compete with “The First-Mover Disadvantage.” A leading former CEO advises, “destroy your business,” while others suggest firms focus on their “core competency” and “return to basics.” The pages of the Harvard Business Review declare, “IT Doesn’t Matter,” while a New York Times bestseller hails technology as the “steroids” of modern business.

Theorists claiming to have mastered the secrets of strategic management are contentious and confusing. But as a manager, the ability to size up a firm’s strategic position and understand its likelihood of sustainability is one of the most valuable and yet most difficult skills to master. Layer on thinking about technology—a key enabler to nearly every modern business strategy, but also a function often thought of as easily “outsourced”—and it’s no wonder that so many firms struggle at the intersection where strategy and technology meet. The business landscape is littered with the corpses of firms killed by managers who guessed wrong.

Developing strong strategic thinking skills is a career-long pursuit, a subject that can occupy tomes of text, a roster of courses, and a lifetime of seminars. While this chapter can’t address the breadth of strategic thought, it is meant as a primer on developing the skills for strategic thinking about technology. A manager who understands the issues presented in this chapter should be able to see more clearly through seemingly conflicting assertions about best practices; be better prepared to recognize opportunities and risks; and be more adept at successfully brainstorming new, tech-centric approaches to markets.

The Danger of Relying on Technology

Firms strive for sustainable competitive advantage, financial performance that consistently outperforms their industry peers. The goal is easy to state, but hard to achieve. The world is so dynamic, with new products and new competitors rising seemingly overnight, that truly sustainable advantage might seem like an impossibility. New competitors and copycat products create a race to cut costs, cut prices, and increase features that may benefit consumers but erode profits industry-wide. Nowhere is this balance more difficult than when competition involves technology. The fundamental strategic question in the Internet era is, “How can I possibly compete when everyone can copy my technology and the competition is just a click away?” Put that way, the pursuit of sustainable competitive advantage seems like a lost cause.

But there are winners—big, consistent winners—empowered through their use of technology. How do they do it? In order to think about how to achieve sustainable advantage, it’s useful to start with two concepts defined by Michael Porter. A professor at the Harvard Business School and father of the value chain and the five forces concepts (see the sections later in this chapter), Porter is justifiably considered one of the leading strategic thinkers of our time.

According to Porter, the reason so many firms suffer aggressive, margin-eroding competition is that they’ve defined themselves according to operational effectiveness rather than strategic positioning. Operational effectiveness refers to performing the same tasks better than rivals perform them. Everyone wants to be better, but the danger in operational effectiveness is “sameness.” This risk is particularly acute in firms that rely on technology for competitiveness. After all, technology can be easily acquired. Buy the same stuff as your rivals, hire students from the same schools, copy the look and feel of competitor Web sites, reverse engineer their products, and you can match them. The fast follower problem exists when savvy rivals watch a pioneer’s efforts, learn from its successes and missteps, then enter the market quickly with a comparable or superior product at a lower cost, before the first mover can dominate.

Since tech can be copied so quickly, followers can be fast, indeed. Several years ago while studying the Web portal industry (Yahoo! and its competitors), a colleague and I found that when a firm introduced an innovative feature, at least one of its three major rivals would match that feature in, on average, only one and a half months.J. Gallaugher and C. Downing, “Portal Combat: An Empirical Study of Competition in the Web Portal Industry,” Journal of Information Technology Management 11, no. 1–2 (2000): 13–24. When technology can be matched so quickly, it is rarely a source of competitive advantage. And this phenomenon isn’t limited to the Web.

Tech giant EMC saw its stock price appreciate more than any other firm during the decade of the 1990s. However, when IBM and Hitachi entered the high-end storage market with products comparable to EMC’s Symmetrix unit, prices plunged 60 percent the first year and another 35 percent the next.P. Engardio and F. F. Keenan, “The Copycat Economy,” BusinessWeek, August 26, 2002. Needless to say, EMC’s stock price took a comparable beating. TiVo is another example. At first blush, it looks like this first mover should be a winner since it seems to have established a leading brand; TiVo is now a verb for digitally recording TV broadcasts. But despite this, TiVo has largely been a money loser, going years without posting an annual profit. And while 1.5 million TiVos have been sold, there are over thirty million digital video recorders (DVRs) in use.N. DiMeo, “TiVo’s Goal with New DVR: Become the Google of TV,” Morning Edition, National Public Radio, April 7, 2010. Rival devices offered by cable and satellite companies appear the same to consumers, and are offered along with pay television subscriptions—a critical distribution channel for reaching customers that TiVo doesn’t control.

Operational effectiveness is critical. Firms must invest in techniques to improve quality, lower cost, and design efficient customer experiences. But for the most part, these efforts can be matched. Because of this, operational effectiveness is usually not sufficient to yield sustainable dominance over the competition. In contrast to operational effectiveness, strategic positioning refers to performing different activities from those of rivals, or the same activities in a different way. While technology itself is often very easy to replicate, technology is essential to creating and enabling novel approaches to business that are defensibly different from those of rivals and can be quite difficult for others to copy.

Different Is Good: FreshDirect Redefines the NYC Grocery Landscape

For an example of the relationship between technology and strategic positioning, consider FreshDirect. The New York City–based grocery firm focused on the two most pressing problems for Big Apple shoppers: selection is limited and prices are high. Both of these problems are a function of the high cost of real estate in New York. The solution? Use technology to craft an ultraefficient model that makes an end-run around stores.

The firm’s “storefront” is a Web site offering one-click menus, semiprepared specials like “meals in four minutes,” and the ability to pull up prior grocery lists for fast reorders—all features that appeal to the time-strapped Manhattanites who were the firm’s first customers. (The Web’s not the only channel to reach customers—the firm’s iPhone app was responsible for 2.5 percent of sales just weeks after launch.)R. M. Schneiderman, “FreshDirect Goes to Greenwich,” Wall Street Journal, April 6, 2010. Next-day deliveries are from a vast warehouse the size of five football fields located in a lower-rent industrial area of Queens. At that size, the firm can offer a fresh goods selection that’s over five times larger than local supermarkets. Area shoppers—many of whom don’t have cars or are keen to avoid the traffic-snarled streets of the city—were quick to embrace the model. The service is now so popular that apartment buildings in New York have begun to redesign common areas to include secure freezers that can accept FreshDirect deliveries, even when customers aren’t there.L. Croghan, “Food Latest Luxury Lure,” New York Daily News, March 12, 2006.

Figure 2.1 The FreshDirect Web Site and the Firm’s Tech-Enabled Warehouse Operation

The FreshDirect model crushes costs that plague traditional grocers. Worker shifts are highly efficient, avoiding the downtime lulls and busy rush hour spikes of storefronts. The result? Labor costs that are 60 percent lower than at traditional grocers. FreshDirect buys and prepares what it sells, leading to less waste, an advantage that the firm claims is “worth 5 percentage points of total revenue in terms of savings.”P. Fox, “Interview with FreshDirect Co-Founder Jason Ackerman,” Bloomberg Television, June 17, 2009. Overall perishable inventory at FreshDirect turns 197 times a year versus 40 times a year at traditional grocers.E. Schonfeld, “The Big Cheese of Online Grocers Joe Fedele’s Inventory-Turning Ideas May Make FreshDirect the First Big Web Supermarket to Find Profits,” Business 2.0, January 1, 2004. Higher inventory turns (sometimes referred to as inventory turnover, stock turns, or stock turnover: the number of times inventory is sold or used during the course of a year) mean the firm is selling product faster, so it collects money more quickly than its rivals do. And those goods are fresher since they’ve been in stock for less time, too. Consider that while the average grocer may have seven to nine days of seafood inventory, FreshDirect’s seafood stock turns each day. Stock is typically purchased direct from the docks in order to fulfill orders placed less than twenty-four hours earlier.T. Laseter, B. Berg, and M. Turner, “What FreshDirect Learned from Dell,” Strategy+Business, February 12, 2003.
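The relationship between inventory turns and freshness is simple arithmetic: dividing the days in a year by annual turns gives the average time an item sits in stock. The sketch below uses the figures cited above; the function name is just an illustration.

```python
# Illustrative sketch: how annual inventory turns translate into the average
# number of days an item sits in stock. Turn figures are those cited in the
# text; the day counts follow directly from the division.

def days_on_hand(turns_per_year: float, days_per_year: int = 365) -> float:
    """Average days an item spends in inventory, given annual inventory turns."""
    return days_per_year / turns_per_year

fresh_direct_turns = 197   # FreshDirect perishable inventory turns per year
traditional_turns = 40     # traditional grocer turns per year

print(f"FreshDirect: {days_on_hand(fresh_direct_turns):.1f} days on hand")      # ~1.9 days
print(f"Traditional grocer: {days_on_hand(traditional_turns):.1f} days on hand") # ~9.1 days
```

At 197 turns a year, goods average under two days on hand versus more than nine at 40 turns, which is consistent with the seafood comparison above.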

Artificial intelligence software, coupled with some seven miles of fiber-optic cables linking systems and sensors, supports everything from baking the perfect baguette to verifying orders with 99.9 percent accuracy.J. Black, “Can FreshDirect Bring Home the Bacon?” BusinessWeek, September 24, 2002; S. Sieber and J. Mitchell, “FreshDirect: Online Grocery that Actually Delivers!” IESE Insight, 2007. Since it lacks the money-sucking open-air refrigerators of the competition, the firm even saves big on energy (instead, staff bundle up for shifts in climate-controlled cold rooms tailored to the specific needs of dairy, deli, and produce). And a new initiative uses recycled biodiesel fuel to cut down on delivery costs.

FreshDirect buys directly from suppliers, eliminating middlemen wherever possible. The firm also offers suppliers several benefits beyond traditional grocers, all in exchange for more favorable terms. These include offering to carry a greater selection of supplier products while eliminating the “slotting fees” (payments by suppliers for prime shelf space) common in traditional retail, cobranding products to help establish and strengthen supplier brand, paying partners in days rather than weeks, and sharing data to help improve supplier sales and operations. Add all these advantages together and the firm’s big, fresh selection is offered at prices that can undercut the competition by as much as 35 percent.H. Green, “FreshDirect,” BusinessWeek, November 24, 2003. And FreshDirect does it all with margins in the range of 20 percent (to as high as 45 percent on many semiprepared meals), easily dwarfing the razor-thin 1 percent margins earned by traditional grocers.S. Sieber and J. Mitchell, “FreshDirect: Online Grocery that Actually Delivers!” IESE Insight, 2007; D. Kirkpatrick, “The Online Grocer Version 2.0,” Fortune, November 25, 2002; P. Fox, “Interview with FreshDirect Co-Founder Jason Ackerman,” Bloomberg Television, June 17, 2009.

Today, FreshDirect serves a base of some 600,000 paying customers. That’s a population roughly the size of metro Boston, serviced by a single grocer with no physical store. The privately held firm has been solidly profitable for several years. Even in recession-plagued 2009, the firm’s CEO described earnings as “pretty spectacular,”P. Fox, “Interview with FreshDirect Co-Founder Jason Ackerman,” Bloomberg Television, June 17, 2009. while 2010 revenues are estimated to grow to roughly $300 million.R. M. Schneiderman, “FreshDirect Goes to Greenwich,” Wall Street Journal, April 6, 2010.

Technology is critical to the FreshDirect model, but it’s the collective impact of the firm’s differences when compared to rivals, this tech-enabled strategic positioning, that delivers success. Operating for more than half a decade, the firm has also built up a set of strategic assets that not only address specific needs of a market but are now extremely difficult for any upstart to compete against. Traditional grocers can’t fully copy the firm’s delivery business because this would leave them straddling two markets (low-margin storefront and high-margin delivery), attempting to occupy both positions while failing to match the benefits of a more efficient, singularly focused rival. Entry costs for would-be competitors are also high (the firm spent over $75 million building infrastructure before it could serve a single customer), and the firm’s complex and highly customized software, which handles everything from delivery scheduling to orchestrating the preparation of thousands of recipes, continues to be refined and improved each year.C. Valerio, “Interview with FreshDirect Co-Founder Jason Ackerman,” Venture, Bloomberg Television, September 18, 2009. On top of all this comes years of customer data used to further refine processes, speed reorders, and make helpful recommendations. Competing against a firm with such a strong and tough-to-match strategic position can be brutal. Just five years after launch there were one-third fewer supermarkets in New York City than when FreshDirect first opened for business.R. Shulman, “Groceries Grow Elusive for Many in New York City,” Washington Post, February 19, 2008.

But What Kinds of Differences?

The principles of operational effectiveness and strategic positioning are deceptively simple. But while Porter claims strategy is “fundamentally about being different,”M. Porter, “What Is Strategy?” Harvard Business Review 74, no. 6 (November–December 1996): 61–78. how can you recognize whether your firm’s differences are special enough to yield sustainable competitive advantage?

An approach known as the resource-based view of competitive advantage can help. The idea here is that if a firm is to maintain sustainable competitive advantage, it must control a set of exploitable resources that have four critical characteristics. These resources must be (1) valuable, (2) rare, (3) imperfectly imitable (tough to imitate), and (4) nonsubstitutable. Having all four characteristics is key. Miss value and no one cares what you’ve got. Without rareness, you don’t have something unique. If others can copy what you have, or others can replace it with a substitute, then any seemingly advantageous differences will be undercut.
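The "all four or nothing" logic of the framework can be made concrete with a toy checklist. The criteria come from the text; the example resource assessment is a hypothetical illustration, not a claim about any real firm.

```python
# A toy checklist applying the resource-based view's four criteria.
# A resource sustains advantage only if it satisfies ALL four; missing
# any single one undercuts the whole claim.

CRITERIA = ("valuable", "rare", "imperfectly_imitable", "nonsubstitutable")

def may_sustain_advantage(resource: dict) -> bool:
    """True only when every one of the four criteria holds for the resource."""
    return all(resource.get(c, False) for c in CRITERIA)

# Hypothetical assessment of commodity long-haul fiber circa 2000: valuable
# at first, but not rare (rivals laid identical cable) and effectively
# substitutable (multiplexing technology multiplied existing capacity).
fiber = {"valuable": True, "rare": False,
         "imperfectly_imitable": False, "nonsubstitutable": False}

print(may_sustain_advantage(fiber))  # False
```

The telecom story that follows shows what happens when a big bet fails this test on rareness and substitutability.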

Strategy isn’t just about recognizing opportunity and meeting demand. Resource-based thinking can help you avoid the trap of carelessly entering markets simply because growth is spotted. The telecommunications industry learned this lesson in a very hard and painful way. With the explosion of the Internet it was easy to see that demand to transport Web pages, e-mails, MP3s, video, and everything else you can turn into ones and zeros was skyrocketing.

Most of what travels over the Internet is transferred over long-haul fiber-optic cables, so telecom firms began digging up the ground and laying webs of fiber to meet the growing demand. Problems resulted because firms laying long-haul fiber didn’t fully appreciate that their rivals and new upstart firms were doing the exact same thing. By one estimate there was enough fiber laid to stretch from the Earth to the moon some 280 times!L. Kahney, “Net Speed Ain’t Seen Nothin’ Yet,” Wired News, March 21, 2000. On top of that, a technology called dense wave division multiplexing (DWDM), which splits the light inside a fiber-optic cable into different wavelengths (much as a prism splits light into different colors), enabled existing fiber to carry far more transmissions than ever before. The end result: these new assets weren’t rare, and each day they seemed to be less valuable.

For some firms, the transmission prices they charged on newly laid cable collapsed by over 90 percent. Established firms struggled, upstarts went under, and WorldCom became the biggest bankruptcy in U.S. history. The impact was felt throughout all industries that supplied the telecom industry. Firms like Sun, Lucent, and Nortel, whose sales growth relied on big sales to telecom carriers, saw their value tumble as orders dried up. Estimates suggest that the telecommunications industry lost nearly $4 trillion in value in just three years,L. Endlich, Optical Illusions: Lucent and the Crash of Telecom (New York: Simon & Schuster, 2004). much of it due to executives who placed big bets on resources that weren’t strategic.

Key Takeaways

  • Technology can be easy to copy, and technology alone rarely offers sustainable advantage.
  • Firms that leverage technology for strategic positioning use technology to create competitive assets or ways of doing business that are difficult for others to copy.
  • True sustainable advantage comes from assets and business models that are simultaneously valuable, rare, difficult to imitate, and for which there are no substitutes.

Questions and Exercises

  1. What is operational effectiveness?
  2. What is strategic positioning?
  3. Is a firm that competes based on the features of technology engaged in operational effectiveness or strategic positioning? Give an example to back up your claim.
  4. What is the “resource-based” view of competitive advantage? What are the characteristics of resources that may yield sustainable competitive advantage?
  5. TiVo has a great brand. Why hasn’t it profitably dominated the market for digital video recorders?
  6. Examine the FreshDirect business model and list reasons for its competitive advantage. Would a similar business work in your neighborhood? Why or why not?
  7. What effect did FreshDirect have on traditional grocers operating in New York City? Why?
  8. Choose a technology-based company. Discuss its competitive advantage based on the resources it controls.
  9. Use the resource-based view of competitive advantage to explain the collapse of many telecommunications firms in the period following the burst of the dot-com bubble.
  10. Consider the examples of Barnes and Noble competing with Amazon, and Apple offering iTunes. Are either (or both) of these efforts straddling? Why or why not?

2.2 Powerful Resources

Learning Objectives

After studying this section you should be able to do the following:

  1. Understand that technology is often critical to enabling competitive advantage, and provide examples of firms that have used technology to organize for sustained competitive advantage.
  2. Understand the value chain concept and be able to examine and compare how various firms organize to bring products and services to market.
  3. Recognize the role technology can play in crafting an imitation-resistant value chain, as well as when technology choice may render potentially strategic assets less effective.
  4. Define the following concepts: brand, scale, data and switching cost assets, differentiation, network effects, and distribution channels.
  5. Understand and provide examples of how technology can be used to create or strengthen the resources mentioned above.

Management has no magic bullets. There is no exhaustive list of key resources that firms can look to in order to build a sustainable business. And recognizing a resource doesn’t mean a firm will be able to acquire it or exploit it forever. But being aware of major sources of competitive advantage can help managers recognize an organization’s opportunities and vulnerabilities, and can help them brainstorm winning strategies. And these assets rarely exist in isolation. Oftentimes, a firm with an effective strategic position can create an arsenal of assets that reinforce one another, creating advantages that are particularly difficult for rivals to successfully challenge.

Imitation-Resistant Value Chains

While many of the resources below are considered in isolation, the strength of any advantage can be far more significant if firms are able to leverage several of these resources in a way that makes each stronger and makes the firm’s way of doing business more difficult for rivals to match. Firms that craft an imitation-resistant value chain have developed a way of doing business that others will struggle to replicate, and in nearly every successful effort of this kind, technology plays a key enabling role. The value chain is the set of interrelated activities that bring products or services to market (see below). When we compare FreshDirect’s value chain to traditional rivals, there are differences across every element. But most importantly, the elements in FreshDirect’s value chain work together to create and reinforce competitive advantages that others cannot easily copy. Incumbents would be left straddling two business models, unable to reap the full advantages of either. And late-moving pure-play rivals will struggle, as FreshDirect’s lead time allows the firm to develop brand, scale, data, and other advantages that newcomers lack (see below for more on these resources).

Key Framework: The Value Chain

The value chain is the “set of activities through which a product or service is created and delivered to customers.”M. Porter, “Strategy and the Internet,” Harvard Business Review 79, no. 3 (March 2001): 62–78. There are five primary components of the value chain and four supporting components. The primary components are as follows:

  • Inbound logistics—getting needed materials and other inputs into the firm from suppliers
  • Operations—turning inputs into products or services
  • Outbound logistics—delivering products or services to consumers, distribution centers, retailers, or other partners
  • Marketing and sales—customer engagement, pricing, promotion, and transaction
  • Support—service, maintenance, and customer support

The secondary components are the following:

  • Firm infrastructure—functions that support the whole firm, including general management, planning, IS, and finance
  • Human resource management—recruiting, hiring, training, and development
  • Technology / research and development—new product and process design
  • Procurement—sourcing and purchasing functions

While the value chain is typically depicted as it’s displayed in the figure below, goods and information don’t necessarily flow in a line from one function to another. For example, an order taken by the marketing function can trigger an inbound logistics function to get components from a supplier, operations functions (to build a product if it’s not available), or outbound logistics functions (to ship a product when it’s available). Similarly, information from service support can be fed back to advise research and development (R&D) in the design of future products.
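The non-linear flows just described can be sketched as a small data structure. The component names follow the framework above; the specific trigger links are the illustrative examples given in the text, not an exhaustive model.

```python
# A minimal sketch of the value chain as a data structure. The "triggers"
# map captures the point that goods and information don't flow in a straight
# line from one function to the next.

PRIMARY = ["inbound_logistics", "operations", "outbound_logistics",
           "marketing_and_sales", "support"]
SECONDARY = ["firm_infrastructure", "human_resource_management",
             "technology_and_rnd", "procurement"]

# Non-linear flows from the text: an order taken by marketing can trigger
# inbound logistics (get components), operations (build the product), or
# outbound logistics (ship it); service feedback informs R&D.
TRIGGERS = {
    "marketing_and_sales": ["inbound_logistics", "operations", "outbound_logistics"],
    "support": ["technology_and_rnd"],
}

def downstream(component: str) -> list:
    """Components that activity in `component` can directly set in motion."""
    return TRIGGERS.get(component, [])

print(downstream("marketing_and_sales"))
```

Walking the `TRIGGERS` map from marketing shows one order touching three other functions at once, which is exactly why the framework shouldn't be read as a simple left-to-right pipeline.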

Figure 2.2 The Value Chain

When a firm has an imitation-resistant value chain—one that’s tough for rivals to copy while gaining similar benefits—then a firm may have a critical competitive asset. From a strategic perspective, managers can use the value chain framework to consider a firm’s differences and distinctiveness compared to rivals. If a firm’s value chain can’t be copied by competitors without engaging in painful trade-offs, or if the firm’s value chain helps to create and strengthen other strategic assets over time, it can be a key source for competitive advantage. Many of the cases covered in this book, including FreshDirect, Amazon, Zara, Netflix, and eBay, illustrate this point.

An analysis of a firm’s value chain can also reveal operational weaknesses, and technology is often of great benefit in improving the speed and quality of execution. Firms can often buy software to improve things, and tools such as supply chain management (SCM; linking inbound and outbound logistics with operations), customer relationship management (CRM; supporting sales, marketing, and in some cases R&D), and enterprise resource planning software (ERP; software implemented in modules to automate the entire value chain) can have a big impact on more efficiently integrating the activities within the firm, as well as with its suppliers and customers. But remember, these software tools can be purchased by competitors, too. While valuable, such software may not yield lasting competitive advantage if rivals can easily match it.

There’s potential danger here. If a firm adopts software that changes a unique process into a generic one, it may have co-opted a key source of competitive advantage, particularly if other firms can buy the same stuff. This isn’t a problem with something like accounting software. Accounting processes are standardized and accounting isn’t a source of competitive advantage, so most firms buy rather than build their own accounting software. But using packaged, third-party SCM, CRM, and ERP software typically requires adopting a very specific way of doing things, using software and methods that can be purchased and adopted by others. During its period of PC-industry dominance, Dell stopped deployment of the logistics and manufacturing modules of a packaged ERP implementation when it realized that the software would require the firm to abandon its unique and highly successful operating model, eroding supply chain advantages to the point where the firm would be doing the same thing, with the same software, as its competitors. By contrast, Apple had no problem adopting third-party ERP software because the firm competes on product uniqueness rather than operational differences.

Dell’s Struggles: Nothing Lasts Forever

Michael Dell enjoyed an extended run that took him from assembling PCs in his dorm room as an undergraduate at the University of Texas at Austin to heading the largest PC firm on the planet. For years Dell’s superefficient, vertically integrated manufacturing and direct-to-consumer model combined to help the firm earn seven times more profit on its own systems when compared with comparably configured rival PCs.B. Breen, “Living in Dell Time,” Fast Company, December 19, 2007, http://www.fastcompany.com/magazine/88/dell.html. And since Dell PCs were usually cheaper, too, the firm could often start a price war and still have better overall margins than rivals.

It was a brilliant model that for years proved resistant to imitation. While Dell sold direct to consumers, rivals had to share a cut of sales with the less efficient retail chains responsible for the majority of their sales. Dell’s rivals struggled in moving toward direct sales because any retailer sensing its suppliers were competing with it through a direct-sales effort could easily choose another supplier that sold a nearly identical product. It wasn’t that HP, IBM, Sony, and so many others didn’t see the advantage of Dell’s model; rather, these firms were wedded to models that made it difficult for them to imitate their rival.

But then Dell’s killer model, one that had become a staple case study in business schools, began to lose steam. Nearly two decades of observing Dell had allowed the contract manufacturers serving Dell’s rivals to improve manufacturing efficiency.T. Friscia, K. O’Marah, D. Hofman, and J. Souza, “The AMR Research Supply Chain Top 25 for 2009,” AMR Research, May 28, 2009, http://www.amrresearch.com/Content/View.aspx?compURI=tcm:7-43469. Component suppliers located near contract manufacturers, and assembly times fell dramatically. And as the cost of computing fell, the price advantage Dell enjoyed over rivals also shrank in absolute terms. That meant savings from buying a Dell weren’t as big as they once were. On top of that, the direct-to-consumer model also suffered when sales of notebook PCs outpaced the more commoditized desktop market. Notebooks are more differentiated than desktops, and customers often want to compare products in person (lift them, type on keyboards, and view screens) before making a purchase decision.

In time, these shifts created an opportunity for rivals to knock Dell from its ranking as the world’s number one PC manufacturer. Dell has even abandoned its direct-only business model and now sells products through third-party brick-and-mortar retailers. Dell’s struggles as computers, customers, and the product mix changed all underscore the importance of continually assessing a firm’s strategic position amid changing market conditions. There is no guarantee that today’s winning strategy will dominate forever.

Brand

A firm’s brand is the symbolic embodiment of all the information connected with a product or service, and a strong brand can also be an exceptionally powerful resource for competitive advantage. Consumers use brands to lower search costs, so having a strong brand is particularly vital for firms hoping to be the first online stop for consumers. Want to buy a book online? Auction a product? Search for information? Which firm would you visit first? Almost certainly Amazon, eBay, or Google. But how do you build a strong brand? It’s not just about advertising and promotion. First and foremost, customer experience counts. A strong brand proxies quality and inspires trust, so if consumers can’t rely on a firm to deliver as promised, they’ll go elsewhere. On the upside, tech can play a critical role in rapidly and cost-effectively strengthening a brand. If a firm performs well, consumers can often be enlisted to promote a product or service (so-called viral marketing: leveraging consumers to promote a product or service). Consider that while scores of dot-coms burned through money on Super Bowl ads and other costly promotional efforts, Google, Hotmail, Skype, eBay, MySpace, Facebook, Twitter, YouTube, and so many other dominant online properties built multimillion-member followings before committing any significant spending to advertising.

Figure 2.3

The “E-mail” and “Share” links at the New York Times Web site enlist customers to spread the word about products and services, user to user, like a virus.

Early customer accolades for a novel service often mean that positive press (a kind of free advertising) will also likely follow.

But show up late and you may end up paying much more to counter an incumbent’s place in the consumer psyche. In recent years, Amazon has spent no money on television advertising, while rivals Buy.com and Overstock.com spent millions. Google, another strong brand, has become a verb, and the cost to challenge it is astonishingly high. Yahoo! and Microsoft’s Bing each spent $100 million on Google-challenging branding campaigns, but the early results of these efforts seemed to do little to grow share at Google’s expense.J. Edwards, “JWT’s $100 Million Campaign for Microsoft’s Bing Is Failing,” BNET, July 16, 2009. Branding is difficult, but if done well, even complex tech products can establish themselves as killer brands. Consider that Intel has taken an ingredient product that most people don’t understand, the microprocessor, and built a quality-conveying name recognized by computer users worldwide.

Scale

Many firms gain advantages as they grow in size; advantages related to a firm’s size are referred to as scale advantages. Businesses benefit from economies of scale when the cost of an investment can be spread across increasing units of production or across a growing customer base. Firms that benefit from scale economies as they grow are sometimes referred to as being scalable. Many Internet and tech-leveraging businesses are highly scalable since, as they grow to serve more customers with their existing infrastructure investment, profit margins improve dramatically.

Consider that in just one year, the Internet firm BlueNile sold as many diamond rings with just 115 employees and one Web site as a traditional jewelry retailer would sell through 116 stores.T. Mullaney, “Jewelry Heist,” BusinessWeek, May 10, 2004. And with lower operating costs, BlueNile can sell at prices that brick-and-mortar stores can’t match, thereby attracting more customers and further fueling its scale advantages. Profit margins improve as the cost to run the firm’s single Web site and operate its one warehouse is spread across increasing jewelry sales.
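The scale economics described above can be sketched with a few illustrative numbers. This is a toy model, not BlueNile’s actual financials: the fixed cost of a single Web site and warehouse is spread over more and more orders, so margins improve with volume (all figures hypothetical).

```python
# Toy model of economies of scale: one fixed infrastructure cost is
# spread across a growing number of orders, so profit margin improves.
# All numbers below are hypothetical, chosen only for illustration.

FIXED_COST = 1_000_000   # annual cost of the Web site and one warehouse
PRICE = 250              # revenue per order
VARIABLE_COST = 150      # per-order cost of goods and shipping

def profit_margin(orders: int) -> float:
    """Profit as a fraction of revenue at a given sales volume."""
    revenue = orders * PRICE
    total_cost = FIXED_COST + orders * VARIABLE_COST
    return (revenue - total_cost) / revenue

for orders in (20_000, 50_000, 200_000):
    print(f"{orders:>7} orders -> margin {profit_margin(orders):.1%}")
```

At 20,000 hypothetical orders the margin is 20 percent; at 200,000 it approaches the 40 percent contribution margin, because the same fixed cost is carried by ten times the revenue. A brick-and-mortar rival adding stores adds fixed cost with each location, so its margins don’t scale the same way.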

A growing firm may also gain bargaining power with its suppliers or buyers. As Dell grew larger, the firm forced suppliers wanting in on Dell’s growing business to make concessions such as locating close to Dell plants. Similarly, for years eBay could raise auction fees because of the firm’s market dominance. Auction sellers who left eBay lost pricing power since fewer bidders on smaller, rival services meant lower prices.

The scale of technology investment required to run a business can also act as a barrier to entry, discouraging new, smaller competitors. Intel’s size allows the firm to pioneer cutting-edge manufacturing techniques and invest $7 billion on next-generation plants.J. Flatley, “Intel Invests $7 Billion in Stateside 32nm Manufacturing,” Engadget, February 10, 2009. And although Google was started by two Stanford students with borrowed computer equipment running in a dorm room, the firm today runs on an estimated 1.4 million servers.R. Katz, “Tech Titans Building Boom,” IEEE Spectrum 46, no. 2 (February 1, 2009): 40–43. The investments being made by Intel and Google would be cost-prohibitive for almost any newcomer to justify.

Switching Costs and Data

Switching costs exist when consumers incur an expense to move from one product or service to another. The expense can involve actual money spent (e.g., buying a new product) as well as investments in time, lost data, and so forth. Tech firms often benefit from strong switching costs that cement customers to their offerings. Users invest time learning a product, entering data into a system, creating files, and buying supporting programs or manuals, and these investments may make them reluctant to switch to a rival’s effort.

Conversely, firms that seem dominant but that don’t have high switching costs can be rapidly trumped by strong rivals. Netscape once controlled more than 80 percent of the market share in Web browsers, but when Microsoft began bundling Internet Explorer with the Windows operating system and (through an alliance) with America Online (AOL), Netscape’s market share plummeted. Customers migrated with a mouse click as part of an upgrade or installation. Learning a new browser was a breeze, and with the Web’s open standards, most customers noticed no difference when visiting their favorite Web sites with their new browser.

Sources of Switching Costs

  • Learning costs: Switching technologies may require an investment in learning a new interface and commands.
  • Information and data: Users may have to reenter data, convert files or databases, or may even lose earlier contributions on incompatible systems.
  • Financial commitment: Can include investments in new equipment, the cost to acquire any new software, consulting, or expertise, and the devaluation of any investment in prior technologies no longer used.
  • Contractual commitments: Breaking contracts can lead to compensatory damages and harm an organization’s reputation as a reliable partner.
  • Search costs: Finding and evaluating a new alternative costs time and money.
  • Loyalty programs: Switching can cause customers to lose out on program benefits. Think frequent purchaser programs that offer “miles” or “points” (all enabled and driven by software).Adapted from C. Shapiro and H. Varian, “Locked In, Not Locked Out,” Industry Standard, November 2–9, 1998.

It is critical for challengers to realize that in order to win customers away from a rival, a new entrant must not only demonstrate to consumers that its offering provides more value than the incumbent’s; it must ensure that its added value exceeds the incumbent’s value plus any perceived customer switching costs (see Figure 2.4). If switching is going to cost you money and inconvenience, there’s no way you’re going to leave unless the benefits are overwhelming.
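The challenger’s hurdle is a simple inequality, and it’s worth making explicit. A minimal sketch, with purely hypothetical value scores:

```python
def customer_switches(incumbent_value: float,
                      challenger_value: float,
                      switching_cost: float) -> bool:
    """A customer defects only when the challenger's value exceeds the
    incumbent's value PLUS the perceived cost of switching."""
    return challenger_value > incumbent_value + switching_cost

# Hypothetical scores: the challenger is "better," but not by enough
# to overcome the hassle of moving.
print(customer_switches(incumbent_value=100, challenger_value=110,
                        switching_cost=25))   # False: 110 is not > 125
print(customer_switches(incumbent_value=100, challenger_value=140,
                        switching_cost=25))   # True: 140 > 125
```

The design point is that a 10 percent improvement over the incumbent may not be enough; the challenger competes against the incumbent’s value and the customer’s sunk investments combined.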

Data can be a particularly strong switching cost for firms leveraging technology. A customer who enters her profile into Facebook, movie preferences into Netflix, or grocery list into FreshDirect may be unwilling to try rivals—even if these firms are cheaper—if moving to the new firm means she’ll lose information feeds, recommendations, and time savings provided by the firms that already know her well. Fueled by scale over time, firms that have more customers and have been in business longer can gather more data, and many can use this data to improve their value chain by offering more accurate demand forecasting or product recommendations.

Figure 2.4

In order to win customers from an established incumbent, a late-entering rival must offer a product or service whose value exceeds the incumbent’s value plus any customer switching costs.

Competing on Tech Alone Is Tough: Gmail versus Rivals

Switching e-mail services can be a real pain. You’ve got to convince your contacts to update their address books, hope that any message-forwarding from your old service to your new one remains active and works properly, and regularly check the old service to be sure nothing is caught in junk folder purgatory. Not fun. So when Google entered the market for free e-mail, challenging established rivals Yahoo! and Microsoft Hotmail, it knew it needed to offer an overwhelming advantage to lure away customers who had used these other services for years. Google’s offering? A mailbox with vastly more storage than its competitors. With 250 to 500 times the capacity of rivals, Gmail users were liberated from the infamous “mailbox full” error, and could send photos, songs, slideshows, and other rich media files as attachments.

A neat innovation, but one based on technology that incumbents could easily copy. Once Yahoo! and Microsoft saw that customers valued the increased capacity, they quickly increased their own mailbox size, holding on to customers who might otherwise have fled to Google. Four years after Gmail was introduced, the service still had less than half the users of each of its two biggest rivals.

Figure 2.5 E-mail Market Share in Millions of UsersJ. Graham, “E-mail Carriers Deliver Gifts of Nifty Features to Lure, Keep Users,” USA Today, April 16, 2008.

Differentiation

Commodities are products or services that are nearly identically offered from multiple vendors. Consumers buying commodities are highly price-focused since they have so many similar choices. In order to break the commodity trap, many firms leverage technology to differentiate their goods and services. Dell gained attention from customers not only because of its low prices, but also because it was one of the first PC vendors to build computers based on customer choice. Want a bigger hard drive? Don’t need the fast graphics card? Dell will oblige.

Technology has allowed Lands’ End to take this concept to clothing. Now 40 percent of the firm’s chino and jeans orders are for custom products, and consumers pay a price markup of one-third or more for the tailored duds.J. Schlosser, “Cashing In on the New World of Me,” Fortune, December 1, 2004. This kind of tech-led differentiation creates and reinforces other assets. While rivals also offer custom products, Lands’ End has established a switching cost with its customers, since moving to rivals would require twenty minutes to reenter measurements and preferences versus two minutes to reorder from LandsEnd.com. The firm’s reorder rates are 40 to 60 percent on custom clothes, and Lands’ End also gains valuable information on more accurate sizing—critical because current clothes sizes provided across the U.S. apparel industry comfortably fit only about one-third of the population.

Data is not only a switching cost, it also plays a critical role in differentiation. Each time a visitor returns to Amazon, the firm uses browsing records, purchase patterns, and product ratings to present a custom home page featuring products that the firm hopes the visitor will like. Customers value the experience they receive at Amazon so much that the firm received the highest score ever recorded on the University of Michigan’s American Customer Satisfaction Index (ACSI). The score was not just the highest performance of any online firm, it was the highest ranking that any service firm in any industry had ever received.

Capital One has also used data to differentiate its offerings. The firm mines data and runs experiments to create risk models on potential customers. Because of this, the credit card firm aggressively pursued a set of customers that other lenders considered too risky based on simplistic credit scoring. The firm’s technology determined that these underserved customers, improperly classified by conventional techniques, were actually good bets. Finding profitable new markets that others ignored allowed Capital One to grow its EPS (earnings per share) 20 percent a year for seven years, a feat matched by less than 1 percent of public firms.T. Davenport and J. Harris, Competing on Analytics: The New Science of Winning (Boston: Harvard Business School Press, 2007).

Network Effects

AOL’s instant messaging client, AIM, has the majority of instant messaging users in the United States. Microsoft Windows has a 90 percent market share in operating systems. EBay has an 80 percent share of online auctions. Why are these firms so dominant? Largely due to the concept of network effects (see Chapter 6 “Understanding Network Effects”). Network effects (sometimes called network externalities or Metcalfe’s Law) exist when a product or service becomes more valuable as more people use it. If you’re the first person with an AIM account, then AIM isn’t very valuable. But with each additional user, there’s one more person to chat with. A firm with a big network of users might also see value added by third parties. Sony’s PlayStation 2 dominated the prior generation of video game consoles in large part because it had more games than its rivals, and most of these games were provided by firms other than Sony. Third-party add-on products, books, magazines, and even skilled labor are all attracted to the networks with the largest number of users, making dominant products more valuable still.
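Metcalfe’s Law is often summarized as the value of a network growing roughly with the square of its user base. A minimal sketch, counting the potential pairwise connections among n users (this is an illustration of the intuition, not a literal valuation formula):

```python
def potential_connections(users: int) -> int:
    """Pairwise links possible among n users: n * (n - 1) / 2."""
    return users * (users - 1) // 2

# The connection count grows far faster than the user count itself,
# which is why each new user makes the service better for everyone.
for n in (10, 100, 1000):
    print(f"{n:>5} users -> {potential_connections(n):>7} possible connections")
```

Going from 10 users to 1,000 users multiplies the user base by 100 but multiplies the possible connections by more than 10,000: a 45-to-499,500 jump. This quadratic growth is what lets an early leader pull away from rivals.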

Switching costs also play a role in determining the strength of network effects. Tech user investments often go far beyond the cost of acquiring a technology: users spend time learning a product; they buy add-ons, create files, and enter preferences. Because no one wants to be stranded with an abandoned product and lose this additional investment, users may choose a technically inferior product simply because it has a larger user base and is perceived as having a greater chance of being offered in the future. This virtuous adoption cycle, in which network effects make a product or service more attractive (increasing benefits, reducing costs) as the adopter base grows, doesn’t apply to all tech products. It can be a particularly strong asset for firms that can control and leverage a leading standard (think Apple’s iPhone and iPad with their closed systems versus Netscape, which was almost entirely based on open standards). And where network effects are significant, they can create winners so dominant that firms with these advantages enjoy a near-monopoly hold on a market.

Distribution Channels

If no one sees your product, then it won’t even get considered by consumers. So distribution channels—the path through which products or services get to customers—can be critical to a firm’s success. Again, technology opens up opportunities for new ways to reach customers.

Users can be recruited to create new distribution channels for your products and services (usually for a cut of the take). You may have visited Web sites that promote books sold on Amazon.com. Web site operators do this because Amazon gives them a percentage of all purchases that come in through these links. Amazon now has over 1 million of these “associates” (the term the firm uses for its affiliates: third parties that promote a product or service, typically in exchange for a cut of any sales), yet it only pays them if a promotion gains a sale. Google similarly receives some 30 percent of its ad revenue not from search ads, but from advertisements distributed within third-party sites ranging from lowly blogs to the New York Times.Google Fourth Quarter 2008 Earnings Summary, http://investor.google.com/earnings.html.

In recent years, Google and Microsoft have engaged in bidding wars, trying to lock up distribution deals that would bundle software tools, advertising, or search capabilities with key partner offerings. Deals with partners such as Dell, MySpace, and Verizon Wireless have been valued at up to $1 billion each.N. Wingfield, “Microsoft Wins Key Search Deals,” Wall Street Journal, January 8, 2009.

The ability to distribute products by bundling them with existing offerings is a key Microsoft advantage. But beware—sometimes these distribution channels can provide firms with such an edge that international regulators have stepped in to try to provide a more level playing field. Microsoft was forced by European regulators to unbundle the Windows Media Player, for fear that it provided the firm with too great an advantage when competing with the likes of RealPlayer and Apple’s QuickTime (see Chapter 6 “Understanding Network Effects”).

What about Patents?

Intellectual property protection can be granted in the form of a patent for those innovations deemed to be useful, novel, and nonobvious. In the United States, technology and (more controversially) even business models can be patented, typically for periods of twenty years from the date of patent application. Firms that receive patents have some degree of protection from copycats that try to identically mimic their products and methods.

The patent system is often considered to be unfairly stacked against start-ups. U.S. litigation costs in a single patent case average about $5 million,B. Feld, “Why the Decks Are Stacked against Software Startups in Patent Litigation,” Technology Review, April 12, 2009. and a few months of patent litigation can be enough to sink an early stage firm. Large firms can also be victims. So-called patent trolls hold intellectual property not with the goal of bringing novel innovations to market but instead in hopes that they can sue or extort large settlements from others. BlackBerry maker Research in Motion’s $612 million settlement with the little-known holding company NTP is often highlighted as an example of the pain trolls can inflict.T. Wu, “Weapons of Business Destruction,” Slate, February 6, 2006; R. Kelley, “BlackBerry Maker, NTP Ink $612 Million Settlement,” CNN Money, March 3, 2006.

Even if an innovation is patentable, that doesn’t mean that a firm has bulletproof protection. Some patents have been nullified by the courts upon later review (usually because of a successful challenge to the uniqueness of the innovation). Software patents are also widely granted, but notoriously difficult to defend. In many cases, coders at competing firms can write substitute algorithms that aren’t the same, but accomplish similar tasks. For example, although Google’s PageRank search algorithms are fast and efficient, Microsoft, Yahoo!, and others now offer their own noninfringing search that presents results with an accuracy that many would consider on par with PageRank. Patents do protect tech-enabled operations innovations at firms like Netflix and Harrah’s (casino hotels), and design innovations like the iPod click wheel. But in a study of the factors that were critical in enabling firms to profit from their innovations, Carnegie Mellon professor Wes Cohen found that patents were only the fifth most important factor. Secrecy, lead time, sales skills, and manufacturing all ranked higher.T. Mullaney and S. Ante, “InfoWars,” BusinessWeek, June 5, 2000.

Key Takeaways

  • Technology can play a key role in creating and reinforcing assets for sustainable advantage by enabling an imitation-resistant value chain; strengthening a firm’s brand; collecting useful data and establishing switching costs; creating a network effect; creating or enhancing a firm’s scale advantage; enabling product or service differentiation; and offering an opportunity to leverage unique distribution channels.
  • The value chain can be used to map a firm’s efficiency and to benchmark it against rivals, revealing opportunities to use technology to improve processes and procedures. When a firm is resistant to imitation, its value chain may yield sustainable competitive advantage.
  • Firms may consider adopting packaged software or outsourcing value chain tasks that are not critical to a firm’s competitive advantage. A firm should be wary of adopting software packages or outsourcing portions of its value chain that are proprietary and a source of competitive advantage.
  • Patents are not necessarily a sure-fire path to exploiting an innovation. Many technologies and business methods can be copied, so managers should think about creating assets like the ones defined above if they wish to create truly sustainable advantage.
  • Nothing lasts forever, and shifting technologies and market conditions can render once-strong assets obsolete.

Questions and Exercises

  1. Define and diagram the value chain.
  2. Discuss the elements of FreshDirect’s value chain and the technologies that FreshDirect uses to give the firm a competitive advantage. Why is FreshDirect resistant to imitation from incumbent firms? What advantages does FreshDirect have that insulate the firm from serious competition from start-ups copying its model?
  3. Which firm should adopt third-party software to automate its supply chain—Dell or Apple? Why? Identify another firm that might be at risk if adopting generic enterprise software. Why do you think this is risky and what would they do as an alternative?
  4. Identify two firms in the same industry that have different value chains. Why do you think these firms have different value chains? What role do you think technology plays in the way that each firm competes? Do these differences enable strategic positioning? Why or why not?
  5. How can information technology help a firm build a brand inexpensively?
  6. Describe BlueNile’s advantages over a traditional jewelry chain. Can conventional jewelers successfully copy BlueNile? Why or why not?
  7. What are switching costs? What role does technology play in strengthening a firm’s switching costs?
  8. In most markets worldwide, Google dominates search. Why hasn’t Google shown similar dominance in e-mail, as well?
  9. Should Lands’ End fear losing customers to rivals that copy its custom clothing initiative? Why or why not?
  10. How can technology be a distribution channel? Name a firm that has tried to leverage its technology as a distribution channel.
  11. Do you think it is possible to use information technology to achieve competitive advantage? If so, how? If not, why not?
  12. What are network effects? Name a product or service that has been able to leverage network effects to its advantage.
  13. For well over a decade, Dell earned above average industry profits. But lately the firm has begun to struggle. What changed?
  14. What are the potential sources of switching costs if you decide to switch cell phone service providers? Cell phones? Operating systems? PayTV service?
  15. Why is an innovation based on technology alone often subjected to intense competition?
  16. Can you think of firms that have successfully created competitive advantage even though other firms provide essentially the same thing? What factors enable this success?
  17. What role did network effects play in your choice of an instant messaging client? Of an operating system? Of a social network? Of a word processor? Why do so many firms choose to standardize on Microsoft Windows?
  18. What can a firm do to prepare for the inevitable expiration of a patent (patents typically expire after twenty years)? Think in terms of the utilization of other assets and the development of advantages through employment of technology.

2.3 Barriers to Entry, Technology, and Timing

Learning Objectives

After studying this section you should be able to do the following:

  1. Understand the relationship between timing, technology, and the creation of resources for competitive advantage.
  2. Argue effectively when faced with broad generalizations about the importance (or lack of importance) of technology and timing to competitive advantage.
  3. Recognize the difference between low barriers to entry and the prospects for the sustainability of a new entrant’s efforts.

Some have correctly argued that the barriers to entry for many tech-centric businesses are low. This is particularly true on the Internet, where rivals can put up a competing Web site seemingly overnight. But it’s absolutely critical to understand that market entry is not the same as building a sustainable business; just showing up doesn’t guarantee survival.

Platitudes like “follow, don’t lead”N. Carr, “IT Doesn’t Matter,” Harvard Business Review 81, no. 5 (May 2003): 41–49. can put firms dangerously at risk, and statements about low entry barriers ignore the difficulty many firms will have in matching the competitive advantages of successful tech pioneers. Should Blockbuster have waited while Netflix pioneered? In a year where Netflix profits were up seven-fold, Blockbuster lost more than $1 billion.“Movies to Go,” Economist, July 9, 2005. Should Sotheby’s have dismissed seemingly inferior eBay? Sotheby’s lost over $6 million in 2009; eBay earned nearly $2.4 billion in profits. Barnes & Noble waited seventeen months to respond to Amazon.com. Amazon now has twelve times the profits of its offline rival and its market cap is over forty-eight times greater.FY 2008 net income and June 2009 market cap figures for both firms: http://www.barnesandnobleinc.com/newsroom/financial_only.html and http://phx.corporate-ir.net/phoenix.zhtml?c=97664&p=irol-reportsOther. Today’s Internet giants are winners because in most cases, they were the first to move with a profitable model and they were able to quickly establish resources for competitive advantage. With few exceptions, established offline firms have failed to catch up to today’s Internet leaders.

Timing and technology alone will not yield sustainable competitive advantage. Yet both of these can be enablers for competitive advantage. Put simply, it’s not the time lead or the technology; it’s what a firm does with its time lead and technology. True strategic positioning means that a firm has created differences that cannot be easily matched by rivals. Moving first pays off when the time lead is used to create critical resources that are valuable, rare, tough to imitate, and lack substitutes. Anything less risks the arms race of operational effectiveness. Build resources like brand, scale, network effects, switching costs, or other key assets and your firm may have a shot. But guess wrong about the market or screw up execution and failure or direct competition awaits. It is true that most tech can be copied—there’s little magic in eBay’s servers, Intel’s processors, Oracle’s databases, or Microsoft’s operating systems that rivals have not at some point improved upon. But each of these tech-enabled firms leveraged its lead to create network effects, switching costs, and data assets, and to build solid, well-respected brands.

But Google Arrived Late! Why Incumbents Must Constantly Consider Rivals

Yahoo! was able to maintain its lead in e-mail because the firm quickly matched and nullified Gmail’s most significant tech-based innovations before Google could inflict real damage. Perhaps Yahoo! had learned from prior errors. The firm’s earlier failure to respond to Google’s emergence as a credible threat in search advertising gave Sergey Brin and Larry Page the time they needed to build the planet’s most profitable Internet firm.

Yahoo! (and many Wall Street analysts) saw search as a commodity—a service the firm had subcontracted out to other firms including Alta Vista and Inktomi. Yahoo! saw no conflict in taking an early investment stake in Google or in using the firm for its search results. But Yahoo! failed to pay attention to Google’s advance. Because Google’s innovations in technology and interface remained unmatched over time, the firm was able to build its brand, its scale, and an advertising network (distribution channel) that grew from network effects whereby content providers and advertisers attract one another. These are all competitive resources that rivals have never been able to match.

Google’s ability to succeed after being late to the search party isn’t a sign of the power of the late mover; it’s a story about the failure of incumbents to monitor their competitive landscape, recognize new rivals, and react to challenging offerings. That doesn’t mean that incumbents need to respond to every potential threat. Indeed, figuring out which threats are worthy of response is the real skill here. Video rental chain Hollywood Video wasted over $300 million in an Internet streaming business years before high-speed broadband was available to make the effort work.N. Wingfield, “Netflix vs. the Naysayers,” Wall Street Journal, March 21, 2007. But while Blockbuster avoided the balance sheet–cratering gaffes of Hollywood Video, the firm also failed to respond to Netflix—a new threat that had timed market entry perfectly (see Chapter 4 “Netflix: The Making of an E-commerce Giant and the Uncertain Future of Atoms to Bits”).

Firms that quickly get to market with the “right” model can dominate, but it’s equally critical for leading firms to pay close attention to competition and innovate in ways that customers value. Take your eye off the ball and rivals may use time and technology to create strategic resources. Just look at Friendster—a firm that was once known as the largest social network in the United States but has fallen so far behind rivals that it has become virtually irrelevant today.

Key Takeaways

  • It doesn’t matter if it’s easy for new firms to enter a market if these newcomers can’t create and leverage the assets needed to challenge incumbents.
  • Beware of those who say, “IT doesn’t matter” or refer to the “myth” of the first mover. This thinking is overly simplistic. It’s not a time or technology lead that provides sustainable competitive advantage; it’s what a firm does with its time and technology lead. If a firm can use a time and technology lead to create valuable assets that others cannot match, it may be able to sustain its advantage. But if the work done in this time and technology lead can be easily matched, then no advantage can be achieved, and a firm may be threatened by new entrants.

Questions and Exercises

  1. Does technology lower barriers to entry or raise them? Do low entry barriers necessarily mean that a firm is threatened?
  2. Is there such a thing as the first-mover advantage? Why or why not?
  3. Why did Google beat Yahoo! in search?
  4. A former editor of the Harvard Business Review, Nick Carr, once published an article in that same magazine with the title “IT Doesn’t Matter.” In the article he also offered firms the advice: “Follow, Don’t Lead.” What would you tell Carr to help him improve the way he thinks about the relationship between time, technology, and competitive advantage?
  5. Name an early mover that has successfully defended its position. Name another that had been superseded by the competition. What factors contributed to its success or failure?
  6. You have just written a word processing package far superior in features to Microsoft Word. You now wish to form a company to market it. List and discuss the barriers your start-up faces.

2.4 Key Framework: The Five Forces of Industry Competitive Advantage

Learning Objectives

After studying this section you should be able to do the following:

  1. Diagram the five forces of competitive advantage.
  2. Apply the framework to an industry, assessing the competitive landscape and the role of technology in influencing the relative power of buyers, suppliers, competitors, and alternatives.

Professor and strategy consultant Gary Hamel once wrote in a Fortune cover story that “the dirty little secret of the strategy industry is that it doesn’t have any theory of strategy creation.”G. Hamel, “Killer Strategies that Make Shareholders Rich,” Fortune, June 23, 1997. While there is no silver bullet for strategy creation, strategic frameworks help managers describe the competitive environment a firm is facing. Frameworks can also be used as brainstorming tools to generate new ideas for responding to industry competition. If you have a model for thinking about competition, it’s easier to understand what’s happening and to think creatively about possible solutions.

One of the most popular frameworks for examining a firm’s competitive environment is Porter’s five forces, also known as Industry and Competitive Analysis. As Porter puts it, “analyzing [these] forces illuminates an industry’s fundamental attractiveness, exposes the underlying drivers of average industry profitability, and provides insight into how profitability will evolve in the future.” The five forces this framework considers are (1) the intensity of rivalry among existing competitors, (2) the threat of new entrants, (3) the threat of substitute goods or services, (4) the bargaining power of buyers, and (5) the bargaining power of suppliers (see Figure 2.6 “The Five Forces of Industry and Competitive Analysis”).

Figure 2.6 The Five Forces of Industry and Competitive Analysis

New technologies can create jarring shocks in an industry. Consider how the rise of the Internet has impacted the five forces for music retailers. Traditional music retailers like Tower and Virgin found that customers were seeking music online. These firms scrambled to invest in the new channel out of perceived necessity. The intensity of rivalry increased because these firms no longer compete based solely on the geography of where their brick-and-mortar stores are located; they now compete online as well. Investments online are expensive and uncertain, prompting some firms to partner with new entrants such as Amazon. Free from brick-and-mortar stores, Amazon, the dominant new entrant, has a highly scalable cost structure. And in many ways the online buying experience is superior to what customers saw in stores. Customers can hear samples of almost all tracks, selection is seemingly limitless (the long tail phenomenon—see this concept illuminated in Chapter 4 “Netflix: The Making of an E-commerce Giant and the Uncertain Future of Atoms to Bits”), and customer data is leveraged by collaborative filtering software to make product recommendations and assist in music discovery. Tough competition, but it gets worse because CD sales aren’t the only way to consume music. The process of buying a plastic disc now faces substitutes as digital music files become available on commercial music sites. Who needs the physical atoms of a CD when you can buy just the bits, one song at a time? Or skip buying altogether and subscribe to a limitless library instead.
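The collaborative filtering software mentioned above can be sketched as a tiny item-based recommender: items a customer hasn’t yet rated are scored by their similarity to items the customer already likes. This is an illustrative sketch only; the catalog, customers, and ratings below are hypothetical, and real retailers use vastly larger data sets and more sophisticated algorithms.

```python
# A minimal item-based collaborative filtering sketch.
# All customers, tracks, and ratings here are hypothetical.
from math import sqrt

# Each customer's ratings for tracks in the catalog (1 = disliked, 5 = loved).
ratings = {
    "ann":  {"jazz_1": 5, "jazz_2": 4, "rock_1": 1},
    "bob":  {"jazz_1": 5},
    "carl": {"rock_1": 5, "rock_2": 4, "jazz_1": 1},
    "dina": {"rock_1": 4, "rock_2": 5},
}

def item_vector(item):
    """One item's ratings across all customers (0 where unrated)."""
    return [ratings[u].get(item, 0) for u in ratings]

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(customer, top_n=1):
    """Rank items the customer hasn't rated by similarity to items they have."""
    items = {i for r in ratings.values() for i in r}
    owned = set(ratings[customer])
    scores = {
        candidate: sum(
            cosine(item_vector(candidate), item_vector(liked))
            for liked in owned
        )
        for candidate in items - owned
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("bob"))  # → ['jazz_2']: jazz fans are steered toward more jazz
```

Because "jazz_2" tends to be rated highly by the same customers who rated "jazz_1" highly, it scores above the rock tracks for a jazz fan. This "customers who liked X also liked Y" pattern is the intuition behind the recommendation engines that help online retailers out-merchandise a physical store.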

From a sound quality perspective, the substitute good (digital tracks purchased online) is almost always inferior to its CD counterpart. To transfer songs quickly and fit more songs on a digital music player, tracks are encoded in smaller files than what you’d get on a CD, and these smaller files offer lower playback fidelity. But the additional tech-based market shock brought on by digital music players (particularly the iPod) has changed listening habits. The convenience of carrying thousands of songs trumps what most consider just a slight quality degradation. iTunes is now responsible for selling more music than any other firm, online or off. Most alarming to the industry is the other widely adopted substitute for CD purchases—theft. Illegal music “sharing” services abound, even after years of record industry crackdowns. And while exact figures on real losses from online piracy are in dispute, the music industry has seen album sales drop by 45 percent in less than a decade (K. Barnes, “Music Sales Boom, but Album Sales Fizzle for ’08,” USA Today, January 4, 2009). All this choice gives consumers (buyers) bargaining power. They demand cheaper prices and greater convenience. The bargaining power of suppliers—the music labels and artists—also increases. At the start of the Internet revolution, retailers could pressure labels to limit sales through competing channels. Now, with many of the major music retail chains in bankruptcy, labels have a freer hand to experiment, while bands large and small have new ways to reach fans, sometimes in ways that entirely bypass the traditional music labels.

While it can be useful to look at changes in one industry as a model for potential change in another, it’s important to realize that changes impacting one industry do not necessarily impact other industries in the same way. For example, it is often suggested that the Internet increases the bargaining power of buyers and lowers the bargaining power of suppliers. This is true for some industries, like auto sales and jewelry, where the products are commodities and the price transparency of the Internet (the degree to which complete information is available) counteracts a previous information asymmetry (a situation where one party has more or better information than its counterparty) in which customers often didn’t know enough about a product to bargain effectively. But it’s not true across the board.

In cases where network effects are strong or a seller’s goods are highly differentiated, the Internet can strengthen supplier bargaining power. The customer base of an antique dealer used to be limited by how many likely purchasers lived within driving distance of a store. Now with eBay, the dealer can take a rare good to a global audience and have a much larger customer base bid up the price. Switching costs also weaken buyer bargaining power. Wells Fargo has found that customers who use online bill pay (where switching costs are high) are 70 percent less likely to leave the bank than those who don’t, suggesting that these switching costs help cement customers to the company even when rivals offer more compelling rates or services.

Tech plays a significant role in shaping and reshaping these five forces, but it’s not the only significant force that can create an industry shock. Government deregulation or intervention, political shock, and social and demographic changes can all play a role in altering the competitive landscape. Because we live in an age of constant and relentless change, managers need to continually revisit strategic frameworks to consider any market-impacting shifts. Predicting the future is difficult, but ignoring change can be catastrophic.

Key Takeaways

  • Industry competition and attractiveness can be described by considering the following five forces: (1) the intensity of rivalry among existing competitors, (2) the potential for new entrants to challenge incumbents, (3) the threat posed by substitute products or services, (4) the power of buyers, and (5) the power of suppliers.
  • In markets where commodity products are sold, the Internet can increase buyer power by increasing price transparency.
  • The more differentiated and valuable an offering, the more the Internet shifts bargaining power to sellers. Highly differentiated sellers that can advertise their products to a wider customer base can demand higher prices.
  • A strategist must constantly refer to models that describe events impacting their industry, particularly as new technologies emerge.

Questions and Exercises

  1. What are Porter’s “five forces”?
  2. Use the five forces model to illustrate competition in the newspaper industry. Are some competitors better positioned to withstand this environment than others? Why or why not? What role do technology and resources for competitive advantage play in shaping industry competition?
  3. What is price transparency? What is information asymmetry? How does the Internet relate to these two concepts? How does the Internet shift bargaining power among the five forces?
  4. How has the rise of the Internet impacted each of the five forces for music retailers?
  5. In what ways is the online music buying experience superior to that of buying in stores?
  6. What is the substitute for music CDs? What is the comparative sound quality of the substitute? Why would a listener accept an inferior product?
  7. Based on Porter’s five forces, is this a good time to enter the retail music industry? Why or why not?
  8. What is the cost to the music industry of music theft? Cite your source.
  9. Discuss the concepts of price transparency and information asymmetry as they apply to the diamond industry as a result of the entry of BlueNile. Name another industry where the Internet has had a similar impact.
  10. Under what conditions can the Internet strengthen supplier bargaining power? Give an example.
  11. What is the effect of switching costs on buyer bargaining power? Give an example.
  12. How does the Internet impact bargaining power for providers of rare or highly differentiated goods? Why?

Chapter 1: Setting the Stage: Technology and the Modern Enterprise

1.1 Tech’s Tectonic Shift: Radically Changing Business Landscapes

Learning Objective

After studying this section you should be able to do the following:

  1. Appreciate how in the past decade, technology has helped bring about radical changes across industries and throughout societies.

This book is written for a world that has changed radically in the past decade.

At the start of the prior decade, Google barely existed and well-known strategists dismissed Internet advertising models (M. Porter, “Strategy and the Internet,” Harvard Business Review 79, no. 3 [March 2001]: 62–78). By decade’s end, Google brought in more advertising revenue than any firm, online or off, and had risen to become the most profitable media company on the planet. Today billions in advertising dollars are fleeing old media and pouring into digital efforts, a shift that is reshaping industries and redefining the skills needed to reach today’s consumers.

A decade ago the iPod also didn’t exist and Apple was widely considered a tech-industry has-been. By spring 2010 Apple had grown to be the most valuable tech firm in the United States, selling more music and generating more profits from mobile device sales than any firm in the world.

Moore’s Law and other factors that make technology faster and cheaper have thrust computing and telecommunications into the hands of billions in ways that are both empowering the poor and poisoning the planet.

Social media barely warranted a mention a decade ago, but today, Facebook’s user base is larger than any nation, save for China and India. Firms are harnessing social media for new product ideas and for millions in sales. But with promise comes peril. When mobile phones are cameras just a short hop from YouTube, Flickr, and Twitter, every ethical lapse can be captured, every customer service flaw graffiti-tagged on the permanent record that is the Internet. The service and ethics bar for today’s manager has never been higher.

Speaking of globalization, China started the prior decade largely as a nation unplugged and offline. But today China has more Internet users than any other country and has spectacularly launched several publicly traded Internet firms, including Baidu, Tencent, and Alibaba. By 2009, China Mobile was more valuable than any firm in the United States except for Exxon Mobil and Wal-Mart. Think the United States holds the number one ranking in home broadband access? Not even close—the United States is ranked fifteenth (S. Shankland, “Google to Test Ultrafast Broadband to the Home,” CNET, February 10, 2010).

The way we conceive of software and the software industry is also changing radically. IBM, HP, and Oracle are among the firms that collectively pay thousands of programmers to write code that is then given away for free. Today, open source software powers most of the Web sites you visit. And the rise of open source has rewritten the revenue models for the computing industry and lowered computing costs for start-ups to blue chips worldwide.

Cloud computing and software as a service is turning sophisticated, high-powered computing into a utility available to even the smallest businesses and nonprofits.

Data analytics and business intelligence are driving discovery and innovation, redefining modern marketing, and creating a shifting knife-edge of privacy concerns that can shred corporate reputations if mishandled.

And the pervasiveness of computing has created a set of security and espionage threats unimaginable to the prior generation.

As the last ten years have shown, tech creates both treasure and tumult. These disruptions aren’t going away and will almost certainly accelerate, impacting organizations, careers, and job functions throughout your lifetime. It’s time to place tech at the center of the managerial playbook.

Key Takeaways

  • In the prior decade, firms like Google and Facebook have created profound shifts in the way firms advertise and individuals and organizations communicate.
  • New technologies have fueled globalization, redefined our concepts of software and computing, crushed costs, fueled data-driven decision making, and raised privacy and security concerns.

Questions and Exercises

  1. Visit a finance Web site such as http://www.google.com/finance. Compare Google’s profits to those of other major media companies. How have Google’s profits changed over the past few years? Why have the profits changed? How do these compare with changes in the firm you chose?
  2. How is social media impacting firms, individuals, and society?
  3. How do recent changes in computing impact consumers? Are these changes good or bad? Explain. How do they impact businesses?
  4. What kinds of skills do today’s managers need that weren’t required a decade ago?
  5. Work with your instructor to decide ways in which your class can use social media. For example, you might create a Facebook group where you can share ideas with your classmates, join Twitter and create a hash tag for your class, or create a course wiki. (See Chapter 7 “Peer Production, Social Media, and Web 2.0” for more on these and other services.)

1.2 It’s Your Revolution

Learning Objective

After studying this section you should be able to do the following:

  1. Name firms across hardware, software, and Internet businesses that were founded by people in their twenties (or younger).

The intersection where technology and business meet is both terrifying and exhilarating. But if you’re under the age of thirty, realize that this is your space. While the fortunes of any individual or firm rise and fall over time, it’s abundantly clear that many of the world’s most successful technology firms—organizations that have had tremendous impact on consumers and businesses across industries—were created by young people. Consider just a few:

Bill Gates was an undergraduate when he left college to found Microsoft—a firm that would eventually become the world’s largest software firm and catapult Gates to the top of the Forbes list of world’s wealthiest people (enabling him to also become the most generous philanthropist of our time).

Figure 1.1

Young Bill Gates appears in a mug shot for a New Mexico traffic violation. Microsoft, now headquartered in Washington State, had its roots in New Mexico when Gates and partner Paul Allen moved there to be near early PC maker Altair.

Michael Dell was just a sophomore when he began building computers in his dorm room at the University of Texas. His firm would one day claim the top spot among PC manufacturers worldwide.

Mark Zuckerberg founded Facebook as a nineteen-year-old college sophomore.

Steve Jobs was just twenty-one when he founded Apple.

Tony Hsieh proved his entrepreneurial chops when, at twenty-four, he sold LinkExchange to Microsoft for over a quarter of a billion dollars (M. Chafkin, “The Zappos Way of Managing,” Inc., May 1, 2009). He’d later serve as CEO of Zappos, eventually selling that firm to Amazon for $900 million (S. Lacy, “Amazon Buys Zappos; The Price Is $928m., Not $847m.,” TechCrunch, July 22, 2009).

Sergey Brin and Larry Page were both twenty-something doctoral students at Stanford University when they founded Google. So were Jerry Yang and David Filo of Yahoo! All would become billionaires.

If you want to go a little older, Kevin Rose of Digg and Steve Chen and Chad Hurley of YouTube were all in their late twenties when they launched their firms. Jeff Bezos hadn’t yet reached thirty when he began working on what would eventually become Amazon.

Of course, those folks would seem downright ancient to Catherine Cook, who founded MyYearbook.com, a firm that at one point grew to become the third most popular social network in the United States. Cook started the firm when she was a sophomore—in high school.

But you don’t have to build a successful firm to have an impact as a tech revolutionary. Shawn Fanning’s Napster, widely criticized as a piracy playground, was written when he was just nineteen. Fanning’s code was the first significant salvo in the tech-fueled revolution that upended the entire music industry. Finland’s Linus Torvalds wrote the first version of the Linux operating system when he was just twenty-one. Today Linux has grown to be the most influential component of the open source arsenal, powering everything from cell phones to supercomputers.

BusinessWeek regularly runs a list of America’s Best Young Entrepreneurs—the top twenty-five aged twenty-five and under. Inc. magazine’s list of the Coolest Young Entrepreneurs is subtitled the “30 under 30” (D. Fenn, “30 Under 30: For Young Entrepreneurs, Safety in Numbers,” Inc., October 1, 2009). While not exclusively filled with the ranks of tech start-ups, both of these lists are nonetheless dominated by technology entrepreneurs. Whenever you see young people on the cover of a business magazine, it’s almost certainly because they’ve done something groundbreaking with technology. The ranks of the technology revolution’s generals and foot soldiers are filled with the young, some not even old enough to legally have a beer. For the old-timers reading this, all is not lost, but you’d best get cracking with technology, quick. Junior might be on the way to either eat your lunch or be your next boss.

Key Takeaways

  • Recognize that anyone reading this book has the potential to build an impactful business. Entrepreneurship has no minimum age requirement.
  • The ranks of technology revolutionaries are filled with young people, with several leading firms and innovations launched by entrepreneurs who started while roughly the same age as the average university student.

Questions and Exercises

  1. Look online for lists of young entrepreneurs. How many of these firms are tech firms or heavily rely on technology? Are there any sectors more heavily represented than tech?
  2. Have you ever thought of starting your own tech-enabled business? Brainstorm with some friends. What kinds of ideas do you think might make a good business?
  3. How have the costs of entrepreneurship changed over the past decade? What forces are behind these changes? What does this mean for the future of entrepreneurship?
  4. Many universities and regions have competitions for entrepreneurs (e.g., business plan competitions, elevator pitch competitions). Does your school have such a program? What are the criteria for participation? If your school doesn’t have one, consider forming such a program.
  5. Research business accelerator programs such as Y-Combinator, TechStars, and DreamIt. Do you have a program like this in your area? What do entrepreneurs get from participating in these programs? What do they give up? Do you think these programs are worth it? Why or why not? Have you ever used a product or service from a firm that has participated in one of these programs?
  6. Explore online for lists of resources for entrepreneurship. Share links to these resources using social media created for class.
  7. Have any alumni from your institution founded technology firms or risen to positions of prominence in tech-focused careers? If so, work with your professor to invite them to come speak to your class or to student groups on campus. Your career services, development (alumni giving), alumni association, and LinkedIn searches may be able to help uncover potential speakers.

1.3 Geek Up—Tech Is Everywhere and You’ll Need It to Thrive

Learning Objectives

After studying this section you should be able to do the following:

  1. Appreciate the degree to which technology has permeated every management discipline.
  2. See that tech careers are varied, richly rewarding, and poised for continued growth.

Shortly after the start of the prior decade, there was a lot of concern that tech jobs would be outsourced, leading many to conclude that tech skills carried less value and that workers with tech backgrounds had little to offer. Turns out this thinking was stunningly wrong. Tech jobs boomed, and as technology pervades all other management disciplines, tech skills are becoming more important, not less. Today, tech knowledge can be a key differentiator for the job seeker. It’s the worker without tech skills who needs to be concerned.

As we’ll present in depth in a future chapter, there’s a principle called Moore’s Law that’s behind fast, cheap computing. And as computing gets both faster and cheaper, it gets “baked into” all sorts of products and shows up everywhere: in your pocket, in your vacuum, and on the radio frequency identification (RFID) tags that track your luggage at the airport.

Well, there’s also a sort of Moore’s Law corollary that’s taking place with people, too. As technology becomes faster and cheaper and developments like open source software, cloud computing, software as a service (SaaS), and outsourcing push technology costs even lower, tech skills are being embedded inside more and more job functions. What this means is that even if you’re not expecting to become the next Tech Titan, your career will doubtless be shaped by the forces of technology. Make no mistake about it—there isn’t a single modern managerial discipline that isn’t being deeply and profoundly impacted by tech.

Finance

Many business school students who study finance aspire to careers in investment banking. Many i-bankers will work on IPOs, or initial public stock offerings, in effect helping value companies the first time these firms wish to sell their stock on the public markets. IPO markets need new firms, and the tech industry is a fertile ground that continually sprouts new businesses like no other. Other i-bankers will be involved in valuing merger and acquisition (M&A) deals, and tech firms are active in this space, too. Leading tech firms are flush with cash and constantly on the hunt for new firms to acquire. Cisco bought forty-eight firms in the prior decade; Oracle bought five firms in 2009 alone. And even in nontech industries, technology impacts nearly every endeavor as an opportunity catalyst or a disruptive wealth destroyer. The aspiring investment banker who doesn’t understand the role of technology in firms and industries can’t possibly provide an accurate estimate of how much a company is worth.

Table 1.1 Top Acquirers of VC-Backed Companies 2000–2009

Acquiring Company Acquisitions
Cisco 48
IBM 35
Microsoft 30
EMC Corporation 25
Oracle Corp. 23
Broadcom 18
Symantec 18
Hewlett-Packard 18
Google 17
Sun Microsystems 16

Those in other finance careers will be lending to tech firms and evaluating the role of technology in firms in an investment portfolio. Most of you will want to consider tech’s role as part of your personal investments. And modern finance simply wouldn’t exist without tech. When someone arranges for a bridge to be built in Shanghai, those funds aren’t carried in a suitcase—they’re digitally transferred from bank to bank. And forces of technology blasted open the two-hundred-year-old floor trading mechanism of the New York Stock Exchange, in effect forcing the NYSE to sell shares in itself to finance the acquisition of technology-based trading platforms that were threatening to replace it. As another example of the importance of tech in finance, consider that Boston-based Fidelity Investments, one of the nation’s largest mutual fund firms, spends roughly $2.8 billion a year on technology. Tech isn’t a commodity for finance—it’s the discipline’s lifeblood.

Accounting

If you’re an accountant, your career is built on a foundation of technology. The numbers used by accountants are all recorded, stored, and reported by information systems, and the reliability of any audit is inherently tied to the reliability of the underlying technology. Increased regulation, such as the heavy executive penalties tied to the Sarbanes-Oxley Act (also known as Sarbox or SOX; U.S. legislation enacted in the wake of the accounting scandals of the early 2000s, it raises executive and board responsibility and ties criminal penalties to certain accounting and financial violations), has ratcheted up the importance of making sure accountants (and executives) get their numbers right. Negligence could mean jail time. This means the link between accounting and tech has never been tighter, and the stakes for ensuring systems accuracy have never been higher.

Business students might also consider that while accounting firms regularly rank near the top of BusinessWeek’s “Best Places to Start Your Career” list, many of the careers at these firms are highly tech-centric. Every major accounting firm has spawned a tech-focused consulting practice, and in many cases, these firms have grown to be larger than the accounting services functions from which they sprang. Today, Deloitte’s tech-centric consulting division is larger than the firm’s audit, tax, and risk practices. At the time of its spin-off, Accenture was larger than the accounting practice at former parent Arthur Andersen (Accenture executives are also grateful they split before Andersen’s collapse in the wake of the prior decade’s accounting scandals). Now, many accounting firms that had previously spun off technology practices are once again building up these functions, finding strong similarities between the skills of an auditor and skills needed in emerging disciplines such as information security and privacy.

Marketing

Technology has thrown a grenade onto the marketing landscape, and as a result, the skill set needed by today’s marketers is radically different from what was leveraged by the prior generation. Online channels have provided a way to track and monitor consumer activities, and firms are leveraging this insight to understand how to get the right product to the right customer, through the right channel, with the right message, at the right price, at the right time. The success or failure of a campaign can often be immediately assessed based on online activity such as Web site visit patterns and whether a campaign results in an online purchase.

The ability to track customers, analyze campaign results, and modify tactics has amped up the return on investment of marketing dollars, with firms increasingly shifting spending from tough-to-track media such as print, radio, and television to the Web (J. Pontin, “But Who’s Counting?” Technology Review, March/April 2009). And new channels continue to emerge. Firms as diverse as Southwest Airlines, Starbucks, UPS, and Zara have introduced apps for the iPhone and iPod touch. In less than four years, the iPhone has emerged as a channel capable of reaching over 75 million consumers, delivering location-based messages and services, and even allowing for cashless payment.

The rise of social media is also part of this blown-apart marketing landscape. Now all customers can leverage an enduring and permanent voice, capable of broadcasting word-of-mouth influence in ways that can benefit and harm a firm. Savvy firms are using social media to generate sales, improve their reputations, better serve customers, and innovate. Those who don’t understand this landscape risk being embarrassed, blindsided, and out of touch with their customers.

Search engine marketing (SEM), search engine optimization (SEO), customer relationship management (CRM), personalization systems, and a sensitivity to managing the delicate balance between gathering and leveraging data and respecting consumer privacy are all central components of the new marketing toolkit. And there’s no looking back—tech’s role in marketing will only grow in prominence.

Operations

A firm’s operations management function is focused on producing goods and services, and operations students usually get the point that tech is the key to their future. Quality programs, process redesign, supply chain management, factory automation, and service operations are all tech-centric. These points are underscored in this book as we introduce several examples of how firms have designed fundamentally different ways of conducting business (and even entirely different industries), where value and competitive advantage are created through technology-enabled operations.

Human Resources

Technology helps firms harness the untapped power of employees. Knowledge management systems are morphing into social media technologies—social networks, wikis, and Twitter-style messaging systems that can accelerate the ability of a firm to quickly organize and leverage teams of experts. Human resources (HR) directors are using technology for employee training, screening, and evaluation. The accessibility of end-user technology means that every employee can reach the public, creating an imperative for firms to set policy on issues such as firm representation and disclosure and to continually monitor and enforce policies as well as capture and push out best practices. The successful HR manager recognizes that technology continually changes an organization’s required skill sets, as well as employee expectations.

The hiring and retention practices of the prior generation are also in flux. Recruiting hasn’t just moved online; it’s now grounded in information systems that scour databases for specific skill sets, allowing recruiters to cast a wider talent net than ever before. Job seekers are writing résumés with keywords in mind, aware that the first cut is likely made by a database search program, not a human being. The rise of professional social networks also puts added pressure on employee satisfaction and retention. HR managers once fiercely guarded employee directories for fear that a headhunter or competing firm might raid top talent. Now the equivalent of a corporate directory can be easily pulled up via LinkedIn, a service complete with discreet messaging capabilities that allow competitors to rifle-scope target your firm’s best and brightest. Thanks to technology, the firm that can’t keep employees happy, engaged, and feeling valued has never been more vulnerable.

The Law

And for those looking for careers in corporate law, many of the hottest areas involve technology. Intellectual property, patents, piracy, and privacy are all areas where activity has escalated dramatically in recent years. The number of U.S. patent applications waiting approval has tripled in the past decade, while China saw a threefold increase in patent applications in just five years (J. Schmid and B. Poston, “Patent Backlog Clogs Recovery,” Milwaukee Journal Sentinel, August 15, 2009). Firms planning to leverage new inventions and business methods need legal teams with the skills to sleuth out whether a firm can legally do what it plans to. Others will need legal expertise to help them protect proprietary methods and content, as well as to help enforce claims in the home country and abroad.

Information Systems Careers

While the job market goes through ebbs and flows, recent surveys have shown there to be more IT openings than in any field except health care (2009 figures from http://www.indeed.com). Money magazine ranked tech jobs as two of the top five “Best Jobs in America” (CNNMoney, “Best Jobs in America,” 2009, http://money.cnn.com/magazines/moneymag/bestjobs/2009/snapshots/1.html). BusinessWeek ranks consulting (which heavily hires tech grads) and technology as the second and third highest paying industries for recent college graduates (L. Gerdes, “The Best Places to Launch a Career,” BusinessWeek, September 15, 2008). Technology careers have actually ranked among the safest careers during the most recent downturn (T. Kaneshige, “Surprise! Tech Is a Safe Career Choice Today,” InfoWorld, February 4, 2009). And Fortune’s list of the “Best Companies to Work For” is full of technology firms and has been topped by a tech business for four years straight (Fortune, “Best Companies to Work For,” 2007–2010; for the 2010 list, see http://money.cnn.com/magazines/fortune/bestcompanies/2010/full_list/index.html).

Students studying technology can leverage skills in ways that range from the highly technical to those that emphasize a tech-centric use of other skills. Opportunities for programmers abound, particularly for those versed in new technologies, but there are also roles for experts in areas such as user-interface design (who work to make sure systems are easy to use), process design (who leverage technology to make firms more efficient), and strategy (who specialize in technology for competitive advantage). Nearly every large organization has its own information systems department. That group not only ensures that systems get built and keep running but also increasingly takes on strategic roles targeted at proposing solutions for how technology can give the firm a competitive edge. Career paths allow for developing expertise in a particular technology (e.g., business intelligence analyst, database administrator, social media manager), while project management careers leverage skills in taking projects from conception through deployment.

Even in consulting firms, careers range from hard-core programmers who “build stuff” to analysts who do no programming but might work identifying problems and developing a solutions blueprint that is then turned over to another team to code. Careers at tech giants like Apple, Google, and Microsoft don’t all involve coding end-user programs either. Each of these firms has their own client-facing staff that works with customers and partners to implement solutions. Field engineers at these firms may work as part of a sales team to show how a given company’s software and services can be used. These engineers often put together prototypes that are then turned over to a client’s in-house staff for further development. An Apple field engineer might show how a firm can leverage podcasting in its organization, while a Google field engineer can help a firm incorporate search, banner, and video ads into its online efforts. Careers that involve consulting and field engineering are often particularly attractive for those who enjoy working with an ever-changing list of clients and problems across various industries and in many different geographies.

Upper-level career opportunities are also increasingly diverse. Consultants can become partners who work with the most senior executives of client firms, helping identify opportunities for those organizations to become more effective. Within a firm, technology specialists can rise to be chief information officer or chief technology officer—positions focused on overseeing a firm’s information systems development and deployment. And many firms are developing so-called C-level specialties in emerging areas with a technology focus, such as chief information security officer (CISO) and chief privacy officer (CPO). Senior technology positions may also be a ticket to the chief executive’s suite. A recent Fortune article pointed out how the prominence of technology provides a training ground for executives to learn the breadth and depth of a firm’s operations, to understand the ways in which firms are vulnerable to attack, and to see where they can leverage opportunities for growth.J. Fortt, “Tech Execs Get Sexy,” Fortune, February 12, 2009.

Your Future

With tech at the center of so much change, realize that you may very well be preparing for careers that don’t yet exist. But by studying the intersection of business and technology today, you develop a base to build upon and critical thinking skills that will help you evaluate new, emerging technologies. Think you can afford to wait on tech study, then quickly get up to speed? Think about it. Whom do you expect to have an easier time adapting to and leveraging a technology like social media—today’s college students who are immersed in technology or their parents who are embarrassingly dipping their toes into the waters of Facebook? Those who put off an understanding of technology risk being left in the dust.

Consider the nontechnologists who have tried to enter the technology space these past few years. News Corp. head Rupert Murdoch piloted his firm to the purchase of MySpace only to see this one-time leader lose share to rivals.O. Malik, “MySpace, R.I.P.,” GigaOM, February 10, 2010. Former Warner executive Terry Semel presided over Yahoo!’s malaise as Google blasted past it.J. Thaw, “Yahoo’s Semel Resigns as Chief amid Google’s Gains,” Bloomberg, June 18, 2007. Barry Diller, the man widely credited with creating the Fox Network, led InterActive Corp (IAC) in the acquisition of a slew of tech firms ranging from Expedia to Ask.com, only to break the empire up as it foundered.G. Fabrikant and M. Helft, “Barry Diller Conquered. Now He Tries to Divide,” New York Times, March 16, 2008. And Time Warner head Jerry Levin presided over the acquisition of AOL, executing what many consider to be one of the most disastrous mergers in U.S. business history.J. Quinn, “Final Farewell to Worst Deal in History—AOL-Time Warner,” Telegraph (UK), November 21, 2009. Contrast these executives with the technology-centric successes of Mark Zuckerberg (Facebook), Steve Jobs (Apple), and Sergey Brin and Larry Page (Google).

While we’ll make it abundantly clear that a focus solely on technology is a recipe for disaster, a business perspective that lacks an appreciation for tech’s role is also likely to be doomed. At this point in history, technology and business are inextricably linked, and those not trained to evaluate and make decisions in this ever-shifting space risk irrelevance, marginalization, and failure.

Key Takeaways

  • As technology becomes cheaper and more powerful, it pervades more industries and is becoming increasingly baked into what were once nontech functional areas.
  • Technology is impacting every major business discipline, including finance, accounting, marketing, operations, human resources, and the law.
  • Tech jobs rank among the best and highest-growth positions, and tech firms rank among the best and highest-paying firms to work for.
  • Information systems (IS) jobs are profoundly diverse, ranging from those that require heavy programming skills to those that are focused on design, process, project management, privacy, and strategy.

Questions and Exercises

  1. Look at Fortune’s “Best Companies to Work For” list. How many of these firms are technology firms? Which firms would you like to work for? Are they represented on this list?
  2. Look at BusinessWeek’s “Best Places to Start Your Career” list. Is the firm you mentioned above also on this list?
  3. What are you considering studying? What are your short-term and long-term job goals? What role will technology play in that career path? What should you be doing to ensure that you have the skills needed to compete?
  4. Which jobs that exist today likely won’t exist at the start of the next decade? Based on your best guess on how technology will develop, can you think of jobs and skill sets that will likely emerge as critical five and ten years from now?

1.4 The Pages Ahead

Learning Objective

After studying this section you should be able to do the following:

  1. Understand the structure of this text, the issues and examples that will be introduced, and why they are important.

Hopefully this first chapter has helped get you excited for what’s to come. The text is written in a style meant to be as engaging as the material you’ll be reading for the rest of your management career—articles in business magazines and newspapers. The text is also example rich: every concept introduced or technology discussed is grounded in a real-world example to show why it’s important. But also know that while we celebrate successes and expose failures in that space where business and technology come together, we also recognize that firms and circumstances change. Today’s winners have no guarantee of sustained dominance. What you should acquire in the pages that follow is a fourfold set of benefits that (1) provide a description of what’s happening in industry today, (2) offer an introduction to key business and technology concepts, (3) offer a durable set of concepts and frameworks that can be applied even as technologies and industries change, and (4) develop critical thinking that will serve you well throughout your career as a manager.

Chapters don’t have to be read in order, so feel free to bounce around, if you’d like. But here’s what you can expect:

Chapter 2 “Strategy and Technology: Concepts and Frameworks for Understanding What Separates Winners from Losers” focuses on building big-picture skills to think about how to leverage technology for competitive advantage. Technology alone is rarely the answer, but through a rich set of examples, we’ll show how firms can weave technology into their operations in ways that create and reinforce resources that can garner profits while repelling competitors. A mini case examines tech’s role at FreshDirect, a firm that has defied the many failures in the online grocery space and devastated traditional rivals. Blue Nile, Dell, Lands’ End, TiVo, and Yahoo! are among the many firms providing a rich set of examples illustrating successes and failures in leveraging technology. The chapter will show how firms use technology to create and leverage brand, scale economies, switching costs, data assets, network effects, and distribution channels. We’ll introduce how technology relates to two popular management frameworks—the value chain and the five forces model. And we’ll provide a solid decision framework for considering the controversial and often misunderstood role that technology plays among firms that seek an early-mover advantage.

In Chapter 3 “Zara: Fast Fashion from Savvy Systems”, we see how a tech-fed value chain helped Spanish clothing giant Zara craft a counterintuitive model that seems to defy all conventional wisdom in the fashion industry. We’ll show how Zara’s model differs radically from that of the firm it displaced to become the world’s top clothing retailer: Gap. We’ll see how technology impacts product design, product development, marketing, cycle time, inventory management, and customer loyalty and how technology decisions influence broad profitability that goes way beyond the cost-of-goods thinking common among many retailers. We’ll also offer a mini case on Fair Factories Clearinghouse, an effort highlighting the positive role of technology in improving ethical business practices. Another mini case shows the difference between thinking about technology versus broad thinking about systems, all through an examination of how high-end fashion house Prada failed to roll out technology that on the surface seemed very similar to Zara’s.

Chapter 4 “Netflix: The Making of an E-commerce Giant and the Uncertain Future of Atoms to Bits” tramples the notion that dot-com start-up firms can’t compete against large, established rivals. We’ll show how information systems at Netflix created a set of assets that grew in strength and remains difficult for rivals to match. The economics of pure-play versus brick-and-mortar firms is examined, and we’ll introduce managerial thinking on various concepts such as the data asset, personalization systems (recommendation engines and collaborative filtering), the long tail and the implications of technology on selection and inventory, crowdsourcing, using technology for novel revenue models (subscription and revenue-sharing with suppliers), forecasting, and inventory management. The case ends with a discussion of Netflix’s uncertain future, where we present how the shift from atoms (physical discs) to bits (streaming and downloads) creates additional challenges. Issues of licensing and partnerships, revenue models, and delivery platforms are all discussed.

Chapter 5 “Moore’s Law: Fast, Cheap Computing and What It Means for the Manager” focuses on understanding the implications of technology change for firms and society. The chapter offers accessible definitions for technologies impacted by Moore’s Law, but goes beyond semiconductors and silicon to show how rapid advances in magnetic storage (e.g., hard drives) and networking create markets filled with uncertainty and opportunity. The chapter will show how tech has enabled the rise of Apple and Amazon, created mobile phone markets that empower the poor worldwide, and produced five waves of disruptive innovation over five decades. We’ll also show how Moore’s Law, perhaps the greatest economic gravy train in history, will inevitably run out of steam as the three demons of heat, power, and limits on shrinking transistors halt the advancement of current technology. Studying technologies that “extend” Moore’s Law, such as multicore semiconductors, helps illustrate both the benefit and limitation of technology options, and in doing so, helps develop skills around recognizing the pros and cons of a given innovation. Supercomputing, grid, and cloud computing are introduced through examples that show how these advances are changing the economics of computing and creating new opportunity. Finally, issues of e-waste are explored in a way that shows that firms not only need to consider the ethics of product sourcing, but also the ethics of disposal.

In Chapter 6 “Understanding Network Effects”, we’ll see how technologies, services, and platforms can create nearly insurmountable advantages. Tech firms from Facebook to Intel to Microsoft are dominant because of network effects—the idea that some products and services get more valuable as more people use them. Studying network effects creates better decision makers. The concept is at the heart of technology standards and platform competition, and understanding network effects can help managers choose technologies that are likely to win, hopefully avoiding getting caught with a failed, poorly supported system. Students learn how network effects work and why they’re difficult to unseat. The chapter ends with an example-rich discussion of various techniques that one can use to compete in markets where network effects are present.

Chapter 7 “Peer Production, Social Media, and Web 2.0” explores business issues behind several services that have grown to become some of the Internet’s most popular destinations. Peer production and social media are enabling new services and empowering the voice of the customer as never before. In this chapter, students learn about various technologies used in social media and peer production, including blogs, wikis, social networking, Twitter, and more. Prediction markets and crowdsourcing are introduced, along with examples of how firms are leveraging these concepts for insight and innovation. Finally, students are offered guidance on how firms can think SMART by creating a social media awareness and response team. Issues of training, policy, and response are introduced, and technologies for monitoring and managing online reputations are discussed.

Chapter 8 “Facebook: Building a Business from the Social Graph” will allow us to study success and failure in IS design and deployment by examining one of the Web’s hottest firms. Facebook is one of the most accessible and relevant Internet firms to so many, but it’s also a wonderful laboratory to discuss critical managerial concepts. The founding story of Facebook introduces concepts of venture capital, the board of directors, and the role of network effects in entrepreneurial control. Feeds show how information, content, and applications can spread virally, but also introduce privacy concerns. Facebook’s strength in switching costs demonstrates how it has been able to envelop additional markets from photos to chat to video and more. The failure of the Beacon system shows how even bright technologists can fail if they ignore the broader procedural and user implications of an information systems rollout. Social networking advertising is contrasted with search, and the perils of advertising alongside social media content are introduced. Issues of predictors and privacy are covered. And the case allows for a broader discussion on firm value and what Facebook might really be worth.

Chapter 9 “Understanding Software: A Primer for Managers” offers a primer to help managers better understand what software is all about. The chapter offers a brief introduction to software technologies. Students learn about operating systems, application software, and how these relate to each other. Enterprise applications are introduced, and the alphabet soup of these systems (e.g., ERP, CRM, and SCM) is accessibly explained. Various forms of distributed systems (client-server, Web services, messaging) are also covered. The chapter provides a managerial overview of how software is developed, offers insight into the importance of Java and scripting languages, and explains the differences between compiled and interpreted systems. System failures, total cost of ownership, and project risk mitigation are also introduced. The array of concepts covered helps a manager understand the bigger picture and should provide an underlying appreciation for how systems work that will serve even as technologies change and new technologies are introduced.

The software industry is changing radically, and that’s the focus of Chapter 10 “Software in Flux: Partly Cloudy and Sometimes Free”. The issues covered in this chapter are front and center for any firm making technology decisions. We’ll cover open source software, software as a service, hardware clouds, and virtualization. Each topic is introduced by discussing advantages, risks, business models, and examples of their effective use. The chapter ends by introducing issues that a manager must consider when making decisions as to whether to purchase technology, contract or outsource an effort, or develop an effort in-house.

In Chapter 11 “The Data Asset: Databases, Business Intelligence, and Competitive Advantage”, we’ll study data, which is often an organization’s most critical asset. Data lies at the heart of every major discipline, including marketing, accounting, finance, operations, forecasting and planning. We’ll help managers understand how data is created, organized, and effectively used. We’ll cover limitations in data sourcing, issues in privacy and regulation, and tools for access including various business intelligence technologies. A mini case on Wal-Mart shows data’s use in empowering a firm’s entire value chain, while the mini case on Harrah’s shows how data-driven customer relationship management is at the center of creating an industry giant.

Chapter 12 “A Manager’s Guide to the Internet and Telecommunications” unmasks the mystery of the Internet—it shows how the Internet works and why a manager should care about IP addresses, IP networking, the DNS, peering, and packet versus circuit switching. We’ll also cover last-mile technologies and the various strengths and weaknesses of getting a faster Internet to a larger population. The revolution in mobile technologies and the impact on business will also be presented.

Chapter 13 “Information Security: Barbarians at the Gateway (and Just About Everywhere Else)” helps managers understand attacks and vulnerabilities and how to keep end users and organizations more secure. Breaches at TJX and Heartland and the increasing vulnerability of end-user systems have highlighted how information security is now the concern of the entire organization, from senior executives to front-line staff. This chapter explains what’s happening with respect to information security—what kinds of attacks are occurring, who is doing them, and what their motivation is. We’ll uncover the source of vulnerabilities in systems: human, procedural, and technical. Hacking concepts such as botnets, malware, phishing, and SQL injection are explained using plain, accessible language. Also presented are techniques to improve information security both as an end user and within an organization. The combination of current issues and their relation to a broader framework for security should help you think about vulnerabilities even as technologies and exploits change over time.

Chapter 14 “Google: Search, Online Advertising, and Beyond” discusses one of the most influential and far-reaching firms in today’s business environment. As pointed out earlier, a decade ago Google barely existed, but it now earns more ad revenue and is a more profitable media company than any firm, online or off. Google is a major force in modern marketing, research, and entertainment. In this chapter you’ll learn how Google (and Web search in general) works. Issues of search engine ranking, optimization, and search infrastructure are introduced. Students gain an understanding of search advertising and other advertising techniques, ad revenue models such as CPM and CPC, online advertising networks, various methods of customer profiling (e.g., IP addresses, geotargeting, cookies), click fraud, fraud prevention, and issues related to privacy and regulation. The chapter concludes with a broad discussion of how Google is evolving (e.g., Android, Chrome, Apps, YouTube) and how this evolution is bringing it into conflict with several well-funded rivals, including Amazon, Apple, Microsoft, and more.

Nearly every industry and every functional area is increasing its investment in and reliance on information technology. With opportunity comes trade-offs: research has shown that a high level of IT investment is associated with a more frenzied competitive environment.E. Brynjolfsson, A. McAfee, M. Sorell, and F. Zhu, “Scale without Mass: Business Process Replication and Industry Dynamics,” SSRN, September 30, 2008. But while the future is uncertain, we don’t have the luxury to put on the brakes or dial back the clock—tech’s impact is here to stay. Those firms that emerge as winners will treat IT efforts “as opportunities to define and deploy new ways of working, rather than just projects to install, configure, or integrate systems.”A. McAfee and E. Brynjolfsson, “Dog Eat Dog,” Sloan Management Review, April 27, 2007. The examples, concepts, and frameworks in the pages that follow will help you build the tools and decision-making prowess needed for victory.

Key Takeaways

  • This text contains a series of chapters and cases that expose durable concepts, technologies, and frameworks, and does so using cutting-edge examples of what’s happening in industry today.
  • While firms and technologies will change, and success at any given point in time is no guarantee of future victory, the issues illustrated and concepts acquired should help shape a manager’s decision making in a way that will endure.

Questions and Exercises

  1. Which firms do you most admire today? How do these firms use technology? Do you think technology gives them an advantage over rivals? Why or why not?
  2. What areas covered in this book are most exciting? Most intimidating? Which do you think will be most useful?

Human Psychology

Licensing Information

This text was adapted by #OpenCourseWare under an Attribution 4.0 International (CC BY 4.0) license.

Chapter 1: The Science of Psychology

  • Science and Psychology
  • Research Methods in Psychology
  • Psychology, Human Potential and Self-Control

Chapter 2: Biology and Human Potential

  • Evolution: Adaptation through Natural Selection
  • The Nervous and Endocrine Systems
  • Self-Control and Biological Psychology

Chapter 3: Sensation, Perception and Human Potential

  • Sensation and Human Potential
  • Vision
  • Hearing and other Senses
  • Perception and Human Potential

Chapter 4: Emotion, Motivation and Human Potential

  • Emotion and Human Potential
  • Human Motivation
  • Motivation and Human Potential

Chapter 5: Direct Learning and Human Potential

  • Predictive Learning and Human Potential
  • Control Learning and Human Potential
  • Adaptive Learning Applications

Chapter 6: Indirect Learning and Human Potential

  • Observational Learning
  • Speech and Language
  • Memory

Chapter 7: Cognition, Intelligence and Human Potential

  • Knowledge, Skills and Human Potential
  • Tools, Technology and the Human Condition
  • Individual Differences

Chapter 8: Lifespan Development of Human Potential

  • Fetal and Infant Development
  • Child Development
  • Theories of Development
  • Adolescence and Adulthood

Chapter 9: Personality and Human Potential

  • Trait Theories of Personality
  • Personality and Nature/Nurture

Chapter 10: Social Influences on the Development of Human Potential

  • Compliance, Conformity and Obedience
  • Social Roles and Bystander Apathy
  • Group Cohesiveness, Attitudes and Prejudice

Chapter 11: Problems in the Development of Human Potential

  • Psychiatry and Clinical Psychology
  • DSM 5 – Neurodevelopmental and Schizophrenic Spectrum Disorders
  • DSM 5 – Bipolar, Depressive and Anxiety Disorders
  • DSM 5 – Other Disorders

Chapter 12: The Science of Psychology and Human Potential

  • The Scientist Practitioner Model of Professional Psychology
  • Preventing Behavioral Problems and Realizing Human Potential
  • Afterword

Chapter 12: The Science of Psychology and Human Potential

Learning Objectives

  • Describe how adaptive learning principles and procedures are used in applied behavior analysis (ABA) to treat behavioral excesses and deficits
  • Describe the research findings that led to Seligman’s learned helplessness model of depression
  • Describe cognitive behavioral treatment of major depressive disorder
  • Describe Marlatt’s findings regarding the likelihood and causes of relapse in addictive disorders
  • Describe how multi-systemic interventions applied in the home, school, and community have been used to treat conduct disorder

The Scientist Practitioner Model of Professional Psychology

The most valuable natural resource on the planet earth is not diamonds, gold, or energy; it is the potential of every human being to understand and impact upon nature. Development of this potential resulted in the discovery of diamonds, gold, fossil fuels, and nuclear energy; it resulted in the transformation of Manhattan and much of the rest of the earth since the Scientific Revolution. Development of this potential resulted in the beauty and creativity intrinsic to architecture, music, and the arts. Development of this potential resulted in what we know as civilization.

Over the past 11 chapters, we described how the science of psychology helps us understand the many different ways that experience interacts with our genes to influence our thoughts, emotions, and actions. As humans acquired new knowledge and skills, we altered our environment; as we altered our environment, it became necessary to develop new knowledge and skills. Permanently recording human progress sped up the transformation from the Stone Age to the Information Age (Kurzweil, 2001). Humans are at an unprecedented point in time. Application of the scientific method to understanding nature is a two-edged sword. This knowledge and technological capability has enabled humans to extend our reach beyond our planet. At the same time this capability is threatening our very survival on earth. We need to ensure that we continue to eat, survive, and reproduce. Otherwise, it will not matter what we think it is all about!

Often, we contrasted the adaptive requirements of the Stone Age with our current human-constructed conditions. Still-existing indigenous tribes often have elders believed to have special powers or knowledge to help those suffering from physical or behavioral problems. As a science, psychology has progressed in the accumulation of knowledge and technology since its beginnings in Wundt’s lab. In the previous chapter, we saw how the profession of clinical psychology uses empirically validated, learning-based procedures to successfully ameliorate severe psychiatric and psychological disorders. In the rainforest, parents, relatives, and band members are responsible for raising children to execute their culturally defined roles. In this chapter, we will examine other examples of the application of professional psychology to assist individuals in adapting to their roles within our complex, technologically enhanced cultural institutions.

Academic and Research Psychology

College professors in all disciplines conduct scholarly research in their areas of specialization. Empirical research provides the foundation of the scientist-practitioner model of professional psychology. This model emphasizes the complementary connection between basic and applied research and professional practice. We have seen that if the requirements of internal and external validity are satisfied, it is possible to come to cause-effect conclusions regarding the effectiveness of specific therapeutic procedures in modifying specific behavioral problems. Ethical practice requires remaining current and basing one’s clinical strategy on the results of such research. Throughout this book, we have described the findings and implications of correlational and experimental research conducted in the laboratory and field. Most of the individuals carrying out that research are academic and research psychology faculty members possessing doctoral degrees in departments of Psychology and related disciplines (e.g., Cognitive Science, Human Development, Neuroscience, etc.). Much of the research relates to the basic psychological processes described in Chapters 2 through 7 (biological psychology, perception, motivation, learning, and cognition). Other research relates to the holistic issues involved in normal and problematic human personality and social development described in Chapters 8 through 11.

Prior to the Second World War, psychology was almost exclusively an academic discipline with a small number of practitioners. During and after the war, psychiatrists requested help from psychologists in providing treatment for soldiers. The government funded the development of clinical psychology programs to meet this increased demand for services. It became necessary to develop a standardized curriculum to train psychological practitioners (Frank, 1984). In 1949, a conference was held at the University of Colorado at Boulder to achieve this objective (Baker & Benjamin, 2000). A scientist-practitioner model was adopted for American Psychological Association accreditation in clinical psychology. The rationale was that in the same way that medical practice is based upon research findings from the biological sciences, clinical practice should be based upon findings from the content areas of psychology. After the Second World War, significant changes occurred in another APA division besides clinical psychology. In 1951, the name of the Division of Personnel and Guidance Psychologists was changed to the Division of Counseling Psychology. This change reflected the fact that individuals in this division often worked side-by-side with clinical psychologists on lifestyle issues besides those related to work.

Although it was not an objective at the time, this grounding in the scientific method was an important first step in the development of the movement to evidence-based practice four decades later. Grounding clinical practice in psychological science was also key to the development of alternatives to the then (by default) prevalent psychodynamic therapeutic model. The Freudian model assumed that psychiatric disorders stemmed from unconscious conflict between impulsive demands of the id and the moral standards of the superego (see Chapter 8). The model postulated the existence of defense mechanisms, such as repression, that prevented the sources of conflict from becoming conscious. Assessment and treatment techniques flowed from this model; it was necessary to circumvent the defense mechanisms in order to bring the sources of conflict to consciousness. The logic of assessment instruments such as the Rorschach inkblots and Thematic Apperception Test was that due to their ambiguity, they would not activate defense mechanisms, thereby enabling individuals to “project” their unconscious thoughts onto the inkblots or pictures.

The reliability of scoring for the Rorschach inkblots has been questioned (Lilienfeld, Wood, & Garb, 2000) and their use challenged in court cases (Gacono & Evans, 2008; Gacono, Evans, & Viglione, 2002). In an enormously influential review of case studies addressing the effectiveness of psychodynamic therapy, Eysenck (1952) concluded that it provided no benefit beyond the passage of time. The time was right after the Second World War to develop a science-based alternative to psychodynamic psychotherapy. This void was filled by learning-principle-based behavior modification, applied behavior analysis, and cognitive-behavior modification interventions (Martin & Pear, 2011, chapter 29). Detailed applications of these approaches will be described below.

Contemporary professional psychologists implement evidence-based psychological procedures to assist individuals in developing their potential. Before we consider applications in schools and at the workplace, we will consider those individuals challenged by serious issues. In the prior chapter, applied behavior analysis and cognitive-behavioral procedures were frequently cited as being effective for treating DSM diagnosed psychiatric disorders. We will now provide more detailed descriptions for these procedures and how they are applied.

A Psychological Model of Maladaptive Behavior

DSM disorder labels still constitute the most widely used terminology for describing behavioral disorders. As we saw in the last chapter, a DSM diagnosis may provide useful information regarding the likely prognosis for behavioral change in the absence of treatment. However, a DSM diagnosis provides minimal information regarding specific interventions for specific thoughts, feelings, or behaviors. Despite this, DSM disorder terms are frequently misunderstood as pseudo-explanations in a way that is not usually characteristic of other non-explanatory illness terms. For example, one is unlikely to conclude that high blood pressure readings are caused by hypertension; in comparison, it is common to conclude that hallucinations are caused by schizophrenia. This problem has led some psychologists to suggest an entirely different approach to describing behavioral disorders. Rather than attempting to identify an underlying “mental illness” (e.g., Autism Spectrum Disorder, Major Depressive Disorder, etc.), the behavior itself is considered the target for treatment. Rather than providing DSM diagnoses for different disorders, problematic behavior is categorized according to the following listing:

  • Behavioral Excesses (e.g., head-banging, repetitive hand movements, crying, etc.)
  • Behavioral Deficits (e.g., lack of speech, failure to imitate, failure to get out of bed in the morning, etc.)
  • Inappropriate External Stimulus Control (e.g., speaking out loud at the library, not paying attention to “Stop” signs, etc.)
  • Inappropriate Internal Stimulus Control (e.g., thinking that one is a failure, not thinking of someone else’s needs, etc.)
  • Inappropriate Reinforcement (e.g., problem drinking, not caring about performance in school, etc.)
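The contrast between a single diagnostic label and this five-category behavioral taxonomy can be sketched as a simple data structure. The following is a minimal illustrative sketch only: the category names come from the list above, while the function name and the sample entries are hypothetical.

```python
# Illustrative sketch: an assessment recorded as specific, observable
# targets under the five categories listed above, rather than as a
# single diagnostic label. Sample entries are hypothetical.

BEHAVIORAL_CATEGORIES = (
    "behavioral_excess",
    "behavioral_deficit",
    "inappropriate_external_stimulus_control",
    "inappropriate_internal_stimulus_control",
    "inappropriate_reinforcement",
)

def add_target(assessment, category, description):
    """Record one observable treatment target under a valid category."""
    if category not in BEHAVIORAL_CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    assessment.setdefault(category, []).append(description)
    return assessment

# A hypothetical assessment for one individual:
assessment = {}
add_target(assessment, "behavioral_deficit", "does not get out of bed before noon")
add_target(assessment, "behavioral_excess", "cries several times per day")
```

Note that each entry describes a behavior and nothing more; the structure has no field for an underlying "illness," mirroring the psychological model's emphasis on the behavior itself as the treatment target.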

One’s thoughts, emotions, or behaviors are maladaptive when they interfere with or prevent achieving personal objectives. A psychological model of maladaptive behavior avoids the issues of pseudo-explanation, reliability, and validity that plague DSM diagnoses. There is no disease to be determined or considered an explanation. Diagnosis consists of a detailed description of the individual’s behaviors and environmental circumstances. Psychologists explain such disorders as resulting from nature/nurture interactions and rely upon experiential treatment approaches (e.g., talking therapies and homework assignments). An underlying assumption is that no matter what the “cause” of maladaptive behavior, it can usually be modified by providing appropriate learning experiences. Assessment of the effectiveness of treatment is based on objective measures of improved adaptation to specific environmental conditions (e.g., performance in school, job performance, interpersonal relations, etc.).

Let us take the example of Major Depressive Disorder. The medical model dictates assessing the extent to which an individual’s symptoms fulfill the requirements of a DSM-5 diagnosis. Different individuals vary, however, in the patterns of their depressed behaviors. Some may not get out of bed in the morning; others might. Some might groom and dress themselves; others not. Some might cry a lot; others not. Some may be lethargic; others not. Some may no longer enjoy their hobbies; others might. Some may no longer perform adequately on their jobs; others might; and so on. Given the enormous number of possible combinations, it is not surprising that arriving at a DSM diagnosis requires a good deal of interpretation and prioritization by the psychiatrist. Reliability issues are inevitable. In contrast, the psychological assessment model results in a detailed behavioral and environmental description tailored to each individual. One would not expect differences in interpretation of whether or not someone gets out of bed in the morning, grooms and dresses themselves, cries, etc. These behavioral descriptions are defined exclusively on the behavioral (dependent variable) side. No one would make the mistake of concluding that a person fails to get out of bed because they fail to get out of bed; this is obviously circular. A DSM diagnosis, however, is seductive. It is tempting to conclude that the person fails to get out of bed as the result of Major Depressive Disorder. We will now see how a psychological model can be applied to help non-verbal individuals with direct learning-based interventions. This will be followed by an application of the model to verbal individuals using indirect-learning procedures.

Treating Behavioral Problems with Non-Verbal Individuals:

Applied Behavior Analysis with Autistic Children

Autism is a severe developmental disorder. It is characterized by an apparent lack of interest in other people, including parents and siblings. A behavioral excess such as head banging not only is likely to result in serious injury but will also interfere with a child’s acquiring important linguistic and social skills. That is, an extreme behavioral excess may result in serious behavioral deficits. Autistic children often display excesses and/or deficits of attention. For example, they may stare at the same object for an entire day (stimulus over-selectivity) or seem unable to focus upon anything for more than a few seconds (stimulus under-selectivity). In the absence of treatment, an autistic child may fail to acquire the most basic self-help skills, such as dressing or feeding oneself or looking before crossing the street. Such children require constant attention from caregivers in order to survive, let alone to acquire the social and intellectual skills requisite to making friends and preparing for school.

The principles of direct learning described in Chapter 5 were predominantly established under controlled conditions with non-verbal animals. It should therefore come as no surprise that procedures based upon these principles have been applied to non-verbal children diagnosed with neurodevelopmental disorders. Ivar Lovaas (1967) pioneered the development and implementation of Applied Behavior Analysis (ABA) as a comprehensive learning program for autistic children. In the absence of effective biological treatment approaches, ABA continues to be the treatment of choice for individuals diagnosed with autism spectrum disorder. An excellent summary of this early work (Lovaas & Newsom, 1976) describes his success in reducing self-destructive behavior (e.g., head-banging) and teaching language using control learning procedures. Reinforcement Therapy, a still inspiring film (Lovaas, 1969), portrays this seminal research.

In Lovaas’s words, “What one usually sees when first meeting an autistic child who is 2, 3, or even 10 years of age is a child who has all the external physical characteristics of a normal child – that is, he has hair, and he has eyes and a nose, and he may be dressed in a shirt and trousers – but who really has no behaviors that one can single out as distinctively “human.” The major job then for a therapist – whether he’s behaviorally oriented or not – would seem to be a very intriguing one, namely the creation or construction of a truly human behavioral repertoire where none exists” (Lovaas & Newsom, 1976, p. 310). Since they are non-verbal and do not imitate, teaching an autistic child can have much in common with training a laboratory animal in a Skinner-box. Initially, one needs to rely on direct learning procedures. Unconditioned reinforcers and punishers (i.e., biologically significant stimuli such as food or shock) serve as consequences for arbitrary (to the child) behaviors.

Behavioral Excesses – Eliminating Self-Injurious Behavior

Some early attempts at eliminating self-injurious behaviors by withdrawing attention (Wolf, Risley, & Mees, 1964) or placing the child in social isolation (Hamilton, Stephens, & Allen, 1967) were successful. However, such procedures tend to be slow-acting and risky in extreme cases. In such instances, presentation of an aversive stimulus (a brief mild shock) may be necessary. Lovaas, Schaeffer, and Simmons (1965) were the first to demonstrate the immediate, long-lasting suppressive effect of contingent shock on tantrums and self-destructive acts with two 5-year-old children. These findings have been frequently replicated using a device known as the SIBIS (Self-Injurious Behavior Inhibiting System), developed through collaboration among psychological researchers, engineers, autism advocates, and medical manufacturers. A sensor module that straps onto the head is attached to a radio transmitter. The sensor can be adjusted for different intensities of impact, and contingent shock can immediately be delivered to the arm or leg. Rapid, substantial reductions in self-injurious behavior were obtained with five previously untreatable older children and young adults using brief mild shocks (Linscheid, Iwata, Ricketts, Williams, & Griffin, 1990).

Behavioral Deficits – Establishing Imitation and Speech

Once interfering behavioral excesses are reduced to manageable levels, it is possible to address behavioral deficits and establish the capability of indirect learning through imitation and language. Perhaps the most disheartening aspect of working with an autistic child is her/his indifference to signs of affection. Smiles, coos, hugs, and kisses are often ignored or rejected. Autistic children are typically physically healthy and good eaters. Therefore, Lovaas and his co-workers were able to work at meal time, making food contingent on specific behaviors. A shaping procedure including prompting and fading was used at first to teach the child to emit different sounds. For example, in teaching the child to say “mama,” the teacher would hold the child’s lips closed and then let go when the child tried to vocalize. This would result in the initial “mmm” (you can try this on yourself). Once this was achieved, the teacher would touch the child’s lips without holding them shut, asking him/her to say “mmm.” Eventually the physical prompt could be eliminated and the verbal prompt would be sufficient. At this point one would ask the child to say “ma,” holding the child’s lips closed while he/she is saying “mmm” and suddenly letting go. This results in an approximation of “ma” that can be refined on subsequent trials. Repeating “ma” produces the desired “mama.” With additional examples, the child gradually acquires the ability to imitate different sounds and words, and the pace of learning picks up considerably.

Once the child is able to imitate what she/he hears, procedures are implemented to teach meaningful speech. Predictive learning procedures are used in which words are paired with the objects they represent, resulting in verbal comprehension. Verbal expression is achieved by rewarding the child for pointing to objects and saying their name. The child is taught to ask questions (e.g., “Is this a book?”) and make requests (e.g., “May I have ice cream?”). After a vocabulary of nouns is established, the child learns about relationships among objects (e.g., “on top of”, “inside of”, etc.) and other parts of speech are taught (e.g., pronouns, adjectives, etc.). Eventually the child becomes capable of describing his/her life (e.g., “What did you have for breakfast?”) and creative storytelling.

Lovaas assessed the extent to which the treatment gains acquired in his program were maintained over a 4-year follow-up in other environments. If the children were discharged to a state institution, they lost the benefits of training: self-injurious behavior, language, and social skills all returned to pre-treatment levels. Fortunately, providing “booster” sessions rapidly reinstated the treatment gains. Those children remaining with their parents (who received instruction in the basic procedures) maintained their treatment gains and, in some instances, continued to improve (Lovaas, Koegel, Simmons, & Long, 1973). An intellectual development disorder often accompanies autism, so it is unrealistic to aspire to age-appropriate grade level for all children. Still, Lovaas (1987) achieved this impressive ideal with 50 percent of the children who started prior to 30 months of age.

Treating Behavioral Problems with Verbal Individuals: Cognitive Behavior Therapy

It is clear how the Applied Behavior Analysis procedures used with non-verbal individuals such as autistic children flow directly from Skinnerian reinforcement and punishment procedures developed with non-speaking animals. Less obvious is how cognitive behavior modification talking therapies relate to the psychology research literature. Their origin can be traced to two books published for the general population a year apart by Albert Ellis: A Guide to Rational Living (1961) and Reason and Emotion in Psychotherapy (1962). Ellis was trained in the Freudian psychodynamic approach to psychotherapy and became dissatisfied with the results obtained with his patients. At best, progress was very slow, and often it did not occur at all. He attributed these poor results to the emphasis upon the past rather than the present and to a passive/non-directive style, as opposed to the more active/directive approach he defined as Rational Emotive Therapy (RET). RET assumed that an individual’s emotional and behavioral reactions to an event result from interpretation of that event. For example, if you were walking along a sidewalk and someone bumped into you, you might react with anger until discovering that the person was blind. Ellis developed a systematic approach to therapy based on identifying one’s irrational thoughts and countering them with more adaptive alternatives.

Part of the reason Ellis’ verbal approach to psychotherapy was not connected with the animal literature is the failure to link that literature to speech by distinguishing between direct and indirect learning. As described in Chapters 5 and 6, word meaning is established through Pavlovian classical conditioning procedures, and speech is maintained by its consequences (i.e., it may be understood as an operant). Martin Seligman (1975) developed a learned helplessness animal model providing the underpinnings of a cognitive analysis of depression, and the psychiatrist Aaron Beck conducted experimental clinical trials comparing the efficacy of cognitive and pharmacologic approaches to the treatment of depression.

Cognitive-Behavioral Treatment of Depression

One might think that if it is possible to socialize an autistic child with extreme behavioral excesses and severely limiting behavioral deficits, treating a healthy, verbal, cooperative adult for a psychological problem would be easy in comparison. However, we need to appreciate the logistical and treatment-implementation issues that arise when working with a free-living individual. Lovaas was able to create a highly controlled environment during the children’s waking hours. It was possible for trained professionals to closely monitor the children’s behavior and immediately provide powerful consequences. In comparison, adult treatment typically consists of weekly 1-hour “talking sessions” and “homework” assignments, where the therapist does not have this degree of access or control. Success depends upon the client following through on suggested actions and accurately reporting what transpires.

Learned Helplessness

God grant me the serenity to accept the things I cannot change; courage to change the things I can; and wisdom to know the difference.

The Serenity Prayer, attributed to Friedrich Oetinger (1702-1782) and Reinhold Niebuhr (1892-1971)

From an adaptive learning perspective, there is much wisdom in the serenity prayer. In our continual quest to survive and to thrive, there are physical, behavioral, educational, cultural, economic, political, and other situational factors limiting our options. Research findings suggest that the inability to affect outcomes can have detrimental effects.

In Seligman’s initial study, dogs were placed in a restraining harness with a panel in front of them. One group did not receive shock. A second group was given an escape learning contingency in which pressing the panel turned off the shock. A third group was “yoked” to the second group and received inescapable shock. That is, yoked subjects received shock at precisely the same times and for the same durations, but could do nothing to control its occurrence. In the second phase, subjects were placed in a shuttle box where they could escape shock by jumping over a hurdle from one side to the other. Dogs not receiving shock, or exposed to escapable shock, failed to escape on only 10 percent of the trials, whereas those exposed to inescapable shock failed to escape on over 50 percent of the trials. Seligman observed that many of these dogs displayed symptoms similar to those characteristic of depressed humans, including lethargy, crying, and loss of appetite. Based upon his findings and observations, Seligman formulated a very influential “learned helplessness” model of clinical depression. He suggested that events such as loss of a loved one or losing a job could result in failure to take appropriate action in unrelated circumstances as well as the development of depressive symptoms. In entertaining and engaging books, Seligman describes how his learned helplessness model helps us understand the etiology (i.e., cause), treatment (Seligman, Maier, & Geer, 1968), and prevention of depression (Seligman, 1975; 1990).
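The logic of the yoked-control design described above can be sketched in a few lines of code. This is a minimal illustrative sketch, not a reconstruction of the actual procedure: the trial count and shock durations are invented, and only the group structure and the reported Phase 2 failure rates (10 percent versus over 50 percent) come from the text.

```python
import random

# Sketch of the triadic ("yoked") design: an escapable-shock subject
# controls shock offset, while its yoked partner receives identical
# exposure with zero control. Durations here are invented numbers.

def phase1_shocks(n_trials, condition, rng):
    """Return per-trial shock durations for one subject in Phase 1."""
    if condition == "no_shock":
        return [0.0] * n_trials
    # The "escape" subject terminates each shock by pressing the panel;
    # durations therefore depend on its behavior (randomized here).
    return [rng.uniform(1.0, 5.0) for _ in range(n_trials)]

rng = random.Random(0)
escape_durations = phase1_shocks(10, "escape", rng)
# Yoking: the inescapable-shock subject gets exactly the durations
# produced by its escapable-shock partner -- same exposure, no control.
yoked_durations = list(escape_durations)

# Phase 2 failure-to-escape rates reported in the text:
reported_failure_rates = {"no_shock": 0.10, "escapable": 0.10, "inescapable": 0.50}
```

The design choice worth noticing is that yoking equates the physical stressor across groups, so any Phase 2 difference must be attributed to controllability rather than to the amount of shock received.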

The key factor in the learned helplessness phenomenon is prior exposure to uncontrollable events. Goodkin (1976) demonstrated that prior exposure to uncontrolled food presentations produced detrimental effects on the acquisition of an escape response similar to those of uncontrolled shock presentations. The “spoiling” effect with appetitive events has also been demonstrated under laboratory conditions using the learned helplessness model. Pigeons exposed to non-contingent delivery of food were slower to acquire a key-pecking response and demonstrated a lower rate of key pecking once it was acquired (Wasserman & Molina, 1975). It is clear from these and other studies that the serenity prayer applies to other animals as well as humans. Successful adaptation requires learning when one does and does not have the ability to control environmental events.

As an example, let us consider someone who becomes depressed after losing a job. Seligman’s learned helplessness research suggests that depression results from a perceived loss of control over significant events. It is as though the person believes, “No matter what I do, it will not matter.” Depression in humans has been related to attributions on three dimensions: internal-external, stable-unstable, and global-specific (Abramson, Seligman, & Teasdale, 1978). With respect to our example, the person is more likely to become depressed if he/she attributes loss of the job to: a personal deficiency such as not being smart (internal) rather than to a downturn in the economy (external); the belief that not being smart is a permanent deficiency (stable) rather than temporary (unstable); and the belief that not being smart will apply to other jobs (global) rather than just the previous one (specific).
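The three attribution dimensions can be made concrete with a small sketch. The dimension poles come from the text; the counting rule (tallying how many attributions fall on the depression-linked pole) is an illustrative simplification, not a clinical instrument.

```python
# Sketch of the three attribution dimensions applied to the job-loss
# example above. The poles (internal/stable/global) associated with
# greater depression risk follow the text; the simple counting rule
# is a hypothetical simplification for illustration.

DEPRESSION_LINKED_POLES = {"internal", "stable", "global"}

def attribution_risk(locus, stability, scope):
    """Count how many of the three attributions fall on the pole
    associated with greater depression risk."""
    return sum(pole in DEPRESSION_LINKED_POLES for pole in (locus, stability, scope))

# "I'm not smart (internal), permanently (stable), for every job (global)"
high_risk = attribution_risk("internal", "stable", "global")
# "The economy turned down (external), temporarily (unstable), in this
# industry only (specific)"
low_risk = attribution_risk("external", "unstable", "specific")
```

On this sketch, the job-loser who blames a permanent, pervasive personal deficiency scores 3 of 3 risk-linked attributions, while the one who blames a temporary, local downturn scores 0.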

Cognitive-behavior therapy for depression would include attempts by the therapist to modify these attributions during therapeutic sessions as well as providing reality-testing exercises as homework assignments. The therapist might challenge the notion that the person is not smart by asking them to recall past job performance successes. They could review the person’s credentials in preparation for a job search. Severe cases of depression might require assignments related to self-care and “small-step” achievements (e.g., making one’s bed, grooming and getting dressed, going out for a walk, etc.). As mentioned previously, homework assignments constitute an essential component of cognitive-behavioral treatment of depression (Jacobson, Dobson, Truax, Addis, Koerner, Gollan, Gortner, & Prince, 1996). Apparently, therapies are effective to the extent that they result in clients experiencing the consequences of their acts under naturalistic circumstances. This finding is consistent with an adaptive learning model of the psychotherapeutic process. That is, therapy is designed to help the individual acquire the necessary skills to cope with their idiosyncratic environmental demands.

Frequently the therapeutic process consists of determining adaptive rules specifying a contingency between a specific behavior and specific consequence. For example, in treating a severely depressed individual, one might start with “If you get out of bed within 30 minutes after the alarm goes off, you can reward yourself with 30 minutes of TV.” This can then be modified to require getting up within 20 minutes, 10 minutes, and 5 minutes. Once this is accomplished, the person may be required to get up and make their bed, get dressed and wash their face, etc. If the person is not severely depressed, it may be sufficient to establish rules such as “After finding appropriate positions in the newspaper and submitting your resume, you can reward yourself with reading your favorite section of the paper.”
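The graduated "if-then" contingency rules described above can be sketched as a small program. The step values (30, 20, 10, and 5 minutes) come from the example in the text; the function names and the advancement rule are hypothetical simplifications.

```python
# Sketch of the graduated "get out of bed" contingency described above:
# the time allowed to earn the reward (30 minutes of TV) is tightened
# step by step as each criterion is met. Step values follow the text;
# everything else is an illustrative simplification.

STEPS_MINUTES = [30, 20, 10, 5]

def reward_earned(minutes_to_rise, step_index):
    """True if the client rose quickly enough to earn the reward at the
    current step of the program."""
    return minutes_to_rise <= STEPS_MINUTES[step_index]

def advance_step(step_index, met_criterion):
    """Move to the next (stricter) step once the current criterion is met;
    stay put otherwise, or at the final step."""
    if met_criterion and step_index < len(STEPS_MINUTES) - 1:
        return step_index + 1
    return step_index
```

Rising within 25 minutes earns the reward at the first step (criterion 30) but not at the second (criterion 20), which is exactly the shaping logic of gradually raising the requirement for reinforcement.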

Preventing Behavioral Problems and Realizing Human Potential

Relapse Prevention

As indicated in our summaries of the treatment results for DSM disorders in the last chapter, in several instances (e.g., autism, depression, addictive disorders, etc.) successful results were not maintained. This is often the result of issues other than the failure to generalize beyond the training environment(s). In Chapter 5, we saw that the extinction process does not “undo” prior learning. Rather, an inhibitory response is acquired that counteracts the previously learned behavior. Adaptive learning procedures have been successful in addressing a wide range of behavioral problems. Successful treatments rely upon the establishment of new behaviors to counteract behavioral excesses and eliminate behavioral deficits. Unfortunately, successful treatment may still be subject to relapse. G. Alan Marlatt published extensively on the conditions likely to result in relapse and developed a strategy for reducing the risk (Marlatt, 1978; Marlatt & Gordon, 1980, 1985; Brownell, Marlatt, Lichtenstein, & Wilson, 1986; Marlatt & Donovan, 2005). Much of this research relates to addictive disorders, which have been shown to undergo remarkably similar relapse patterns. Unless provided with additional training, 70 percent of successfully treated smokers, excessive drinkers, and heroin addicts are likely to relapse within six months (Hunt, Barnett, & Branch, 1971). Marlatt conducted follow-up interviews to track the incidence of relapses and obtain information regarding the circumstances (e.g., time of day, activity, location, presence of others, associated thoughts and feelings). Approximately 75 percent of the relapses were precipitated by negative emotional states (e.g., frustration, anger, anxiety, depression), social pressure, or interpersonal conflict (e.g., with a spouse, family member, friend, or employer/employee). Marlatt also described the “abstinence violation effect,” in which a minor lapse is followed by a full-blown binge.

Relapse prevention methods involve identifying personal high-risk situations and acquiring and practicing coping skills. For example, depending upon one’s environmental demands, any combination of the following treatments may be appropriate: relaxation exercises; desensitization for specific fears or sources of anxiety; anger management; time management; assertiveness training; social-skills training; conflict resolution training; and training in self-assessment and self-control. A review of research applying relapse prevention methods to difficult, recalcitrant substance abuse problems concluded that it was quite successful (Irvin, Bowers, Dunn, & Wang, 1999). It is likely that targeted use of such procedures (e.g., assertiveness training to resist the effects of peer pressure) would improve upon the effectiveness of MST (multisystemic therapy) with conduct disorder.

An adaptive learning perspective requires an extensive analysis of an individual’s environmental demands and coping strategies. Whether in the home, the school, or a free-living environment, there may be a mismatch between the demands and the person’s current skill set. Successful treatment provides the necessary skills to not only cope with the current demands, but also to prepare the individual for predictable stressors and setbacks.

Binge drinking and excessive alcohol consumption pose substantial health risks and negatively impact class attendance and the academic performance of college students (https://www.alcohol.org/teens/college-campuses/). Based upon his extensive research addressing substance-abuse interventions and relapse prevention, Marlatt developed a comprehensive assessment and intervention program called Brief Alcohol Screening and Intervention for College Students (BASICS): A Harm Reduction Approach (Denerin & Spear, 2012; Dimeff, Baer, Kivlahan, & Marlatt, 1998; Marlatt, 1996; Marlatt, Baer, & Larimer, 1995). The program consists of two 1-hour interviews conducted in an empathic, non-judgmental manner. The first interview is followed by an online assessment survey designed to enable the prescription of specific behavioral recommendations based on each student’s responses. In order to reduce the likelihood of relapse, the program provides information and develops skills to counter peer pressure, negative emotions, and other triggers for excessive and binge drinking. A review of randomized controlled trials concluded that the BASICS program resulted in a significant reduction of approximately two drinks per week among college students (Fachini, Aliane, Martinez, & Furtado, 2012).

Prevention of Maladaptive Behavior

An ounce of prevention is worth a pound of cure.

Benjamin Franklin

The Early Risers Program

The current state of the art in treating conduct disorder appears to be a long-term recidivism rate of 50 percent. Hopefully, implementation of relapse prevention techniques and forthcoming research will enable us to improve upon this result. Ideally, we would be able to prevent the problematic behavioral excesses and deficits that comprise the disorder from developing in the first place. The Early Risers “Skills for Success” Conduct Problems Prevention Program attempted to achieve this by working with kindergarten children exhibiting high incidences of aggression (August, Realmuto, Hektner, & Bloomquist, 2001). Similar to MST, Early Risers (ER) focuses upon parent training, peer relations, and school performance: parents are instructed in effective disciplining techniques; children meet with “friendship groups” on a weekly basis during the school year and during a six-week summer session; and a family advocate works with parents on their child’s academic needs, with an emphasis on reading. A 10-year follow-up of high-risk children receiving three intensive years of ER training followed by two booster years found fewer symptoms of conduct disorder, oppositional defiant disorder, or major depressive disorder than in a randomized control condition. Behavioral and academic improvements were evident in the ER condition as early as the first two years, even for the most aggressive children. The authors concluded that the Early Risers program was effective in interrupting the “maladaptive developmental cascade” in which aggressive children “turn off” parents, peers, and teachers, resulting in a downward spiral of social and academic performance (Hektner, August, Bloomquist, Lee, & Klimes-Dougan, 2014).

The Good Behavior Game

Children arrive at school with different levels of preparedness and skills, which often creates classroom management challenges for the teacher. Many different adaptive learning procedures have been implemented successfully to address such problems. The Good Behavior Game (GBG) is a comprehensive program recommended by the Coalition for Evidence-Based Policy (www.evidencebasedprograms.org), a member of the Council for Excellence in Government. The GBG was developed by two teachers and Montrose Wolf (Barrish, Saunders, & Wolf, 1969), one of the founders of the Journal of Applied Behavior Analysis, a very readable and practical publication. In order to play the game, the teacher divides the class into two or three teams of students (the game has been implemented as early as pre-school). The GBG is usually introduced for 10-minute sessions, 3 days a week. Gradually, the session times are increased to a maximum of an hour. A chart is posted in front of the room listing, and providing concrete examples of, inappropriate behaviors such as leaving one’s seat, talking out, or causing disruptions. Any instance of such a behavior is described by the teacher (e.g., “Team 1 gets a check because Mary just talked out without raising her hand”) and a check mark is placed on the chart under the team’s name. The teacher also praises the other groups for behaving (e.g., “Teams 2 and 3 are working very nicely”). At the end of the day, the members of the team with the fewest check marks receive school-related rewards such as free time, lining up first for lunch, or stars on a “winner’s chart.” Usually, it is possible for all groups to receive rewards if they remain below a specified number of inappropriate behaviors. This number may then be reduced over sessions.
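The scoring rules described above can be sketched as a short program. This is a minimal sketch, assuming the criterion-based variant in which every team at or below a specified number of check marks earns the reward; the function name, team names, and criterion value are hypothetical.

```python
# Sketch of Good Behavior Game scoring: teams accrue check marks for
# observed rule violations, and every team at or below the criterion
# earns the day's reward. Team names and the criterion are hypothetical.

def play_session(teams, violations, criterion):
    """teams: list of team names.
    violations: one team-name entry per observed infraction.
    Returns (check_marks_per_team, set_of_rewarded_teams)."""
    marks = {team: 0 for team in teams}
    for team in violations:
        marks[team] += 1
    rewarded = {team for team in teams if marks[team] <= criterion}
    return marks, rewarded

marks, rewarded = play_session(
    ["Team 1", "Team 2", "Team 3"],
    ["Team 1", "Team 1", "Team 3"],  # e.g., two talk-outs on Team 1, one on Team 3
    criterion=1,
)
```

With a criterion of one check mark, Teams 2 and 3 earn the reward while Team 1 does not; tightening the criterion over sessions corresponds to reducing the allowed number of inappropriate behaviors, as the text describes.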

The Good Behavior Game has been tested in two major randomized controlled studies in an urban environment. It was demonstrated to reduce aggression (Dolan, Kellam, Brown, Werthamer-Larson, Rebok, & Mayer, 1993) and increase on-task behavior (Brown, 1993) in 1st-graders, and to reduce aggression (Kellam, Rebok, Ialongo, & Mayer, 1994) and the initiation of smoking (Kellam & Anthony, 1998) in middle school students. A follow-up after the 6th grade found that students experiencing the GBG in the 1st grade had a 60% lower incidence of conduct disorder, a 35% lower likelihood of suspension, and a 29% lower likelihood of requiring mental health services. Perhaps most impressively, 14 years after implementation, the GBG was found to result in a 50% lower rate of lifetime illicit drug abuse, a 59% lower likelihood of smoking 10 or more cigarettes a day, and a 35% lower rate of lifetime alcohol abuse for 19–21-year-old males (Kellam, Brown, Poduska, Ialongo, Petras, Wang, Toyinbo, Wilcox, Ford, & Windham, 2008)! The GBG has also been found to significantly reduce disruptive behavior as early as kindergarten (Donaldson, Vollmer, Krous, Downs, & Beard, 2011). Tingstrom and colleagues reviewed more than 30 years of research evaluating variations of the GBG (Tingstrom, Sterling-Turner, & Wilczynski, 2006). The experimental findings for the effectiveness of the Good Behavior Game have been so consistent and powerful that it has been recommended as an extremely cost-efficient “universal behavioral vaccine” (Embry, 2002).

Health Psychology

It is health that is real wealth and not pieces of gold and silver.

Mahatma Gandhi

We can make a commitment to promote vegetables and fruits and whole grains on every part of every menu. We can make portion sizes smaller and emphasize quality over quantity. And we can help create a culture – imagine this – where our kids ask for healthy options instead of resisting them.

Michelle Obama

It is a truism, consistent with Maslow’s pyramid, that one’s health overrides all other factors in one’s life. If one is not healthy, it can be impossible to enjoy any of life’s social and vocational pleasures or achieve one’s potential. For practically all of our time on this planet, by today’s standards, humans had relatively brief lifespans. As shown in Kurzweil’s (2001) graph (see Figure 7.7), human life expectancy has doubled from approximately 39 to 78 years since 1850! The major causes of this increase are improved sanitary conditions and inoculations against infectious diseases. We now live at a time when the major causes of death in industrialized countries relate to our health practices. Our nutrition (as implied by our First Lady), exercise routines, sleep habits, protective sex practices, use of seat belts, tooth brushing and flossing, adherence to medical regimens, and avoidance of tobacco and excessive alcohol all impact the quality as well as the longevity of our lives (Belloc & Breslow, 1972). A preventive approach emphasizing a prudent lifestyle is the most likely path to continued improvements.

Health psychology has emerged as a sub-discipline of psychology dedicated to “the prevention and treatment of illness, and the identification of etiologic and diagnostic correlates of health, illness and related dysfunction” (Matarazzo, 1980). It is hoped that the knowledge acquired through this discipline will enable the development of lifestyle-related technologies essential to the continuation of the upward trend in human life expectancy. Equally important, it is hoped that the quality of life can be improved, resulting in a greater percentage of individuals realizing their potentials. Health psychologist positions exist for those with master’s as well as doctoral degrees. Often training is linked with other specializations in academic/research (e.g., behavioral neuroscience) or practice (e.g., clinical psychology), or attained after earning the doctorate. Sub-specializations include clinical health psychology, community health psychology, occupational health psychology, and public health psychology.

As mentioned in the previous chapter, it has been found that inclusion and completion of homework assignments is essential to the success of cognitive-behavioral procedures (Burns & Spangler, 2000; Garland & Scott, 2002; Ilardi & Craighead, 1994; Kazantzis, Deane, & Ronan, 2000). Albert Bandura (1977b; 1982; 1986, chapter 9; 1997; Bandura & Adams, 1977; Bandura, Adams, Hardy, & Howells, 1980) coined the term “self-efficacy” to refer to an individual’s expectancy that they are able to perform a specific task. Presumably, successful completion of a homework assignment develops this expectancy. Once acquired, the individual is less prone to discouragement and more likely to act upon the desire to change. In the previous chapter, we observed that in several instances (e.g., depression), even though pharmacological treatment was initially as effective as cognitive-behavioral treatment, the benefits were more likely to be sustained with the learning-based treatment. This can be attributed to the self-efficacy beliefs likely to result from the different approaches. In one instance, the person is likely to attribute success to the effects of the drug. Once it is withdrawn, the person may no longer believe that they can cope. In contrast, after cognitive-behavioral therapy, the person is more likely to believe they have acquired the knowledge and skill to address their problem.

The Health Action Process Approach (see Figure 12.1) emphasizes the importance of different types of self-efficacy in the development of the intent and ability to change health-related behavior (Schwarzer, 2008). We will use smoking as an example. During the motivational stage, even if a smoker perceives smoking as a problem and expects health to improve as the outcome of quitting, the intent to act requires believing that success is possible. During the volitional stage, even if the smoker formulates an effective plan, success requires believing that quitting can be maintained for an extended period of time and that recovery from any lapses is likely. Relapse prevention techniques should result in increased maintenance and recovery self-efficacy.

https://upload.wikimedia.org/wikipedia/commons/3/3b/The_Health_Action_Process_Approach2.jpg

Figure 12.1 The Health Action Process Approach (adapted from Schwarzer, 2008).
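The model’s two-stage structure can be summarized in a small data structure. The Python sketch below is an illustrative simplification of the Health Action Process Approach; the dictionary layout and field labels are shorthand introduced here, not Schwarzer’s own terminology.

```python
# Simplified sketch of the Health Action Process Approach (HAPA).
# Stage names follow Schwarzer (2008); the field labels are shorthand
# introduced here for illustration only.

HAPA_STAGES = {
    "motivational": {
        "inputs": ["risk perception", "outcome expectancies"],
        "self_efficacy": "action self-efficacy (believing success is possible)",
        "result": "intention to change (e.g., deciding to quit smoking)",
    },
    "volitional": {
        "inputs": ["action planning", "coping planning"],
        "self_efficacy": "maintenance and recovery self-efficacy",
        "result": "initiating and sustaining the new behavior",
    },
}

def relevant_self_efficacy(stage):
    """Which type of self-efficacy belief matters at a given stage."""
    return HAPA_STAGES[stage]["self_efficacy"]

print(relevant_self_efficacy("volitional"))
# prints: maintenance and recovery self-efficacy
```

The key point the structure makes explicit is that different self-efficacy beliefs matter at different stages: believing success is possible drives the intention to change, while maintenance and recovery beliefs sustain it.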

“I think I can, I think I can”

The Little Engine that Could by Watty Piper (1930)

Nothing succeeds like success.

Oscar Wilde

There is an extensive research literature documenting the relationship between self-efficacy and successful change in health habits, including smoking (Dijkstra & De Vries, 2000); dietary changes (Gutiérrez-Doña, Lippke, Renner, Kwon, & Schwarzer, 2009), including Michelle Obama’s desired increase in fruit and vegetable consumption (Luszczynska, Tryburcy, & Schwarzer, 2007); and exercise (Luszczynska, Schwarzer, Lippke, & Mazurkiewicz, 2011). For decades, it was believed that improving students’ self-esteem, as opposed to self-efficacy, would improve their school performance. A comprehensive review of the research literature concluded that the relationship was the result of better school performance improving self-esteem rather than the other way around (Baumeister, Campbell, Krueger, & Vohs, 2003). Despite this strong relationship between self-efficacy and successful behavior change, one needs to be careful about concluding cause-and-effect. Self-efficacy can be a seductive pseudo-explanation: does increased self-efficacy lead to improvement, or is it the other way around?

Multisystemic Therapy for Conduct Disorder

It takes a village to raise a child.

African proverb

I chaired the Department of Psychology for 24 years, an unusually long stretch. Often, less-experienced chairs from other departments would ask me about my “administrative style.” Eventually I arrived at the term humanistic ecology to describe my interpretation of the chair’s role (Levy, 2013, pp. 231-232). The same term could be applied to the roles of parent, friend, teacher, mentor, administrator, clergy member, coach, or helping professional. One is even being a “humanistic ecologist” when engaged in a self-control project. From Maslow’s perspective, humanism requires supporting others in their quests to self-actualize. An ecologist studies the relationships between organisms and their environments. Humanistic ecology involves the attempt to identify and create niches in which individuals are able to achieve their self-defined goals and realize their potential while serving the needs of a social group (e.g., family, work colleagues, team, community, nation, etc.).

Sometimes effective treatment for an individual requires coordination between professional psychologists, family, and appropriate community members. We saw that the gains made by autistic children in institutionalized settings could be lost when they returned to their homes. In order to maintain and build upon previously acquired skills, it was necessary to teach family members and significant others to implement direct and indirect learning procedures. In treating conduct disorder, it has been found that successful treatment in one context (e.g., at home) will not necessarily generalize to another context, such as school (Scott, 2002). In Chapter 11, multisystemic therapy (MST) was mentioned as a promising approach to treating severe, intractable cases of conduct disorder. As described, children and adolescents diagnosed with conduct disorder must frequently cope with economic, interpersonal, substance abuse, and criminal justice issues confronting their families and friends. In the same way that treating an individual for malaria would not protect them from contracting the disease when they returned to a mosquito-infested environment, treating a child for conduct disorder would not provide protection from the difficult and discouraging realities they face on a day-to-day basis. Effective long-range treatment requires altering the environmental conditions in order to encourage and sustain desired behavioral changes.

MST is a comprehensive treatment approach to conduct disorder incorporating evidence-based practices in the child’s home, school, and community (Henggeler & Schaeffer, 2010; Scott, 2008; Weiss, Han, Harris, Catron, Ngo, Caron, Gallop, & Guth, 2013). MST targets such serious behavioral excesses as fighting, destroying property, substance abuse, truancy, and running away. Targeted behavioral deficits may include communication (e.g., initiating and sustaining a conversation), social (e.g., sharing and cooperating), and academic (e.g., reading and math) skills. Services are provided in the natural environment as opposed to an office. Treatment is intense, usually consisting of direct contact for about five hours per week for up to six months. Staff members are continuously available at other times to provide assistance and address emergencies. The treatment objective is to transform the child’s environment from one that fosters and sustains the behavioral excesses to one that discourages and eliminates them. This requires a comprehensive, detailed analysis of the antecedents, behaviors, and consequences (i.e., the “ABCs”) within the specific environmental circumstances. Vygotsky’s developmental principles of incorporating zones of proximal development and scaffolding are implemented through the use of prompting, fading, and shaping procedures. A problem-solving process is followed, incorporating the results of continual assessment into an evolving intervention strategy in the different contexts. Family members are taught to systematically monitor the child’s (or adolescent’s) behavior and provide appropriate consequences, including explanations. Communication between parents and consistency in their enforcement of rules are encouraged to prevent a “good cop, bad cop” pattern from emerging. Attempts may be made to monitor and influence the choice of friends, encouraging the development of a peer group of positive role models.
Regular meetings are scheduled between parents and teachers to discuss the behavioral and academic performance of the child. After-school time is monitored and structured carefully to promote studying and decrease the likelihood of engaging in anti-social activities. Role-playing exercises are designed to prepare the child to resist peer pressure to use drugs or engage in delinquent behavior (Henggeler, Melton, & Smith, 1998; Henggeler, Schoenwald, Borduin, Rowland, & Cunningham, 2009). It may not require a village, but MST typically includes a doctoral-level psychologist supervising three or four master’s-level psychologists, each with a caseload of four to six families. Therapists may initially meet with the family on a daily basis, gradually reducing the frequency to once a week (Henggeler & Schaeffer, 2010). Several randomized outcome studies have found MST effective in reducing rearrests and improving behavioral functioning in youths and adolescents diagnosed with conduct disorder (Butler, Baruch, Hickey, & Fonagy, 2011; Timmons-Mitchell, Bender, Kishna, & Mitchell, 2006; Weiss et al., 2013). Follow-up studies ranging from two (Ogden & Hagen, 2006) to 14 years (Schaeffer & Borduin, 2005) have found the effects to be long-lasting. Recidivism rates were 50% for individuals receiving MST in comparison to 81% for those receiving standard care. MST-treated adults (an average of almost 29 years old at follow-up) were arrested 54% less frequently and confined for 57% fewer days (Schaeffer & Borduin, 2005). Meta-analysis is a statistical procedure used to combine the results of several different research studies to determine patterns of findings and to estimate the size of the effects of independent variables. Meta-analyses have confirmed the short- and long-term effectiveness of MST in treating conduct disorder (Curtis, Ronan, & Borduin, 2004; Woolfenden, Williams, & Peat, 2002).
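The core arithmetic of a simple fixed-effect meta-analysis (weighting each study’s effect size by the inverse of its variance, so that larger, more precise studies count for more) can be illustrated in a few lines. The effect sizes and variances below are hypothetical numbers chosen only to demonstrate the calculation; they are not drawn from the MST studies cited above.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled effect size
    and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical standardized effect sizes from three studies; the study
# with the smallest variance (0.02) pulls the pooled estimate toward it.
effects = [0.55, 0.40, 0.70]
variances = [0.04, 0.02, 0.09]

pooled, se = fixed_effect_meta(effects, variances)
print(round(pooled, 3), round(se, 3))  # prints 0.482 0.108
```

Note how the pooled estimate (0.482) sits closer to 0.40, the effect from the most precise study, than a simple average (0.55) would; this is the sense in which meta-analysis "combines" studies rather than merely averaging them.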

The proverbial “round peg in a square hole” is a useful metaphor for the human condition and for humanistic ecology. We are all “pegs” doing our best to fit our current environmental circumstances (“holes”). We are born essentially “shapeless,” requiring that parents and significant others do their best to “shape us up.” Sometimes, those who care require professional assistance to get us to fit comfortably. Usually, professional assistance consists exclusively of trying to change the shape of the peg to conform to the hole. As we saw with autism spectrum and conduct disorders, this approach can be insufficient. Sometimes it is necessary to also change the shape of the hole. Parent training and the more comprehensive multisystemic therapy are examples of this strategy.

Psychology and Human Potential

Upon completing the discussion of psychological approaches to treating and preventing maladaptive behavior, we have reached the end of our story. We can now consider the implications of what we have learned about the discipline of psychology and human potential. Theoretically, the multisystemic approach to the treatment of conduct disorder could be expanded to achieve at least the first 3 (of 30) articles of the Universal Declaration of Human Rights (https://www.un.org/en/universal-declaration-human-rights/) listed in Chapter 7 (repeated below). Our homes, schools, communities, and nations could collaborate to realize the following:

Article 1.

All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

Article 2.

Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty.

Article 3.

Everyone has the right to life, liberty and security of person.

Article 1 implies all humans are born with the same genome. In Chapter 1, we saw how the combination of our large frontal cortex and physical features permitting speech and the use of tools enabled us to transform the world and the human condition. Our abilities to imagine, communicate, collaborate, and manipulate seem unlimited in their application. They could be directed toward achieving the goals of the first three articles. As described in the first article, this requires communicating, collaborating, and manipulating in a spirit of brotherhood. Tragically, human pyramids of hate interfere with climbing Maslow’s pyramid of human needs. By emphasizing the Article 2 differences of race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status, we fail to recognize and fulfill the potential of our common humanity. Fulfilling our potential will only be achieved when we ensure the opportunity for everyone to satisfy the Article 3 human survival, interpersonal, self-esteem, and self-actualization needs. Then “What a wonderful world it would be!”

Afterword

Mighty oaks from tiny acorns grow. Human potential starts with our genome.

The acorn/mighty oak metaphor was used at the beginning of the first chapter to describe how the realization of human potential results from the impact of experience on our genome. Learning to use our abilities to imagine, communicate, and manipulate enabled us to understand and change our world. Technological developments enabled us to overcome the limitations of our genome by enhancing those basic abilities. Our senses are seemingly infinitely expanded by devices such as the telescope and microscope. Digitization has resulted in the soon-to-be-reached reality of every individual having the accumulated knowledge of our species at their fingertips, combined with the ability to instantaneously communicate with others, no matter the distance. Dobzhansky (1960) described humans as a supraorganic species. Collaboration enables us to magnify the potential that resides within individuals. It was not enough to imagine Manhattan, communicate with each other, and use tools; we had to work as a team to produce the final result. It would appear that the potential of our species is unlimited. In the 1700s, the scientists John Michell and Pierre-Simon Laplace imagined gravitational fields so powerful that not even light could escape. Such imagined objects later came to be called “black holes.” In April of 2019, the first image of a black hole appeared in newspapers throughout the world. More than 200 scientists on four continents collaborated, simultaneously using eight radio telescopes to produce the following image:


Figure 12.2 The supermassive black hole at the core of the supergiant elliptical galaxy Messier 87, with a mass ~7 billion times the Sun’s, as depicted in the first image released by the Event Horizon Telescope (10 April 2019).

Unfortunately, these wonderful abilities to imagine, communicate, manipulate and collaborate evolved under conditions where selfishness, greed, impulsivity, fear of strangers and aggression were all adaptive. Now, these tendencies threaten the very survival of our species. At this time, it is not clear that what appears to be our unlimited potential will be realized. It may be that surviving on earth will be the human species’ greatest challenge. Let us collectively imagine, communicate, manipulate and collaborate to protect ourselves and build a better earth while exploring the universe.

Chapter 11: Problems in the Development of Human Potential

Learning Objectives

  • Describe the different origins of psychiatry and clinical psychology
  • Describe differences between psychiatric (biological) and psychological (experiential) approaches to assessment and treatment of maladaptive behavior
  • Describe some of the behavioral excesses and deficits associated with DSM-5 major developmental (intellectual disability and autism spectrum) disorders
  • Describe examples of the extreme symptoms that characterize schizophrenia and the effects of drugs and experiential treatments in addressing behavioral excesses and deficits
  • Describe some of the symptoms of major depressive and anxiety disorders

Psychiatry and Clinical Psychology

Life Is Complicated – A Lot Can Go Wrong

Eat, survive, reproduce, and think about the meaning of life. Every human addresses these concerns. We have seen how our genes and experiences interact to enable us to survive, perceive, learn, think, develop, and adapt to the current physical and social environment. Humans have attained varying degrees of success in achieving their potential under extraordinarily different conditions throughout their history on the planet. This is the fourth and final chapter in the nature/nurture section of the book. We have seen how nature and nurture interact during different developmental stages to influence our individual personalities and interpersonal relationships.

Tragically, some fetuses inherit genes or encounter environmental conditions resulting in their not surviving until birth. For most of our history, the birth process itself was extremely dangerous and many infants did not survive. Until recently, humans lived under harsh geographic and climatic conditions, including life-threatening predators. Many perished during childhood and early adulthood. Thankfully, a sufficient number managed to survive long enough to reproduce and sustain our species.

The Nukak survived for thousands of years under some of the least habitable conditions on earth. Over the millennia, it is likely that some of the Nukak inherited characteristics that made it difficult for them to learn to eat, survive, or reproduce. Some children may have exhibited unusual, or annoying, or disturbing behaviors. Such children were dependent upon caretakers for greater investments in time and energy. As mentioned earlier, some Stone-Age nomadic tribes abandoned unwanted children. If caretakers were unsuccessful in efforts to modify problematic behaviors, these children might be subject to shunning, abandonment, or worse. These natural and social selection processes probably resulted in extremely hardy, low-maintenance tribe members.

With the advent of agriculture and animal domestication, human communities increased in size, from dozens, to hundreds, to thousands, to millions. Different social arrangements and institutions became necessary. Governments were formed to create and enforce consensually agreed upon norms for behavior. As a species, we became increasingly tolerant of individual differences and implemented laws to protect infants and children inheriting or developing medical and behavioral problems.

Psychiatry and Clinical Psychology

Some medical and behavioral problems are sufficiently serious to be considered illnesses or disorders. I have described psychology as the science of human potential. Achieving one’s potential is an adaptive process taking place within a specific environmental context. A mental illness or a psychological disorder is usually inferred when a person’s thoughts, emotions, or behavior appear to interfere with or prevent adapting to the current environment and fulfilling one’s potential.

Two different professions emerged to address the wide spectrum of problems that can interfere with adaptation or self-fulfillment. Because of their very different histories, traditions, explanatory models, and professional organizations, there has frequently been confusion and sometimes controversy concerning the appropriate boundaries and relationships between these two professions. Fortunately, both professions have evolved to the point that these boundaries are becoming increasingly clear and the relationships increasingly collaborative and synergistic.

The medical profession applies the findings of the basic biological sciences to conditions that threaten the health or vitality of individual animals. Veterinarians treat animals other than humans. As disciplines, including sciences, advance and acquire more knowledge, they typically fragment into specialized sub-disciplines. There are a number of such specializations for the medical treatment of humans. Some address problems with specific parts of human structure, such as nephrology (kidneys), ophthalmology (eyes), orthopedics (muscles and bones), and otolaryngology (ear, nose, and throat). Other medical specializations address problems with specific biological functions, such as cardiology (circulation), endocrinology (glandular functions), gastroenterology (digestion), and neurology (the nervous system). Some specializations are specific to certain times of one’s life: obstetrics for birth, pediatrics for childhood, and geriatrics for the aged. Speaking of the aged, Dr. Seuss (Geisel, 1986) wrote a book for adults entitled You’re Only Old Once! It describes the types of medical doctors one acquires as one gets older. I used to think the book was funny! Similar to medicine, professional psychology has evolved and developed specialized, applied sub-disciplines. The practices of some of these professions will be described in Chapter 12.

Psychiatry and clinical psychology are the specializations within professional medicine and psychology that address problems related to adaptation and personal fulfillment. Although both disciplines recognize the importance of nature and nurture in the understanding of human behavior, they have different emphases. Each specialization employs the schematic framework of its parent discipline. Psychiatry assumes the causes of and treatments for adaptive problems are based on biological mechanisms. Clinical psychology assumes that although problems may be based on nature/nurture interactions, effective treatment can be entirely experiential. One would expect each discipline to employ treatment methods exclusively based upon its underlying science. This has not always been the case, which is part of the reason for the confusion regarding the roles and boundaries of the two professions.

The Separate Histories of Psychiatry and Clinical Psychology

Although it is commonly understood that biology and psychology are separate disciplines, the separation between psychiatry and clinical psychology is less familiar. The contributions of the early schools of psychology (structuralism, functionalism, Gestalt psychology, and behaviorism) were reviewed in Chapter 1. Over time, the basic content areas comprising most of the chapters in this book developed and evolved. Contemporary approaches to clinical psychology apply the research findings from these content areas, particularly the principles of direct and indirect learning (Chapters 5 and 6). In general, psychological approaches involve assessing and providing experiences to improve an individual’s ability to adapt to their environmental conditions and realize their potential.

The word psychiatry, initially defined as “medical treatment of the soul,” was introduced by the physician Johann Reil in 1808 (Shorter, 1997). At that time, when families could not provide the necessary care, or when individuals displayed unusual, self-destructive, or dangerous behaviors, they were often placed in monasteries or jailed. As communities increased in size and the numbers of such individuals overwhelmed existing facilities, asylums were created to house them. The negligent, frequently abusive treatment of individuals in asylums led to this approach eventually being abandoned. Asylums were replaced in the latter half of the 19th century, for those who could afford them, by more fashionable spas (Shorter, 1997). Over the next century, two distinct psychiatric approaches emerged, one based on advances in the biological sciences, the other on Freud’s personality theory (see Chapters 9 and 12). Advances in the biological sciences and controversies regarding the scientific basis for Freudian theory and treatment led to the emergence of anti-psychiatry initiatives in the 1960s (Cooper, 1967; Szasz, 1960). The development of effective pharmacological treatments and the reluctance of insurance companies to pay for frequent “talk therapy” sessions resulted in the eventual rejection of the Freudian model in favor of a purely biological model of psychiatry (Shorter, 1997). A benefit of these developments has been increased clarity concerning the complementary roles played by psychiatry and clinical psychology. It is necessary to assess the appropriate balance of biological and experiential approaches to treatment for every client. We will now consider the historical, disease-model approach to the diagnosis of disorders implemented by the American Psychiatric Association. Toward the end of the chapter, an alternative, psychological approach to the assessment of maladaptive behavior will be described.

The Medical Model and DSM 5

Is psychiatry a medical enterprise concerned with treating diseases, or a humanistic enterprise concerned with helping persons with their personal problems? Psychiatry could be one or the other, but it cannot–despite the pretensions and protestations of psychiatrists–be both.

Thomas Szasz

A medical model treats adaptive disorders as though they are diseases; thus, the term “mental illness.” The medical model has been enormously successful in the treatment of biological disorders ranging from broken bones, to common colds, to infectious diseases, to heart disease and cancer. Thomas Szasz (1960), a psychiatrist, wrote an extremely controversial, provocative, and influential book entitled The Myth of Mental Illness. He argued that illnesses result from biological malfunctions but that behavioral disorders do not. Szasz considered the labels for different mental illnesses to be pseudo-explanations: labels for the behaviors they purport to explain. That is, the different labels used for mental illnesses are defined exclusively by a constellation of behaviors, as opposed to an underlying biological pathology. Recalling the example from Chapter 1, the disease term “influenza” stands for the relationship between a specific pathogen (a virus) and a syndrome of symptoms. In comparison, the disease term “schizophrenia” is defined exclusively as a constellation of behaviors (i.e., on the dependent variable side). No cause (i.e., independent variable) is specified. Remember the opening line of this book: there are things I think I know, things I think I might know, and things I know I do not know. Szasz is telling us that we know less than we think when we are given a mental illness label.

If one approaches a disorder as an illness, the function of assessment is to determine a medical diagnosis. A useful diagnosis provides information concerning the etiology (i.e., initial cause and/or maintaining conditions), prognosis (i.e., course of the disorder in the absence of treatment), and treatment of a biological syndrome (i.e., a collection of symptoms occurring together). Many disease names (e.g., influenza, malaria, polio) provide information about the underlying mechanisms that cause and sustain a syndrome of specific symptoms. However, this is not always the case, even for biological disorders. For example, hypertension is defined exclusively by the magnitude of one’s blood pressure readings. Despite the fact that the term does not explain the elevated pressure, it is still useful, since it provides information concerning prognosis and treatment. If hypertension is left untreated, blood pressure remains elevated. Treatment usually proceeds from the least invasive options to increasingly invasive ones. It might begin with the recommendation to decrease sodium (e.g., salt) in the diet and to increase exercise. Psychiatric illness labels are likewise defined exclusively on the dependent variable (in this instance, behavioral) side. One might be diagnosed as schizophrenic based on the report of hallucinations (see below). Even though the term “schizophrenia” provides no information about the cause(s) of the hallucinations, it does provide information about prognosis and treatment: hallucinations will continue in the absence of treatment, and anti-psychotic medications will probably help.

The American Psychiatric Association (2013) compiles a comprehensive listing of mental illness disease labels and definitions (i.e., criteria) in the Diagnostic and Statistical Manual of Mental Disorders (DSM). The DSM, published initially in 1952, has undergone periodic revision: DSM-II in 1968, DSM-III in 1980, DSM-III-R (revised) in 1987, DSM-IV in 1994, DSM-IV-TR (text revision) in 2000, and DSM-5 in 2013. Starting with DSM-III, the Freudian psychoanalytic influence was reduced and an attempt was made to establish consistency with the World Health Organization publication, the International Statistical Classification of Diseases and Related Health Problems. Recognition of the overlap between psychiatry and psychology and of the limitations of the illness labels is indicated in the DSM-III Task Force quote, “Each of the mental disorders is conceptualized as a clinically significant behavioral or psychological syndrome.” Some considered DSM-III to represent a significant advance over the prior DSMs (Mayes & Horwitz, 2005; Wilson, 1993). It became the international standard for psychiatric classification and for such practical concerns as informing legal decisions (e.g., whether an individual is competent to stand trial) and the determination of health insurance payments.

DSM-5 (American Psychiatric Association, 2013) lists the following types of psychiatric disorders:

  • Neurodevelopmental disorders
  • Schizophrenia spectrum and other psychotic disorders
  • Bipolar and related disorders
  • Depressive disorders
  • Anxiety disorders
  • Obsessive-compulsive and related disorders
  • Trauma- and stressor-related disorders
  • Dissociative disorders
  • Somatic symptom and related disorders
  • Feeding and eating disorders
  • Sleep–wake disorders
  • Sexual dysfunctions
  • Gender dysphoria
  • Disruptive, impulse-control, and conduct disorders
  • Substance-related and addictive disorders
  • Neurocognitive disorders
  • Paraphilic disorders
  • Personality disorders

The next sections provide summaries of these major DSM-5 listings, including suspected causes and current treatment approaches.

DSM-5 – Neurodevelopmental and Schizophrenia Spectrum Disorders

The diagnosis of neurodevelopmental disorders is based on clinical and behavioral observations made during childhood and adolescence. These disorders are suspected to result from impairments in the brain or central nervous system caused by heredity or by problems occurring during fetal development. Debilitating neurodevelopmental disorders with known genetic causes include Down syndrome (an intellectual disability disorder) and fragile X syndrome (a genetic cause of autism spectrum disorder). Down syndrome (Figure 11.1) occurs when a child inherits a fragment or an entire third copy of the 21st chromosome (Figure 11.2).


Figures 11.1 and 11.2 Chromosome 21 and Down’s syndrome.

Fragile-X syndrome results when there is a mutation of a known specific gene on the X chromosome (Santoro, Bray, & Warren, 2012). The following videos provide information regarding Fragile-X and autism spectrum disorders in young children.

The goal of psychiatry, to determine the biological mechanisms underlying DSM disorders, is gradually being realized. The initiatives in neuroscience described in Chapter 2 promise to speed up the acquisition of such knowledge. For example, it has recently been reported that autism may result from patches of irregular cells forming in the frontal and temporal cortices during fetal development (Stoner, Chow, & Boyle, et al., 2014). These are the parts of the brain involved in complex social relationships and language, both of which are problematic in those suffering from autism spectrum disorders. The parents of 11 autistic children who died donated their brain tissue for analysis. Recently developed imaging techniques detected irregular patches of cells in these areas in 10 of the 11 brains. In comparison, similar patches were observed in the brain tissue of these areas for only 1 of 11 children without autism. No such patches were discovered in the visual cortex for either sample. This is consistent with the fact that autistic individuals do not suffer from visual deficits (Stoner, Chow, & Boyle, et al., 2014). Such findings increase hope that advances in our understanding of the biological mechanisms underlying psychiatric disorders will result in more targeted and effective treatments in the future. At present, these disorders substantially impact a child’s potential intellectual, social, and vocational achievements.

DSM-5 incorporates several significant changes from previous editions. One important change was basing the diagnosis of intellectual developmental disorder on deficits in intellectual (e.g., language, reading, math), social (e.g., quality of friendships, interpersonal skills, empathy), and practical (e.g., personal grooming, time management, money management) functioning. In the past, scoring below 70 on an IQ test was the exclusive criterion. Another significant change in DSM-5 is the collapsing of categories that had historically been sub-divided into types. For example, both autism and schizophrenia are now considered “spectrum” disorders requiring determination of severity rather than type. Figure 11.3 portrays the range of debilitation in autism spectrum disorders. Those diagnosed with Asperger’s syndrome or pervasive developmental disorder in prior DSMs are considered to be on the “mild” part of the spectrum. Those previously diagnosed as autistic are considered to be on the severe end of the spectrum, based on the nature and extent of behavioral symptomology and learning disability.


Figure 11.3 Autism spectrum disorders.

The previous sub-types of autism and schizophrenia were collapsed into spectrum disorders because of the poor reliability of the sub-type diagnoses. Reliability refers to the likelihood that two psychiatrists arrive at the same diagnosis for the same individual. For example, if two physicians took your temperature, they should both obtain the same reading; otherwise, the thermometer would have no value. It has been demonstrated that psychiatrists are reliable in their diagnoses of the generic disorders (e.g., autism or schizophrenia) but not the different sub-types that had been listed in prior editions of the DSM (e.g., Asperger’s syndrome, PDD, etc.).
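Diagnostic agreement of this kind is commonly quantified with Cohen’s kappa, which corrects raw percentage agreement for the agreement two raters would reach by chance alone. The sketch below is illustrative only: the diagnoses are hypothetical, not data from any study cited in this chapter.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of diagnostic labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of cases where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters independently pick the same label.
    expected = sum((counts_a[label] / n) * (counts_b[label] / n)
                   for label in counts_a)
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses of ten individuals by two psychiatrists.
a = ["autism", "autism", "other", "autism", "other",
     "other", "autism", "other", "autism", "other"]
b = ["autism", "autism", "other", "other", "other",
     "other", "autism", "other", "autism", "autism"]
print(round(cohens_kappa(a, b), 2))  # 0.6 (raw agreement is 0.8)
```

A kappa near 1 indicates strong agreement beyond chance; values near 0 indicate agreement no better than chance, which is roughly the situation the DSM-5 revisers faced with the old sub-type labels.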

Throughout this book we have seen the utility of the scientific method in establishing cause-effect relationships in psychology. We have come a long way in understanding how nature and nurture interact to influence feeling, thought, and behavior. Just as success in the basic sciences of physics and chemistry led to technologies enabling transformation of our environmental conditions, success in psychology has resulted in technologies of behavior change. For decades, practice in the helping professions, including psychology, was based on tradition, anecdotal evidence, and case studies. In the early 1990s, an approach to clinical practice based on application of the scientific method, known as evidence-based practice, emerged in medicine, psychology, education, nursing, and social work (Hjørland, 2011). The discipline of psychology considers only the results of experimentally controlled outcome studies that include a plausible baseline control condition to be credible evidence (Chambless & Hollon, 1998). The APA issued initial recommendations and later established a Task Force describing psychology’s commitment to evidence-based practice (American Psychological Association, 1995; APA Presidential Task Force on Evidence-Based Practice, 2006). The APA Division of Clinical Psychology maintains a website listing current evidence-based treatments for behavioral disorders (http://www.div12.org/PsychologicalTreatments/index.html). It is an excellent resource for determining the current state of the art in clinical psychology.

Currently, there is no known effective medical treatment for autism spectrum disorders. It is hoped that progress in the neurosciences will produce effective interventions in the future. Until we are able to address the underlying biological mechanisms for the disorder(s), the best we can do is to try to address the behavioral symptoms. The learning-based treatment known as applied behavior analysis (ABA) has been successful in this regard. This approach will be described in depth in the following chapter. For now, it is important to note that even if a behavioral disorder stems from biological mechanisms, it can still be successfully treated with non-biological, learning-based procedures. The reverse may also be true. That is, in some instances it may be possible to address non-biological behavioral disorders medically, for example with drugs.

The most commonly diagnosed DSM-5 neurodevelopmental disorder is attention deficit hyperactivity disorder (ADHD), affecting approximately six percent of children worldwide (Wilcutt, 2012). ADHD is diagnosed when instances of attention-related problems (e.g., distractibility, daydreaming) occur in multiple settings. Inattentive children tend to have more difficulty in school than at home or with friends, whereas the reverse is true for impulsive children, who benefit from structure (Biederman, 1998). ADHD is diagnosed three times as often in boys as in girls, a disparity that has generated controversy (Sciutto, Nolfi, & Bluhm, 2004). Adolescents and adults frequently learn to cope on their own (Gentile, Atiq, & Gillig, 2004).

ADHD is our first example of the concern expressed by Szasz regarding the appropriateness of applying the medical model to behaviorally-defined problems. Attentional problems are inferred from distractibility, inability to maintain focus on a single task, becoming bored with non-pleasurable activities, daydreaming, or not paying attention to instructions. Hyperactivity can be inferred from fidgeting in one’s seat, non-stop talking, blurting things out, jumping up and down, or impatience. All of these examples of attentional and hyperactivity problems are characteristic of practically all children. There is a saying that “if the only tool you have is a hammer, every problem that comes along looks like a nail.” Psychiatrists are trained as physicians to diagnose and treat illnesses. Treatment usually consists of prescribing medication. Many question the validity of considering ADHD a psychiatric disorder and the ethics of prescribing medications for so many children (Mayes, Bagwell, & Erkulwater, 2008; Schonwald & Lechner, 2006; Singh, 2008). Szasz (2001, p. 212) concluded that ADHD “was invented and not discovered.”

Two comprehensive literature reviews of experimental studies found learning-based treatment effective with children diagnosed with ADHD (Fabiano, Pelham, Coles, Gnagy, Chronis-Tuscano, & O’Connor, 2009; Pelham & Fabiano, 2008). A multi-faceted approach including parent training, teacher-parent classroom intervention, and an individualized program addressing independent work habits and social skills has demonstrated significant improvements in second- through fifth-graders diagnosed with ADHD (Pfiffner, Villodas, Kaiser, Rooney, & McBurnett, 2013). Learning-based approaches to improving school performance will be described in more detail in the next chapter as we consider the role of professional psychologists in enabling individuals to achieve their potential in different environments.

Schizophrenia Spectrum Disorders

Schizophrenia is probably the DSM diagnosis most resembling the stereotype of “mental illness.” It is a disabling disorder characterized by severe cognitive and emotional disturbances. Schizophrenia is most likely to first appear late in adolescence or in early adulthood (van Os & Kapur, 2009). The symptoms are unusual and often bizarre. They may include delusions, hallucinations, disorganized speech, catatonic behavior, or flat affect (American Psychiatric Association, 2013). Delusions are strongly held beliefs having no basis in fact. One common delusion is that one’s behavior is being controlled by external forces (e.g., electric wires or “aliens”). Another is the belief that one has exceptional qualities or talents (i.e., delusions of grandeur). Hallucinations are inferred when an individual behaves as though a non-apparent event is occurring; for example, speaking to someone who is not present. “Word salad,” in which words are spoken in a meaningless fashion, is a common form of disorganized speech. Catatonia is a state of immobilization that can occur for a variety of reasons, including stroke, infection, or withdrawal from addictive substances. Sometimes an individual diagnosed with schizophrenia assumes the posture of a “waxy figure,” remaining still unless manipulated by another person. Flat affect refers to a lack of emotional expression; the individual does not appear to experience emotions appropriate to the situation. Schizophrenia is commonly misunderstood as referring to a “split personality.” What had been diagnosed as multiple personality disorder in previous editions of the DSM is now termed dissociative identity disorder, and will be discussed below.

Schizophrenia is a chronic disorder, with between 80 and 90 percent of patients retaining the diagnosis over a ten-year period (Haahr, Friis, Larsen, Melle, Johannessen, Opjordsmoen, Simonsen, Rund, Vaglum, & McGlashan, 2008). In extreme forms, schizophrenia can be debilitating. The differences appear to reflect influences of both nature and nurture. The risk of developing schizophrenia increases as a function of the percentage of genes shared (nature) as well as similarity of environment (nurture). Identical twins are almost three times as likely to develop schizophrenia as fraternal twins. Fraternal twins as well as ordinary siblings share half their genes. Fraternal twins are likely to have more similar environments than ordinary siblings and are twice as likely to develop schizophrenia (Gottesman, 1991). Unlike Down and fragile X syndromes, several genes are thought to be involved in schizophrenia (Picchioni & Murray, 2007).

In 1955, approximately 550,000 Americans were housed in public psychiatric institutions. Development of anti-psychotic medications and implementation of federally funded treatment programs resulted in a dramatic reduction in this population to 100,000 by 1985 (Torrey, 1991). An unfortunate byproduct of this deinstitutionalization was that many patients diagnosed as schizophrenic were imprisoned or left homeless, without treatment (Eisenberg & Guttmacher, 2010).

The psychological model of maladaptive behavior, described later, distinguishes between behavioral excesses and behavioral deficits. A similar distinction is often made by psychiatrists between positive (i.e., excesses) and negative (i.e., deficits) symptoms of schizophrenia. Reports of delusions, hallucinations, or disordered speech are examples of schizophrenic behavioral excesses (positive symptoms). Behavioral deficits can include flat affect (i.e., little emotionality), poor interpersonal skills, and lack of motivation to succeed. Although behavioral deficits may be less disturbing than behavioral excesses, they actually interfere to a greater extent with daily functioning and are less responsive to medication (Smith, Weston, & Lieberman, 2010; Velligan, Mahurin, Diamond, et al., 1997). Medication suppresses delusions and hallucinations but cannot teach interpersonal skills or motivate an individual to achieve their potential. Although their benefits are largely limited to positive symptoms, drugs remain an effective approach to treating those diagnosed as schizophrenic (National Collaborating Centre for Mental Health, 2009).

Prior to deinstitutionalization, token economy procedures based on direct and indirect learning principles were successfully applied to schizophrenic populations within large psychiatric facilities (Ayllon & Azrin, 1968; Kazdin, 1977; Paul & Lentz, 1978). Token economies establish contingencies between a tangible generalized reinforcer (i.e., a token) and desirable behaviors. Usually, a type of “store” is established, permitting the exchange of tokens for desirable items or opportunities to engage in pleasurable activities (Martin & Pear, 2011, pp. 305-319). After the drastic decline in inpatients resulting from use of anti-psychotic medications, there was a corresponding decline in the need for token economies in institutional settings. Still, additional treatment was necessary to address the behavioral and motivational deficits that typically remained after patients were released.

In a seminal study, therapy in which families were taught to manage the symptoms of schizophrenia (e.g., by monitoring compliance with taking medications, reducing stress, providing support) combined with medication, was shown to reduce relapse rates beyond that attained with medication alone (Goldstein, Rodnick, Evans, May, & Steinberg, 1978). Medication alone resulted in a 25% reduction in relapse in comparison to the placebo control and the addition of family therapy reduced relapse by an additional 25% (Dixon, Adams, & Lucksted, 2000; Dixon & Lehman, 1995).

Comprehensive reviews of controlled outcome studies evaluating cognitive-behavioral and family intervention treatments in which schizophrenics were taught to re-evaluate their symptoms, develop coping strategies, and engage in reality testing exercises, concluded these approaches were effective for treating negative as well as positive symptoms of schizophrenia (Jauhar, McKenna, Radua, Fung, Salvador, & Laws, 2014; Pilling, Bebbington, Kuipers, Garety, Geddes, Orbach, & Morgan, 2002; Rector & Beck, 2001; Turkington, Dudley, Warman, & Beck, 2004; Wykes, Steel, Everitt, & Tarrier, 2008). In addition, it has been found that fewer drop out of treatment, or relapse afterward, with the learning-based treatment approaches in comparison to when treated exclusively with anti-psychotic medications (Gould, Mueser, Bolton, et al., 2001; Rathod & Kingdon, 2010).

An example of the synergy between psychology and psychiatry is the relationship between basic research in cognition (see Chapter 7) and our current understanding of the nature of the intellectual deficits characterizing different DSM disorders. For example, the percentage of normal functioning for verbal memory, short-term (working) memory, psychomotor speed and coordination, processing speed, verbal fluency, and executive functioning was compared for low- and high-performing schizophrenics (Bechi, Spangaro, Agostoni, Bosinelli, Buonocore, Bianchi, Cocchi, Guglielmino, et al., 2019).

DSM-5 – Bipolar, Depressive, and Anxiety Disorders

Bipolar and Related Disorders

The importance of nature (heredity and biology) in neurodevelopmental and schizophrenia spectrum disorders is readily apparent. Neurodevelopmental disorders appear too early in life for nurture to have a major influence, and the symptoms can be physical as well as behavioral. For example, Down syndrome children have distinct anatomical features making them easy to identify. Autistic (including many fragile X) children and schizophrenic adults are usually physically indistinguishable from their peers; however, their defining symptoms are extreme and easily identifiable. Individuals with these diagnoses appear to differ from “normal” individuals qualitatively rather than quantitatively. Although it has been proposed in the past (cf. Kanner, 1943), there is no evidence to suggest that faulty parenting is the cause of autism or schizophrenia. Rather, the evidence supports attributing these severe disorders to an underlying biological problem (Centers for Disease Control, 2011, p. 7).

Bipolar disorders are not as obviously influenced by hereditary and biological factors as neurodevelopmental and schizophrenia spectrum disorders. Depending upon the severity, those diagnosed with bipolar disorders may appear to differ from others in the extremity, rather than in the type, of behavior. The defining characteristic of bipolar disorder is extreme excitability and irritability, referred to as mania. Extreme mania can result in risky life decisions and sleep disorders (Beentjes, Goossens, & Poslawsky, 2012). Figure 11.4 sketches mood changes over a two-month period for individuals displaying the normal pattern, unipolar depression, bipolar types 1 and 2, and cyclothymia. The average person demonstrates relatively mild highs and lows. Unipolar depression is characterized by extreme lows. Bipolar 1 includes extended periods of extreme highs and lows, whereas the high is not as extreme in bipolar 2. Cyclothymia is characterized by less severe and more frequent mood swings.

Figure 11.4 Bipolar disorder

Everyone experiences “ups” and “downs” in life. Bipolar disorders involve more extreme moods and more frequent mood swings. One’s emotions usually reflect ongoing events in everyday life. The ups and downs of individuals diagnosed as bipolar may be episodic and not dependent upon environmental events. The episodes can be extreme and last as long as six months (Titmarsh, 2013). Evidence suggests there is a genetic component to bipolar disorder. First-degree relatives (i.e., parents, offspring, and siblings) are ten times as likely to develop the disorder as the general population (Barnett & Smoller, 2009). Several genes appear mildly to moderately involved (Kerner, 2014). Pharmacologic treatment is often prescribed. Lithium thus far appears to be the most effective drug, particularly for reducing the frequency and intensity of manic episodes (Poolsup, Li Wan Po, & de Oliveira, 2000).

Depressive Disorders

Into every life a little rain must fall.

It is normal to experience sadness and different degrees of depression. Just as manic episodes can be extreme, the same is true for depressive episodes. Depression can be long-lasting and severe in its impact upon everyday functioning. The following are the DSM-5 diagnostic criteria for major depressive disorder:

At least five of the following symptoms have been present during the same 2-week period and represent a change from previous functioning: at least one of the symptoms is either 1) depressed mood or 2) loss of interest or pleasure.

  1. Depressed mood most of the day, nearly every day, as indicated either by subjective report (e.g., feels sad or empty) or observation made by others (e.g., appears tearful)
  2. Markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day (as indicated either by subjective account or observation made by others)
  3. Significant weight loss when not dieting or weight gain (e.g., a change of more than 5% of body weight in a month), or decrease or increase in appetite nearly every day
  4. Insomnia or hypersomnia nearly every day
  5. Psychomotor agitation or retardation nearly every day (observable by others, not merely subjective feelings of restlessness or being slowed down)
  6. Fatigue or loss of energy nearly every day
  7. Feelings of worthlessness or excessive or inappropriate guilt (which may be delusional) nearly every day (not merely self-reproach or guilt about being sick)
  8. Diminished ability to think or concentrate, or indecisiveness, nearly every day (either by subjective account or as observed by others)
  9. Recurrent thoughts of death (not just fear of dying), recurrent suicidal ideation without a specific plan, or a suicide attempt or specific plan for committing suicide (American Psychiatric Association, 2013).
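The counting rule embedded in these criteria, at least five of the nine symptoms present during the same two-week period, at least one of which is depressed mood or loss of interest/pleasure, can be made concrete with a short sketch. The symptom labels below are simplified stand-ins for the DSM wording, and the function deliberately ignores the duration, distress, and exclusion requirements a clinician would also apply.

```python
# Core symptoms: at least one must be present (criteria 1 and 2 above).
CORE = {"depressed_mood", "loss_of_interest"}

# All nine symptom categories, using shorthand labels for criteria 1-9.
ALL_SYMPTOMS = CORE | {
    "weight_or_appetite_change", "sleep_disturbance", "psychomotor_change",
    "fatigue", "worthlessness_guilt", "poor_concentration",
    "thoughts_of_death",
}

def meets_symptom_criteria(reported):
    """Return True if a set of symptom labels satisfies the counting rule.

    Simplification: assumes all reported symptoms occurred during the same
    two-week period and represent a change from previous functioning.
    """
    reported = reported & ALL_SYMPTOMS  # ignore unrecognized labels
    return len(reported) >= 5 and bool(reported & CORE)

# Five symptoms including a core symptom: counting rule satisfied.
print(meets_symptom_criteria(
    {"depressed_mood", "fatigue", "sleep_disturbance",
     "poor_concentration", "worthlessness_guilt"}))  # True

# Five symptoms but neither core symptom: counting rule not satisfied.
print(meets_symptom_criteria(
    {"fatigue", "sleep_disturbance", "weight_or_appetite_change",
     "poor_concentration", "worthlessness_guilt"}))  # False
```

The second example illustrates why the rule has two parts: a person could report many somatic symptoms, yet without depressed mood or anhedonia the major depressive disorder criteria are not met.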

Due to its high rate of occurrence, major depressive disorder is often referred to as “the common cold of mental illness” (Seligman, 1975). A two-to-one female/male ratio in the incidence of depression has been found across nationality, culture, and ethnicity (Nolen-Hoeksema, 1990). In a review of research addressing these gender differences, Nolen-Hoeksema (2001) cited the higher incidence of the following factors for women: sexual assault during childhood, poverty, and greater responsibilities for child and parental care. She also describes differences in the characteristic ways males and females respond to stressful or disappointing situations. Women are more likely to maintain conscious focus on upsetting events (i.e., ruminate), whereas men are more likely to distract themselves or take action to address the situation (Nolen-Hoeksema, 2001). In the next chapter, we will describe the thinking patterns that are characteristic of those diagnosed with major depressive disorder and the cognitive-behavioral psychological treatments designed to modify these self-defeating patterns. Cognitive-behavioral treatments have been found to be as effective as pharmacological treatments for the short-term treatment of depression and more effective in maintaining treatment effects once drugs are withdrawn (Dobson, 1989). Anti-depressive medications may be prescribed for long-lasting episodes or when there are signs of suicidal thinking.

There is consensus among psychologists and psychiatrists that a nature/nurture model is necessary for understanding depression. The popular diathesis-stress model proposes that individuals vary in their susceptibility to depression based on interactions between their genetics and experiences, particularly during childhood (National Institute of Mental Health, 1999). In support of this model, it has been found that variation in the 5-HTT gene influencing the neurotransmitter serotonin increases the likelihood of becoming depressed after experiencing stressful life events (Caspi, Sugden, & Moffitt, 2003). The most popular anti-depressant medications are selective serotonin reuptake inhibitors (SSRIs), which affect the balance of the neurotransmitters serotonin, dopamine, and norepinephrine (Nutt, 2008). By inhibiting the reuptake of serotonin, SSRIs increase its level in the synaptic cleft, enabling more of it to bind to receptors on the postsynaptic neuron (see Figure 11.5).


Figure 11.5 How SSRIs work.

Anxiety Disorders

The transition from high school to college can be very stressful. It requires adapting to a different environment with a host of new responsibilities. If you are not commuting from home, it may be the first time in your life you are living on your own. Your parents are not waking you up in the morning and making sure you are on time for all your scheduled activities. They are not preparing your meals or checking to make sure you did your homework. You probably are experiencing more autonomy and perhaps more problems to solve on your own than ever before.

As we saw in Chapter 5, adaptive learning involves acquiring the ability to predict and, where possible, control environmental events. Once people are able to predict events, they are no longer surprised or anxious. Anxiety is the name for the feeling one experiences in anticipation of a possible aversive event. When extreme, anxiety can be accompanied by activation of the autonomic fight-or-flight response, including an increased heart rate and rapid breathing. Once people become confident they can control events, they no longer feel anxious and these physical responses subside. Do you remember your first days on campus when everything was new? How about your first exams? Did it matter what courses your exams were in, or did you feel the same about all of them? How did you feel about approaching and speaking to your professors? Do you still feel the same way? I hope you have successfully adjusted to the rhythms and responsibilities of college life. If so, you probably feel a lot less anxious than you did those first days on campus.

The major anxiety disorders listed in DSM-5 include the following:

  • Generalized Anxiety Disorder
  • Specific Phobia
  • Social Anxiety Disorder (Social Phobia)
  • Panic Disorder

Just as we all experience ups and downs, we all experience anxiety and fear. Once again, only when it reaches the point where it interferes with our daily functioning and ability to enjoy life on an ongoing basis does extreme anxiety or fear become diagnosed as a psychiatric condition. Generalized anxiety disorder occurs across many situations and is chronic. Anxiety and fear are modulated by the primitive part of the brain called the limbic system, described in Chapter 2. Generalized anxiety disorder is thought to be the result of faulty processing of fear between the amygdala and the hypothalamus, brain stem, and cerebellum (Etkin, Prater, Schatzberg, Menon, & Greicius, 2009). These areas are involved in determining the threat level of a stimulus and relaying the information to the cortex for higher-level processing and formulation of a response. Functional MRI imaging was conducted on normal subjects and those suffering from generalized anxiety disorder. The connections between the amygdala and other brain areas were significantly less distinct in those suffering from the disorder, whereas there was increased cortical connectivity. The authors concluded that these results supported a model of generalized anxiety disorder in which those affected by a malfunctioning amygdala were forced to compensate at higher levels of the cortex (Etkin, Prater, Schatzberg, Menon, & Greicius, 2009). Consistent with this cognitive interpretation of generalized anxiety disorder, as with depression, an extensive review of experimental research evaluating cognitive-behavioral and pharmacologic treatment approaches found them comparable in their short-term effects. Cognitive-behavioral procedures, however, once again demonstrated long-lasting effects, whereas pharmacologic improvements disappeared once medication was terminated (Gould, Otto, Pollack, & Yap, 1997).

Unlike generalized anxiety disorder, which occurs in many situations, the diagnosis of specific phobia applies to an extreme and irrational fear occurring in a specific situation. Common examples of easily acquired fears related to our evolutionary history include spiders, snakes, heights, open spaces, confined spaces, strangers, and dead things (Seligman, 1971). Social anxiety disorder (social phobia) refers to extreme and irrational anxiety related to real or imagined situations involving other people. It often involves circumstances in which one is being assessed or judged (e.g., in school, at a party, during interviews). As described in Chapter 5, desensitization and reality-testing procedures are very effective in treating anxiety and fear disorders. Self-help techniques based on cognitive-behavioral strategies have been found effective for some individuals (Lewis, Pearce, & Bisson, 2012).

Panic attacks are unpredictable and can be debilitating. Physical symptoms may include a rapid pulse, shortness of breath, perspiration, and trembling. Symptoms can be so severe as to be interpreted as a heart attack. The person can feel as though they are losing control, going crazy, or dying. Pharmacologic treatment and cognitive behavioral techniques have both been found to be more effective than placebos for the treatment of panic disorders, with the combination producing the best results (van Apeldoorn, van Hout, Mersch, Huisman, Slaap, Hale, & den Boer, 2008).

Obsessive-Compulsive and Related Disorders

Obsessions are thoughts that repeatedly intrude upon one’s conscious experience. Compulsions are behaviors one feels the need to repeat despite their interfering with the achievement of other tasks. Historically, there has been confusion regarding the different ways in which these terms are used in DSM diagnoses. The distinction is still made in DSM-5 between obsessive-compulsive disorder (OCD) and obsessive-compulsive personality disorder (OCPD); the latter will be treated separately under personality disorders. OCD and OCPD can include repetitious behaviors such as hoarding or placing things in neat piles. The OCD individual recognizes these behaviors as problematic, whereas the OCPD individual sees them as appropriate and desirable. OCD is sometimes considered an anxiety disorder, with the ritualistic behaviors maintained by stress-reduction (i.e., negative reinforcement). Similar to depression and other anxiety-related disorders, OCD has been successfully treated with SSRIs. The cognitive-behavioral technique of exposure and response prevention has been found highly effective in the treatment of OCD (Huppert & Roth, 2003). For example, if a person constantly checks to see if a door is locked, they are permitted to check only once (i.e., they are exposed to the lock and prevented from making the response a second time). A major research study found that exposure and response prevention was as effective alone as when it was combined with medication for OCD (Foa, Liebowitz, Kozak, Davies, Campeas, Franklin, Huppert, Kjernisted, et al., 2005).

Trauma- and Stressor-Related Disorders

Posttraumatic stress disorder (PTSD) may be acquired through a direct or indirect learning experience. One can experience a traumatic event such as sexual assault, severe injury, or threat of death; or one can observe any of these events occur to someone else, particularly a close friend or relative. Diagnosis of PTSD is usually made when a person reports experiencing recurrent flashbacks of a traumatic event more than a month after it happened. The person may avoid talking about or approaching any reminder of the event (American Psychiatric Association, 2013).

Similar to major depressive disorder, a diathesis-stress model appears to apply, there being evidence for individual differences in susceptibility to PTSD. In this instance, genes affecting the neurotransmitter GABA were found to be related to the likelihood that individuals experiencing severe trauma as children were diagnosed with PTSD as adults (Skelton, Ressler, Norrholm, Jovanovic, & Bradley-Davino, 2012). Similar findings were obtained with adults who were abused as children: those having a particular gene were more likely to later develop PTSD (Binder, Bradley, & Liu, 2008). Cognitive-behavior therapy is considered the treatment of choice for PTSD by the United States Department of Defense (Hassija & Gray, 2007) and Department of Veterans Affairs (Karlin, Ruzek, Chard, Eftekhari, Monson, Hembree, Resick, Foa, & Patricia, 2010).

DSM-5 – Other Disorders

Dissociative Disorders

Dissociative disorders have captured the public’s imagination as the result of several popular books and movies. The disorders are characterized by a disconnect between an individual’s immediate experience and memory of the past. The major dissociative disorders listed in DSM-5 include dissociative identity disorder, dissociative amnesia, and depersonalization disorder.

Dissociative identity disorder is characterized by two or more distinct, integrated personalities appearing at different times. Each personality can exist in isolation from the others, with little or no memory of the others’ existence. Dissociative amnesia is usually a temporary disorder affecting episodic (i.e., autobiographical) memory; it is the most common of the dissociative disorders. Depersonalization disorder is often described as an “out-of-body experience”: you realize it is not literally true, but feel as though you are watching yourself from the outside.

Until the transition to empirically validated procedures in medicine, Freud’s psychodynamic model was prevalent in psychiatry; it remains influential in clinical psychology. Multiple personalities, amnesia, and out-of-body experiences make wonderful plot lines. The Bird’s Nest (Jackson, 1954), The Three Faces of Eve (Thigpen & Cleckley, 1957), and Sybil (Schreiber, 1973) describe the lives of individuals who fit the DSM criteria for dissociative identity disorder. These books and the movies they spawned (Lizzie, in the case of The Bird’s Nest) were released when the Freudian influence on psychiatry was at its peak. Each told the story of relentless and insightful psychiatrists probing the early childhood experiences of individuals appearing to have different personalities at different times. Eventually, some source of childhood abuse was identified, the person was “cured,” and lived happily ever after.

At best, Freud and others, basing their explanations of the causes of maladaptive behavior on uncontrolled case-history evidence, offered hypotheses to be tested. No one ever tested Freud’s oedipal-conflict interpretation of Little Hans’s fear of horses by having a father threaten a child to see if the child projected fear onto another animal. In Chapter 5, we described Watson’s demonstration of the classical conditioning of a fear response to white rats in Little Albert. Watson felt that known, basic learning principles could account for fear acquisition, and he questioned the plausibility of Freud’s interpretation of the development of Little Hans’s fear. Direct and indirect classical conditioning procedures have been found to be effective in producing and eliminating anxiety, fears, and phobias.

Despite the convincing and exciting portrayals of dissociative disorder patients and the therapeutic process, there is now reason to question the Freudian assumptions underlying the narratives. After a comprehensive literature review, it was concluded that there were no credible data supporting the conclusion that dissociative disorders or amnesia result from childhood trauma as opposed to brain injury or disease (Kihlstrom, 2005). The woman upon whom Sybil was based admitted that she had faked the symptoms, and analyses of the transcripts of her therapeutic sessions resulted in a very different interpretation of her case and of dissociative identity disorders in general (Lynn & Deming, 2010). Recent experimental evidence suggests that sleep deprivation may be an underlying cause of dissociative symptoms: extreme dissociative symptoms can result from a single night of sleep deprivation (Giesbrecht, Smeets, Leppink, Jelicic, & Merckelbach, 2007). In another study, half of the patients meeting the criteria for dissociative disorders improved after normalization of their sleep patterns (van der Kloet, Giesbrecht, Lynn, Merckelbach, & de Zutter, 2012; Lynn, Berg, Lilienfeld, Merckelbach, Giesbrecht, Accardi, & Cleere, 2012). It may not be fascinating or provocative, but an effective way to avoid or treat dissociative disorders may be to get a good night’s sleep.

Somatic Symptom and Related Disorders

The diagnosis of somatic symptom disorder is based on the presence of severe medical symptoms (e.g., blindness, loss of the ability to move a hand, etc.) with no indication of a biological cause. The diagnosis of illness anxiety disorder is based on debilitating anxiety resulting from real or imagined health concerns. Health becomes the focus of one’s existence, often resulting in long periods of time spent conducting research on a symptom or disease. In the past, such symptoms were described as psychosomatic or hypochondriasis, but these terms are now considered trivializing and demeaning and have been dropped from recent DSM editions. Basing a diagnosis on the absence of biological findings is recognized as problematic (Reynolds, 2012). DSM-5 therefore emphasizes the presence of behavioral symptoms, such as repeated verbalizations of obsessive thinking about health concerns or excessive time spent researching medical concerns. As with anxiety and depressive disorders, the most effective treatment for somatic symptom disorders is cognitive-behavior therapy; medications may be prescribed in extreme or treatment-resistant cases (Sharma & Manjula, 2013).

Feeding and Eating Disorders

The developmental distinction between feeding and eating disorders has been relaxed somewhat in DSM-5. The feeding disorders rumination (the regurgitation of food after consumption) and pica (the consumption of culturally disapproved, non-nutritious substances such as ice, dirt, paper, or chalk) are now recognized as occurring in all age groups (Blinder, Barton, & Salama, 2008). The previous diagnosis, feeding disorder of infancy and childhood, has been renamed avoidant/restrictive food intake disorder, a broad category applying across the age span (Bryant-Waugh, Markham, Kreipe, & Walsh, 2010).

Direct learning techniques have been used to successfully treat rumination and pica disorders. Rewarding normal eating and punishing the initiation of rumination by placing a sour or bitter tasting substance on the tongue has been found effective in suppressing rumination (Wagaman, Williams, & Camilleri, 1998). Another procedure found to be effective is to teach individuals to breathe from their diaphragm while eating (Chitkara, van Tilburg, Whitehead, & Talley, 2006). Pica disorders have been treated using classical conditioning by pairing the inappropriate substance with a sour or bitter taste. Differential reinforcement procedures in which appropriate eating is followed by presentation of toys but inappropriate eating is not, in addition to time-out procedures, have also been effective treatments with normal individuals (Blinder, Barton, & Salama, 2008) and those with developmental disabilities (McAdam, Sherman, Sheldon, & Napolitano, 2004).

In past DSM editions, anorexia nervosa and bulimia were the only diagnosable psychiatric eating disorders. Women between the ages of 15 and 19 comprise 40% of those diagnosed with these two disorders (Hoek & van Hoeken, 2003). Binge eating disorder was added as a diagnosable disorder in DSM-5 (Wonderlich, Gordon, Mitchell, Crosby, & Engel, 2009).

Anorexia nervosa is defined in DSM-5 by: 1) low body weight relative to developmental norms; 2) either the expressed fear of weight gain or presence of overt behaviors designed to interfere with gaining weight (e.g., excessive under-eating or extremely intense exercise sessions); and 3) a distorted body image (American Psychiatric Association, 2013; Attia & Walsh, 2007).

Anorexia is an insidious and potentially life-threatening disorder. A comprehensive review of 32 studies evaluating the effects of cognitive-behavioral procedures alone, medication alone, and both combined was inconclusive. Although cognitive-behavioral procedures appeared effective for those who had attained a normal weight (i.e., in preventing relapse), it was not clear that they were effective in helping individuals gain weight in the first place (Bulik, Berkman, Brownley, Sedway, & Lohr, 2007). The treatment results for bulimia nervosa are clearer and more encouraging. Bulimia is a disorder characterized by consumption of large quantities of food in a short time (i.e., binging) followed by attempts to lose weight through extreme measures such as induced vomiting or consuming laxatives (i.e., purging). Those diagnosed with binge eating disorder do not engage in purging. In comparison to anorexia, the diagnosis of bulimia or binge eating disorder can be more difficult because the majority of individuals remain close to their recommended weight (Yager, 1991). A review of randomized, controlled studies evaluating cognitive-behavioral therapy, interpersonal therapy, and medical treatment concluded that although not effective with all individuals, cognitive-behavioral procedures are still the initial treatment of choice (Walsh, Wilson, Loeb, Devlin, Pike, Roose, Fleiss, & Waternaux, 1997). Follow-up studies attempted to determine the predictive factors for an effective treatment outcome. Those exhibiting more severe symptoms and greater impulsivity were more likely to drop out of treatment. In addition, individuals remaining in treatment but requiring more than six sessions to reduce purging were unlikely to profit from additional cognitive-behavioral treatment, and those with a prior history of substance abuse were also less likely to profit. Medications were sometimes helpful in treating those for whom cognitive-behavioral approaches were unsuccessful (Agras, Crow, Halmi, Mitchell, Wilson, & Kraemer, 2000; Agras, Walsh, Fairburn, Wilson, & Kraemer, 2000; Wilson, Loeb, Walsh, Labouvie, Petkova, Liu, & Waternaux, 1999).

Promising preliminary results have been obtained with the direct learning procedure of cue-exposure. Binging and purging were reduced in 22 adolescents diagnosed with bulimia who were resistant to other procedures. This was achieved by systematically exposing the adolescents to the specific environmental cues which triggered their binging and purging (Martinez-Mallén, Castro-Fornieles, Lázaro, Moreno, Morer, Font, Julien, Vila, & Toro, 2007). Cue-exposure is an extinction procedure: by repeatedly presenting the specific stimuli without permitting eating to occur, the strength of the cravings produced by these cues is reduced. A recent review describes innovative variations on cue-exposure, including virtual reality procedures (Koskina, Campbell, & Schmidt, 2013). Virtual reality techniques have been found effective in simulating idiosyncratic cues, eliciting strong cravings for food in individuals diagnosed with bulimia. Such realism improves the effectiveness of cue-exposure procedures conducted in clinical settings (Gutierrez-Moldanado, Ferrer-Garcia, & Riva, 2013; Ferrer-Garcia, Gutierrez-Moldanado, & Pla, 2013).

Sleep–Wake Disorders

We previously saw that a poor night’s sleep could lead to severe psychiatric symptoms. Research has also demonstrated that good sleep habits result in improved health and psychological functioning (Hyyppa & Kronholm, 1989). Worldwide, approximately 30 percent of adults report difficulty initiating or maintaining sleep, or poor sleep quality. Six percent meet the DSM-IV-TR criteria for insomnia disorder, with such symptoms occurring at least three times a week for at least one month (Roth, 2007).

A National Institutes of Health conference concluded that cognitive-behavioral procedures were at least as effective as medications for treating insomnia and had the advantage that improvements continued after the procedures were terminated. In addition, learning-based procedures do not pose the risk of undesirable side effects (NIH, 2005). These conclusions are consistent with the results of several experimental studies and literature reviews (Edinger & Means, 2005; Jacobs, Pace-Schott, Stickgold, & Otto, 2004; Morin, Colecchi, Stone, Sood, & Brink, 1999). A review of six randomized, controlled trials concluded that computerized self-help administration of cognitive-behavioral procedures was mildly effective and worthy of consideration as a minimally invasive initial approach to treatment (Cheng & Dizon, 2012).

Richard Bootzin (1972) developed an early learning-based approach to the treatment of insomnia based on the principles of stimulus control. Now known as the Bootzin Technique, it requires implementing the following procedures:

  • Go to bed only when you are sleepy
  • Use the bed only for sleeping
  • If you are unable to sleep, get up and do something else; return only when you are sleepy; if you still cannot sleep, get up again. The goal is to associate your bed with sleeping rather than with frustration. Repeat as often as necessary throughout the night.
  • Set the alarm and get up at the same time every morning, regardless of how much or how little sleep you’ve had.
  • Do not nap during the day (Bootzin, 1972).

There is a substantial amount of empirical support for the effectiveness of stimulus control procedures in addressing insomnia (Bootzin & Epstein, 2000, 2011; Morin & Azrin, 1987, 1988; Morin, Bootzin, Buysse, Edinger, Espie, & Lichstein, 2006; Morin, Hauri, Espie, Spielman, Buysse, & Bootzin, 1999; Riedel, Lichstein, Peterson, Means, Epperson, & Aguillarel, 1998; Turner & Ascher, 1979). Given the documented success of self-help approaches, the Bootzin Technique is certainly worth trying if you ever experience sleep problems.

Sexual Dysfunctions

In Chapter 4, we described Masters and Johnson’s (1966) proposed four-phase human sexual-response cycle consisting of an excitement phase, followed by a plateau, then orgasm, and then a calming phase in which the ability to become excited again is gradually reinstated (see Figure 4.4). A sexual dysfunction refers to a consistent problem occurring during one of the first three phases of normal sexual activity. When occurring during the first stage, problems are defined as sexual desire disorders. When occurring during the second stage, they are defined as sexual arousal disorders, and when during the third phase, as orgasm disorders (e.g., erectile dysfunction and premature ejaculation in men).

DSM-5 diagnoses of sexual disorders require durations of at least six months. Sexual desire and performance can be influenced by a multitude of factors, including: other psychiatric conditions (e.g., anxiety, depression); hormonal irregularities (estrogen for women, testosterone for men); aging; fatigue (one more reason for maintaining good sleep habits); medications (e.g., SSRI antidepressants); and relationship problems. Treatment for sexual dysfunctions can include individual or couples counseling, hormone replacement therapy, prescription of medications, or, in extreme cases, implantation of surgical devices.

Gender Dysphoria

It is rare, but sometimes an individual believes their actual gender is different from what they appear to be. This can result in aversion to one’s own body, anxiety, and extreme unhappiness (i.e., dysphoria). A subtle change in DSM-5 was the replacement of the term gender identity disorder with gender dysphoria. The initial term implied that a problem existed whenever one felt one was a different gender than the sex assigned at birth. The new term indicates that this is a problem only when it causes extreme unhappiness and interferes with daily functioning. The evidence suggests that once one establishes a sexual identity as male or female, whether or not it is consistent with one’s hormonally determined sex, it cannot be altered through counseling (Seligman, 1993, pp. 149-150). All that can be done to reduce psychological distress is to provide sexual reassignment surgery and hormone replacement therapy in accord with the individual’s self-defined sex (Murad, Elamin, Garcia, Mullan, Murad, Erwin, & Montori, 2010). DSM-5 indicates that subsequent to successful reduction in dysphoria, there may still be a need for treatment to facilitate transition to a new lifestyle.

Disruptive, Impulse-Control, and Conduct Disorders

The chapter on disruptive, impulse-control, and conduct disorders is new to DSM-5. It integrates disorders involving emotional problems and poor self-control that appeared in separate chapters in prior DSM editions, including: oppositional defiant disorder, intermittent explosive disorder, conduct disorder, kleptomania, and pyromania. Diagnosis of oppositional defiant disorder depends upon the frequency and intensity of behaviors frequently characteristic of early childhood and adolescence, including: actively refusing to comply with requests or rules; intentionally annoying others; arguing; blaming others for one’s mistakes; and being spiteful or seeking revenge. Learning-based procedures have been found to be the most effective treatment for oppositional defiant disorder (Eyberg, Nelson, & Boggs, 2008).

The DSM-5 lists the following criteria for intermittent explosive disorder in children at least six years of age:

  • Recurrent outbursts that demonstrate an inability to control impulses, including either of the following:
    • Verbal aggression (tantrums, verbal arguments or fights) or physical aggression that occurs twice in a weeklong period for at least three months and does not lead to destruction of property or physical injury, or
    • Three outbursts that involve injury or destruction within a year-long period
  • Aggressive behavior is grossly disproportionate to the magnitude of the psychosocial stressors
  • The outbursts are not premeditated and serve no tangible purpose
  • The outbursts cause distress or impairment of functioning, or lead to financial or legal consequences (American Psychiatric Association, 2013).

There is evidence for the effectiveness of SSRIs in alleviating some of the symptoms of intermittent explosive disorder (Coccaro, Lee, & Kavoussi, 2009). Overall, however, experimental outcome studies indicate that cognitive-behavioral treatment including relaxation training, cue-exposure to situational triggers, and modifying problematic thought patterns is generally more effective than medication (McCloskey, Noblett, Deffenbacher, Gollan, & Coccaro, 2008).

Conduct disorders are more serious and problematic than the other impulse-control disorders. They are defined by “a repetitive and persistent pattern of behavior in which the basic rights of others or major age-appropriate societal norms or rules are violated” (American Psychiatric Association 2013). Diagnostic criteria require that three or more of the following occur within the span of a year:

Aggression to people and animals

  1. often bullies, threatens, or intimidates others
  2. often initiates physical fights
  3. has used a weapon that can cause serious physical harm to others (e.g., a bat, brick, broken bottle, knife, gun)
  4. has been physically cruel to people
  5. has been physically cruel to animals
  6. has stolen while confronting a victim (e.g., mugging, purse snatching, extortion, armed robbery)
  7. has forced someone into sexual activity

Destruction of property

8. has deliberately engaged in fire setting with the intention of causing serious damage

9. has deliberately destroyed others’ property (other than by fire setting)

Deceitfulness or theft

10. has broken into someone else’s house, building, or car

11. often lies to obtain goods or favors or to avoid obligations (i.e., “cons” others)

12. has stolen items of nontrivial value without confronting a victim (e.g., shoplifting, but without breaking and entering; forgery)

Serious violations of rules

13. often stays out at night despite parental prohibitions, beginning before age 13 years

14. has run away from home overnight at least twice while living in parental or parental surrogate home (or once without returning for a lengthy period)

15. is often truant from school, beginning before age 13 years (American Psychiatric Association 2013).

The diagnosis for conduct disorder distinguishes between childhood-onset and adolescent-onset types. The former requires that at least one of the criteria occur prior to the age of ten. The distinction is also made between severity levels. The conduct disorder is considered mild if there are few problems beyond those required to meet the criteria and only minor harm results. The disorder is considered moderate if the number of problems and harm done is between the levels required for mild and severe. Severe conduct disorder consists of many problems beyond those required to meet the criteria resulting in extreme harm (American Psychiatric Association, 2013).

Individuals diagnosed with conduct disorder can be extremely destructive and dangerous. In a review of research, rates of conduct disorder ranged from 23% to 87% for incarcerated youth or those in detention facilities (Teplin, Abram, McClelland, Mericle, Dulcan, & Washburn, 2006). It is important to identify the predisposing factors and initiate treatment for conduct disorder as soon as possible. Deficits in intellectual functioning, verbal reasoning, and organizational ability are common (Lynam & Henry, 2001; Moffit & Lynam, 1994). Children and adolescents diagnosed with conduct disorder often live in dangerous neighborhoods under poor financial conditions with a single (possibly divorced) parent and deviant peers. Parental characteristics frequently include: criminal behavior; substance abuse; psychiatric disorders; unemployment; a negligent parenting style with low levels of warmth and affection, poor attachment, and inconsistent discipline (Granic & Patterson, 2006; Hinshaw & Lee, 2003).

Truancy or poor performance in school, frequent fights, or incidents of bullying can be early warning signs for conduct disorder. Medications have been unsuccessful as a treatment approach (Scott, 2008). The results of behavioral parent training, in which parents are taught to implement the basic principles of direct and indirect learning through instruction, observational learning, and guided practice, have been encouraging (Kazdin, 2010). Necessary skills include systematic and accurate observation of behavior, effective use of prompting, fading, and shaping techniques, and consistent administration of reinforcement, punishment, and extinction procedures (Breston & Eyberg, 1998; Feldman & Kazdin, 1995). Follow-up research has demonstrated treatment effects lasting as long as 14 years (Long, Forehand, Wierson, & Morgan, 1994). Despite its successes, behavioral parent training is severely underutilized, owing to a shortage of parent-trainers and to logistical problems during implementation. It is hoped that these needs can be addressed by taking advantage of different technologies: videotapes of expert practitioners can be provided to assist in training; videotapes of interactions between parents and children can be used to provide feedback on usage of behavioral techniques; and cell phones can be used to maintain communication between the professional staff and parents between meetings (Jones, Forehand, Cuellar, Parent, Honeycutt, Khavou, Gonzalez, & Anton, 2014). In the following chapter, we will describe a comprehensive, “multisystemic” approach to treating conduct disorder (Caron, Catron, Gallop, Han, Harris, Ngo, & Weiss, 2013; Henggeler, Melton, & Smith, 1992) as well as early intervention procedures designed to prevent the disorder from developing in the first place (Hektner, August, Bloomquist, Lee, & Klimes-Dougan, 2014).

Substance-Related and Addictive Disorders

The following drugs are considered addictive in DSM-5: alcohol, caffeine, cannabis (marijuana), hallucinogens (e.g., LSD), inhalants, opioids (pain killers), sedatives (tranquilizers), hypnotics (sleep inducers), stimulants (e.g., methamphetamine), cocaine, and tobacco (American Psychiatric Association, 2013). Our evolving understanding of the reward mechanisms involved in addictive disorders is an excellent example of the synergy between psychology and psychiatry. For example, the fact that gambling appears to activate the same brain reward mechanisms as drugs resulted in its inclusion among the addictive disorders in DSM-5.

Olds and Milner (1954) discovered that electrical stimulation of certain areas of a rat’s brain served as a powerful reinforcer for a rat’s bar-pressing behavior. Later it was discovered that manipulating the pulse of electrical stimulation produced behavioral effects similar to those resulting from different drug dosages; higher pulse rates acted like higher dosages. The effects were so powerful that rats preferred electrical stimulation to food and would continue to press the bar despite being starved (Wise, 1996)! In this respect, electrical brain stimulation acts in a manner similar to other addictive substances; the individual craves the substance despite self-destructive consequences. The same parts of the brain mediate the reinforcing effects of electrical stimulation and different drugs through the neurotransmitter dopamine (Wise, 1989, 1996).

Similar to autism and schizophrenia, the DSM-5 collapses across previous distinctions between types of substance abuse and addictive disorders and provides criteria for indicating severity. The diagnosis of substance use disorder encompasses the previous diagnoses of substance abuse and substance dependence. The severity of the disorder is based on the number of symptoms identified from the following list (2-3 = mild, 4-5 = moderate, six or more = severe):

  1. Taking the substance in larger amounts or for longer than you meant to
  2. Wanting to cut down or stop using the substance but not managing to
  3. Spending a lot of time getting, using, or recovering from use of the substance
  4. Cravings and urges to use the substance
  5. Not managing to do what you should at work, home, or school because of substance use
  6. Continuing to use, even when it causes problems in relationships
  7. Giving up important social, occupational, or recreational activities because of substance use
  8. Using substances again and again, even when it puts you in danger
  9. Continuing to use, even when you know you have a physical or psychological problem that could have been caused or made worse by the substance
  10. Needing more of the substance to get the effect you want (tolerance)
  11. Development of withdrawal symptoms, which can be relieved by taking more of the substance (American Psychiatric Association, 2013).
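The symptom-count thresholds above (2-3 = mild, 4-5 = moderate, six or more = severe) amount to a simple lookup rule. As an illustration only, and certainly not as clinical software, the mapping might be sketched in Python (the function name is hypothetical):

```python
def dsm5_substance_use_severity(symptom_count: int) -> str:
    """Map the number of DSM-5 substance use disorder symptoms (0-11)
    to the severity label described in the text. Illustrative only."""
    if not 0 <= symptom_count <= 11:
        raise ValueError("symptom count must be between 0 and 11")
    if symptom_count >= 6:
        return "severe"
    if symptom_count >= 4:
        return "moderate"
    if symptom_count >= 2:
        return "mild"
    # Fewer than two symptoms does not meet the diagnostic threshold.
    return "no diagnosis"
```

For instance, a person endorsing five of the eleven symptoms would fall in the "moderate" range; actual diagnosis, of course, requires clinical judgment, not a counter.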

The Society of Clinical Psychology website for evidence-based practices lists behavioral marital (couples) therapy as having strong research support for alcohol use disorders. Cognitive therapy and contingency management procedures, in which individuals receive “prizes” for clean laboratory samples, have been effective in treating mixed substance use disorders. A separate website of evidence-based practices for substance use disorders is maintained by the University of Washington Alcohol and Drug Abuse Institute (http://adai.uw.edu/ebp/). It lists behavioral self-control training and harm reduction approaches as being effective with adults (including college students) experiencing drinking problems. The Brief Alcohol Screening and Intervention for College Students (BASICS) harm reduction approach will be described in the following chapter (Denerin & Spear, 2012; Dimeff, Baer, Kivlahan, & Marlatt, 1998; Marlatt, 1996; Marlatt, Baer, & Larimer, 1995).

Neurocognitive Disorders

The diagnosis of neurocognitive disorders is based on clinical and behavioral observations made during adulthood. Due to the similarity in names, it is inevitable that neurodevelopmental and neurocognitive disorders will be confused. Although both are suspected to result from impairments in the brain or central nervous system, their effects are at opposite ends of the lifespan and are in opposite directions. Neurodevelopmental disorders, occurring early in life, interfere with normal, age-appropriate cognitive and social development. Neurocognitive disorders are diagnosed later in life, when there is deterioration in healthy cognitive functioning that impacts customary daily activities. When deterioration is extreme, as in Alzheimer’s disease, the individual may be unable to maintain an independent lifestyle.

Unlike prior DSM editions, DSM-5 includes diagnoses of mild as well as severe versions of the different neurocognitive disorders, based on the underlying medical condition (when known). The listed medical conditions include: Alzheimer’s disease; frontotemporal disorder; disorder with Lewy bodies; vascular disorder; traumatic brain injury; substance- or medication-induced disorders; HIV infection; prion disease; Parkinson’s disease; and Huntington’s disease. The neural and brain damage resulting from the major neurocognitive disorders results in discouraging prognoses and limited to non-existent treatment options. For example, currently existing medications can only slow down, not halt, the worsening symptoms of Alzheimer’s disease (e.g., severe memory loss). It is not possible to reverse the physical damage, and learning-based approaches have not proved effective in improving cognitive functioning.

Paraphilic Disorders

A paraphilia is the experience of intense sexual arousal under non-normative conditions. The DSM-5 diagnosis of paraphilic disorder represents a change from how paraphilia was treated in the past. In prior editions, diagnosis was based on the occurrence of non-normative feelings and actions. The DSM-5 criteria require that, in addition, the feelings and behavior must cause distress or harm to oneself or others. The eight listed disorders include: exhibitionistic disorder (i.e., exposing oneself to strangers); fetishistic disorder (i.e., sexual arousal to unusual objects such as shoes); frotteuristic disorder (i.e., rubbing oneself against another individual without their consent); pedophilic disorder (i.e., sexual attraction to children); sexual masochism disorder (i.e., sexual behavior resulting in bodily harm to oneself); sexual sadism disorder (i.e., sexual behavior resulting in bodily harm to another non-consenting individual); transvestic disorder (i.e., sexual arousal resulting from dressing in the clothes of the opposite sex); and voyeuristic disorder (i.e., spying on individuals engaged in private activities).

The World Federation of Societies of Biological Psychiatry published guidelines for the biological treatment of paraphilia (Thibaut, De La Barra, Gordon, Cosyns, & Bradford, 2010). The goals of treatment included control of paraphilic fantasies, urges, behaviors, and distress. Cognitive-behavioral therapy was recommended along with six stages of pharmacologic treatment based upon the intensity of the individual’s fantasies, the level of success attained with a less powerful drug, and the risk for potential harm. The combination of learning-based procedures and drugs has been found more effective than either alone (Hall & Hall, 2007). In extreme instances, it is recommended that drugs or surgery that totally suppress sexual urges be considered (Thibaut, De La Barra, Gordon, Cosyns, & Bradford, 2010).

Personality Disorders

We have reached the end of the DSM-5 list of psychiatric disorders. To some extent, we can consider the list as progressing from disorders with a strong nature (i.e., underlying genetic) component, such as Down and Fragile X syndromes, to personality disorders that, as described in Chapter 9, are based on nature/nurture interactions. The DSM-5 describes ten specific diagnosable personality disorders divided into three clusters as follows:

Cluster A (odd disorders)

  • Paranoid personality disorder: characterized by a pattern of irrational suspicion and mistrust of others, interpreting motivations as malevolent
  • Schizoid personality disorder: lack of interest and detachment from social relationships, and restricted emotional expression
  • Schizotypal personality disorder: a pattern of extreme discomfort interacting socially, distorted cognitions and perceptions

Cluster B (dramatic, emotional or erratic disorders)

  • Antisocial personality disorder: a pervasive pattern of disregard for and violation of the rights of others, lack of empathy
  • Borderline personality disorder: pervasive pattern of instability in relationships, self-image, identity, behavior and affects often leading to self-harm and impulsivity
  • Histrionic personality disorder: pervasive pattern of attention-seeking behavior and excessive emotions
  • Narcissistic personality disorder: a pervasive pattern of grandiosity, need for admiration, and a lack of empathy

Cluster C (anxious or fearful disorders)

  • Avoidant personality disorder: pervasive feelings of social inhibition and inadequacy, extreme sensitivity to negative evaluation
  • Dependent personality disorder: pervasive psychological need to be cared for by other people.
  • Obsessive-compulsive personality disorder (not the same as obsessive-compulsive disorder): characterized by rigid conformity to rules, perfectionism and control (American Psychiatric Association, 2013).

Each distinct personality disorder consists of a characteristic style of rigid, maladaptive thinking and behaving. As such, personality disorders are less specific than other DSM disorders in their behavioral symptomology. It is relatively straightforward and non-controversial to describe the behaviors from which one infers hallucinations, delusions, mania, depression, anxiety, and so on, including even conduct disorder and paraphilic disorder. It is more subjective and controversial to describe an individual as paranoid, antisocial, histrionic, narcissistic, avoidant, or dependent.

There is an old joke about a medical student who “died from a misprint.” We probably all can admit to sometimes behaving in paranoid, antisocial, histrionic, or narcissistic ways. Does that mean we have a “mental illness”? This highlights the point Szasz made with his phrase “the myth of mental illness.” Saying someone “has” paranoid personality disorder provides none of the valuable information of a medical diagnosis. It does not tell us about the etiology of the behavior, its prognosis, or an effective treatment strategy.

Figure 11.6 summarizes research evaluating the evidence for brain dysfunction and the response to biological and psychosocial (i.e., learning-based) treatments for individuals diagnosed with personality disorders (Tasman, Kay, Lieberman, First, & Maj, 2008). There is very little evidence for biological pathology underlying any personality disorder unrelated to schizophrenia. Only drugs related to other disorders (e.g., antipsychotics, antidepressants, and mood stabilizers) are recommended for treatment. Personality disorders have also proven resistant to learning-based approaches, including the usually effective cognitive-behavioral therapy. This should not be surprising, given the pervasiveness of these disorders, which by definition affect all aspects of a person’s functioning.

Cluster A (odd disorders)

  • Evidence for brain dysfunction: evidence for a relationship of schizotypal personality to schizophrenia; otherwise none known
  • Response to biological treatments: schizotypal patients may improve on antipsychotic medication; otherwise not indicated
  • Response to psychosocial treatments: poor; supportive psychotherapy may help

Cluster B (dramatic, emotional or erratic disorders)

  • Evidence for brain dysfunction: suggestive evidence for antisocial and borderline personalities; otherwise none known
  • Response to biological treatments: antidepressants, antipsychotics, or mood stabilizers may help for borderline personality; otherwise not indicated
  • Response to psychosocial treatments: poor in antisocial personality; variable in borderline, narcissistic, and histrionic personalities

Cluster C (anxious or fearful disorders)

  • Evidence for brain dysfunction: none known
  • Response to biological treatments: no direct response; medications may help with comorbid anxiety and depression
  • Response to psychosocial treatments: most common treatment for these disorders; response variable

Figure 11.6 Response of patients with personality disorders to biological and psychosocial treatments (adapted from Tasman, Kay, Lieberman, First, & Maj, 2008).

Aaron Beck, a psychiatrist and one of the first practitioners of cognitive-behavioral therapy, modified and lengthened the procedures to improve their applicability to personality disorders. He adopted some features of Freud’s traditional psychodynamic therapy, addressing thinking patterns developed during childhood as well as interpersonal styles resulting from relationships with one’s parents. Treatment was conducted over extended time periods and could last more than a year. In contrast to psychodynamic therapy, the individual was expected to play a more active role in defining the nature of the problem, formulating treatment goals, and assessing treatment effectiveness. Homework was assigned to practice thinking and behavioral skills, developed during meetings with the therapist, in the home and work environments (Beck & Freeman, 1990). There is some evidence for the effectiveness of long-term psychodynamic and cognitive-behavioral approaches to treating personality disorders (Leichsenring & Leibing, 2003).

Early cognitive-behavioral approaches focused on a narrow range of thinking patterns (e.g., specific thoughts related to one’s depression or anxiety). Jeffrey Young employed cognitive procedures to address schemas (see Chapter 7), the more organized and expansive thought patterns characteristic of personality disorders (McGinn & Young, 1994). Relationships between maladaptive schemas, the Big Five personality factors, and perceived parenting styles in adolescents have been identified (Muris, 2006; Young, Klosko, & Weishaar, 2003). A recent multi-center outcome study found schema-focused therapy more effective than clarification-oriented cognitive therapy or non-cognitive therapy for the treatment of personality disorders (Bamelis, Evers, Spinhoven, & Arntz, 2014).

Nature/Nurture and Maladaptive Behavior

Acceptance and application of the scientific method was responsible for the technological transformations we have witnessed in our physical environment over the past four centuries. Our changing understanding of the complementary roles psychiatry and psychology can play in the treatment of behavioral disorders stems from the acceptance of scientific, evidence-based practice in both disciplines. Research findings point to the limitations of biologically based treatments (e.g., drugs) and to the need for “talking therapies” (i.e., cognitive-behavioral therapies for specific and stylistic thinking patterns). Talking can achieve only so much, however. It has been found that the inclusion and completion of homework assignments is essential to the success of cognitive-behavioral procedures (Burns & Spangler, 2000; Garland & Scott, 2002; Ilardi & Craighead, 1994; Kazantzis, Deane, & Ronan, 2000). In the following chapter we will discuss the role of self-efficacy, the belief that one can accomplish a task, in the success of behavioral interventions (Bandura, 1977b).

The hope is that research will uncover the specific neurological underpinnings of cognitive, emotional, and behavioral symptoms. Psychiatric researchers are recommending transition from DSM, symptom-based diagnosis, to classifying disorders based on findings in neuroscience and genetics (Insel, Cuthbert, Garvey, Heinssen, Pine, Quinn, Sanislow, & Wang, 2010). The National Institute of Mental Health has launched the Research Domain Criteria (RDoC) project with the goal of transitioning to a diagnostic system incorporating genetics, imaging, and cognitive science. Psychiatry would then more resemble other medical subfields which define pathological conditions on the basis of their etiology as opposed to symptomology.

There have already been surprising and important findings changing our understanding of DSM disorders. Five disorders have unexpectedly been found to share common genes. High genetic correlations exist between schizophrenia and bipolar disorder. Moderate correlations exist between schizophrenia and major depressive disorder, bipolar disorder and major depressive disorder, and ADHD and major depressive disorder (Cross-Disorder Group of the Psychiatric Genomics Consortium, 2013). There is the possibility that shared genes result in similar pathological mechanisms, having implications with respect to treatment. For example, one of the shared genes known to be involved in the regulation of calcium affects emotion, thinking, attention, and memory (Cross-Disorder Group of the Psychiatric Genomics Consortium, 2013).

Behavioral neuroscience offers another promising approach to understanding and treating DSM disorders. It has been found that those with diagnosed disorders perform poorly on the Iowa Gambling Task (IGT) in comparison to non-diagnosed individuals (Mukherjee & Kable, 2014). The IGT is a standardized task in which individuals must select from four different decks of cards. Payoffs are higher when selections are only made from two of the decks. Over extended trials, non-diagnosed individuals eventually adopt this strategy whereas those with DSM disorders do not (Bechara, Damasio, Damasio, & Anderson, 1994). There do not appear to be differences between types of disorders, suggesting that difficulties in making value-based decisions are fundamental to psychiatric disorders. The parts of the brain (frontal cortex and amygdala, among others) involved in performing the IGT appear to be the same as those impaired in diagnosed populations. Performing well on the IGT requires long-term processing of the results of decision-making. It is hoped that future research may be able to determine if specific components of the decision-making process are problematic in different disorders, potentially leading to more prescriptive psychological and psychiatric treatments (Mukherjee & Kable, 2014).
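Because the logic of the IGT rests on simple long-run arithmetic, the deck structure can be made concrete with a small numeric sketch. The payoff values below are simplified approximations of the commonly cited version of the task, not the exact amounts from any particular study, and the deck names are conventional labels.

```python
# Illustrative sketch of the Iowa Gambling Task payoff structure.
# Values are simplified approximations: "bad" decks pay more per card
# but carry larger losses; "good" decks pay less but lose less.

# (gain per card, total loss per 10 cards) for each deck
decks = {
    "A": (100, 1250),  # "bad" deck
    "B": (100, 1250),  # "bad" deck
    "C": (50, 250),    # "good" deck
    "D": (50, 250),    # "good" deck
}

def expected_net_per_10_cards(deck):
    """Net money gained (or lost) over 10 consecutive picks from one deck."""
    gain_per_card, loss_per_10 = decks[deck]
    return gain_per_card * 10 - loss_per_10

for name in sorted(decks):
    print(name, expected_net_per_10_cards(name))
# A -250, B -250, C 250, D 250
```

The sketch shows why consistently sampling decks C and D is the winning strategy over extended trials: the larger per-card gains of A and B are more than offset by their losses. Non-diagnosed individuals gradually learn this; those with DSM disorders tend not to.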

Chapter 10: Social Influences on the Development of Human Potential

Learning Objectives

  • Describe the procedures Asch and Milgram used to study conformity and obedience
  • Describe Zimbardo’s study demonstrating the impact of social roles on behavior
  • Describe the bystander apathy effect
  • Describe the procedures used by Sherif to reduce conflict and promote cooperation between groups
  • Describe an experiment evaluating Festinger’s cognitive dissonance theory

Compliance, Conformity and Obedience

Humans are not the only social animals living among members of their own species, nor the only animals dependent upon parents to survive. Humans keep pets; therefore, humans are not the only species dependent upon humans to survive! In fact, some animals that are not dependent upon humans to survive still find them helpful.

This is the third chapter in the Nature/Nurture section. In Chapter 8, we saw how starting from the time of conception, nature and nurture interact, influencing your physical, cognitive, and moral development. In the previous chapter, we considered how nature and nurture interact in the development of human personality. I asked you to consider how you would describe your own personality as well as that of a potential partner in life. This raises the question, why is personality important?

From the moment you are born, the most important part of your world is other people. Think of the extent to which you relied on others to eat and survive. Consider the extent to which your answer to “What’s it all about?” includes a life partner, family (including potential children), friends, colleagues, and others. Social psychology studies the effects of the presence, or imagined presence, of other people on one’s thoughts, feelings, and actions.

One’s social world starts at birth. Immediately, reciprocal determinism feedback loops (Bandura, 1986) are established between the newborn and other people. The newborn’s temperament and behavior influence how the environment (including caretakers) responds, which then impacts the development of the infant’s skills and knowledge, which then influences how others react, and so on (see video).

Indirect effects of the newborn’s and caregivers’ personalities occur soon after birth, during feeding and whenever the infant communicates being uncomfortable (e.g., by crying). Whatever sex and temperamental factors the newborn inherits will influence interactions with the mother and caregivers. The personalities of the mother and caregivers will, in turn, influence how they react to the newborn.

Previously cited research (Rovee & Rovee, 1969) demonstrated that young infants are sensitive to the consequences of their actions (i.e., they learned to manipulate a mobile by moving their leg). The most important consequences in the newborn’s life are administered by other people. It is not inaccurate to suggest that very early in life, an infant must learn to influence the behavior of other people. These interactions represent the infant’s first experiences in social influence. Examples of social influence occurring later in life include compliance, peer pressure to conform and obedience to authority.

Compliance

From birth, infants are learning the ABCs. No, not the alphabet; the control learning ABCs. Infants are learning under what environmental conditions (i.e., Antecedents) specific Behaviors are followed by events that feel good or bad (i.e., Consequences). If the combination of the rooting and sucking reflexes does not result in the ideal nursing position, the infant will soon learn the necessary movements to maximize the flow of milk. One may debate whether it satisfies Hockett’s (1960) definition of speech, but early in life infants emit different sounds that are influenced by their consequences (e.g., different cries for food, discomfort, or attention).

Early in life, parents and caretakers are not concerned with compliance by their newborns. They assume the responsibility of serving the needs and whims of their little bundle of joy. This one-way expectation of compliance eventually ends, with the parent or caretaker making the first requests or demands. Within developed nations, this often occurs when toilet training is initiated. This may be the first time there are unpleasant consequences for a child’s behavior. It may also be an early opportunity to establish the meaning of “no.” If successful, this becomes a two-way, double-edged sword. The parent may gain the ability to use a word in place of delivering an unpleasant consequence to the young child; the downside is the inevitable “terrible twos!” In fact, it is the beginning of a lifetime of interactions in which an individual influences and responds to the requests of parents, siblings, friends, colleagues, and acquaintances.

If you are reading this book, you probably started attending school by the time you were five years of age. Prior to then, most of your social interactions were with family and neighbors, including other children. Once you started school, much of your waking time was spent in, or preparing to go to, school. School was something like home: it was indoors, and adults asked for compliance and administered reinforcers and punishers. School was different from home in an important respect: you were required to spend a lot of time with people your own age who were not your family or friends. If you had not learned it previously, you needed to acquire the ability to “play well with others.” The others could be very different from those at home and in your immediate neighborhood. In addition to requiring that you acquire interpersonal skills with those your own age, school required that you continue to advance in your abilities to read, write, and perform quantitative operations. Freud’s observation that love and work are the most fundamental and important components of life implies the objectives of a school system: it should provide you with the knowledge, skills, and motivation to succeed in your social relationships and eventual career.

If you think back upon the role school played in your life, I suspect you will agree that it was essential to your current and future aspirations. School required that you conform to and obey consensually agreed-upon rules of conduct. Sometimes rules of conduct were established by teachers and other adults. Sometimes, different rules of conduct were consensually agreed upon by your classmates. The pressure to conform has been systematically studied by social psychologists.

Conformity

Peer pressure is especially pronounced in adolescence and can involve risky, sometimes dangerous, behaviors (Ferguson & Meehan, 2011). Peer pressure can create a reciprocal determinism feedback loop in which an individual acts in a risky way; if others display the same behavior, it becomes a social norm within the group. An individual can be placed in conflict: wishing to keep (or make) friends, while feeling threatened by the prospect of violating a social norm, and feeling they should resist pressure to violate the teachings of their parents. The following video describes effective ways to resist peer pressure.

There are different types of conflicts: approach-approach (i.e., having to choose between two “good” things); avoidance-avoidance (i.e., a dilemma requiring choosing “between a rock and a hard place”); approach-avoidance (i.e., having to make a cost-benefit analysis weighing the positive and negative aspects of a single option); and double approach-avoidance (i.e., having to choose between two things, each having positive and negative features). A teenager facing peer pressure to smoke or drink does not want to lose friends, yet may be aware of the health consequences of smoking and the dangers associated with excessive drinking. This is a complicated double approach-avoidance conflict requiring weighing the potential short- and long-term consequences of complying with the friends’ request or resisting their pressure.

As a college student, you are not far removed from your middle-school and high-school experiences. You can remember the cliques, the in-groups and out-groups that formed and had so much influence among your friends and classmates. You can remember how teenagers can be insensitive to the feelings of others and sometimes cruel. It is the rare individual who can join social groups without experiencing conflict, or who can go it alone. Peers generally dress alike, groom themselves alike, talk alike, and share the same values. Such conformity is usually harmless. However, as described, risky acts such as smoking, excessive drinking, reckless driving, and risky sexual behaviors can also occur as the result of peer pressure (Spear & Kulbok, 2001). Fortunately, so can studying, helping others, and performing community service. One has to choose one’s friends carefully. There is a well-known saying: show me your friends and I will show you your future.

Asch’s conformity research

Susceptibility to peer pressure does not end after adolescence. Classic social psychological research conducted with college students has examined the conditions under which conformity is likely to occur with adults.

Solomon Asch (1951, 1952, 1956) told male college students that they were being administered a vision test. Students were asked to judge which of three lines was the same length as a comparison stimulus on eighteen trials (see Figure 10.1). There were other students in the room, all of whom were actually part of the experimental manipulation. These confederates each gave their answer, and the actual subject went last. On six of the trials, the confederates unanimously chose the (rather obviously) correct stimulus. On the other twelve trials, they unanimously chose the same incorrect stimulus. One of the variables manipulated was the number of confederates. As seen in the graph, subjects practically never conformed (i.e., chose an incorrect stimulus) if there was only one other student. The percentage of conforming responses increased as a function of the number of confederates, leveling off at about one-third of the trials with three confederates; additional confederates hardly increased the extent of conformity. If just one confederate broke the unanimity by giving the correct answer, the extent of conformity dropped dramatically. A non-conforming confederate who went first was more effective than one who went last (Morris & Miller, 1975). Asch found that if a confederate giving the correct answer left in the middle of the session, the subject’s level of conformity increased substantially. This result may remind you of the multiple schedule example with the aunts and uncle. In this instance also, the college student’s behavior changed as a function of who was present.

Figure 10.1 Stimuli used in Asch’s conformity study.

The role of deception in psychology research

Asch’s experiments involved deception. Subjects were misled by being told that they were involved in a vision test rather than a task assessing conformity. Deception is essential if certain psychological issues are to be studied. If Asch’s subjects had been told that the purpose of the study was to see if they would conform to what others did, this certainly would have changed the results: subjects would have been alerted to the fact that others were trying to influence them. In this instance, the deception was relatively benign.

Asch’s subjects did not display serious anguish or disturbing symptoms afterward. The American Psychological Association has strict guidelines for conducting research with human subjects. After the session is completed, there must be a debriefing session in which the nature of, and necessity for, deception is explained. Often, subjects are interviewed to try to determine if there are concerns. They may also be asked why they responded the way they did as a way of gaining clarity with respect to the data. During their debriefing, some of Asch’s non-conforming subjects expressed more confidence in their judgments than others; despite feeling uncomfortable, however, the less confident subjects still stuck with their (correct) responses. Some of the conforming students actually believed the perceptions of the confederates were accurate; others knew they were wrong but did not want to offend the other students. We will now review other examples of the necessary use of deception to experimentally investigate important social psychological phenomena.

Obedience

The disappearance of a sense of responsibility is the most far-reaching consequence of submission to authority.

Stanley Milgram

Milgram’s experiments investigating obedience to authority are among the most famous and controversial ever conducted in social psychology. Some of the infamy and controversy stems from the nature of the deception involved in conducting the studies: some subjects were severely disturbed during the procedures themselves, some during debriefing, and some afterward. Some of the controversy also stems from the disturbing findings and their implications regarding “human nature.”

Milgram’s Obedience Research

Stanley Milgram was a Jewish psychologist interested in questions of concern to many after the events of World War II and the Holocaust. How could human beings inflict such pain and suffering on others? Under what conditions do people passively display obedience to authority figures commanding that they behave cruelly? On the first page of his excellent book, Obedience to Authority, Milgram states:

“It has been reliably established that from 1933 to 1945 millions of innocent people were systematically slaughtered on command. Gas chambers were built, death camps were guarded, daily quotas of corpses were produced with the same efficiency as the manufacture of appliances. These inhumane policies may have originated in the mind of a single person, but they could only have been carried out on a massive scale if a very large number of people obeyed orders” (Milgram, 1974, p. 1).

These seem like monumental existential issues that could never be investigated scientifically, let alone experimentally. How can the demands of internal and external validity be satisfied? Sciences attempt to establish cause-and-effect relationships between independent and dependent variables that apply under naturalistic (i.e., “real world”) conditions. This requires either creating laboratory conditions which capture the essence of “the real world” or manipulating independent variables in a controlled fashion in the field. Asch successfully implemented the first strategy by developing experimental laboratory procedures permitting the study of conformity with respect to perceptual judgments. Milgram became familiar with Asch’s work when serving as his research assistant while completing his doctoral studies. His doctoral thesis used a variation of Asch’s procedure to study conformity in different cultures.

How could laboratory conditions be created to study obedience resulting in the administration of pain to another person? Milgram built upon Asch’s work, developing an ingenious set of deceptive procedures leading individuals to believe that they were administering a painful stimulus to another person. The subject was assigned the role of “teacher” in a supposed verbal learning study evaluating the effectiveness of punishment. The teacher was instructed to deliver an electric shock whenever the “learner” made a mistake. The learner was actually an actor and never shocked. This deception enabled the experimental study of variables influencing obedience to an authority figure. Milgram indicated, “I was trying to think of a way to make Asch’s conformity experiment more humanly significant. I was dissatisfied that the test of conformity was about lines. I wondered whether groups could pressure a person into performing an act whose human import was more readily apparent, perhaps behaving aggressively toward another person, say by administering increasingly severe shocks to him” (Milgram, 1977).


Figure 10.2 Milgram’s Obedience Study.

Figure 10.2 portrays the placement of the participants in Milgram’s original study, conducted at Yale. The experimenter provided instructions to the actual subject and the confederate (an actor). They were told that one would randomly be designated the teacher and the other the learner. The assignment was rigged so that the subject was always designated the teacher (i.e., the person administering the shock). The subject received a mild 45-volt shock to establish the credibility of the shock generator and to appreciate what the learner would be experiencing. The experimenter (indicated by the E in the figure) and teacher (indicated by the T) were seated in the same room. The learner (indicated by the L) was in an adjoining room.

The dependent variable was the highest intensity of shock the subject was willing to administer. The shock generator included 30 switches ranging from 15 to 450 volts in 15-volt increments. Descriptive labels were spaced among the switches, ranging from “Slight” (15–60 volts) to “Danger: Severe” (375–420 volts) and “XXX” (435 and 450 volts). The learner responded correctly or incorrectly to the different test items according to a pre-arranged script. The teacher was instructed to move to the next switch each time the learner made an error, supposedly increasing the intensity of shock by 15 volts. When the intensity reached 150 volts, the learner convincingly started to scream and bang on the wall, requesting that the teacher stop. At a later point, the learner fell silent. If the teacher asked to stop, the experimenter replied with four graded prods, from “please continue” to “you must go on.” The experiment ended when the teacher refused to proceed after the fourth prod or had administered the 450-volt shock three consecutive times.
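The arithmetic of the shock-generator panel can be checked with a few lines of code. Only the three labels quoted in the text are included here; the actual panel carried several intermediate labels, so the fallback string below is a placeholder, not part of Milgram's apparatus.

```python
# Milgram's shock generator as described: 30 switches,
# from 15 to 450 volts in 15-volt increments.
switches = list(range(15, 451, 15))
print(len(switches))  # 30

def label(volts):
    """Descriptive label for a given switch voltage.

    Only the three labels quoted in the text are modeled; the real panel
    included additional intermediate labels (simplified here).
    """
    if volts <= 60:
        return "Slight"
    if 375 <= volts <= 420:
        return "Danger: Severe"
    if volts >= 435:
        return "XXX"
    return "(intermediate label)"

print(label(15), label(390), label(450))
```

A quick check confirms the description is internally consistent: (450 − 15) / 15 + 1 = 30 switches, with the final two (435 and 450 volts) labeled "XXX".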

Subjects were clearly disturbed by the task. Every one of them stopped the procedure at some point to question the experimenter. They displayed such signs of distress as sweating, stuttering, and nervous laughter. Milgram was concerned about the effects of his research on his subjects and surveyed them at a later date. Perhaps surprisingly, 84% indicated they were “glad” or “very glad” to have participated, 15% reported feeling neutral, and only 1% reported negative feelings (Milgram, 1974, p. 195).

Milgram followed up his original study, trying to identify variables influencing the propensity toward obedience (see Figure 10.3). Conducting the research at a workplace rather than a university reduced the percentage of teachers administering the highest intensity shock from 65% to 48%. If the learner was in the same room as the teacher, the level was reduced to 40%. Requiring that the teacher hold the learner’s hand on the shock plate reduced obedience by an additional 10%. If the experimenter gave orders by phone or someone else took over, this further reduced obedience. In one counter-intuitive experiment, Milgram examined whether a conformity manipulation similar to Asch’s research could be used to counteract obedience. Indeed, he found that only 10% of the participants administered the highest intensity shock if they observed two confederate teachers refuse to continue. When teachers were permitted to set their own shock levels, on average they stopped after the third switch (45 volts), with only 3% administering the most severe shock (Milgram, 1974, p. 70). This was the type of behavior predicted for the original study before it was conducted.


Figure 10.3 Milgram’s research findings.

The reactions to Milgram’s findings were widespread and intense, ranging from disbelief to outrage. The horrors of the Holocaust were often attributed to a small number of evil individuals having the ability to command obedience among the members of a passive, authoritarian culture. It was assumed that such widespread obedience to authority would never occur in the proudly individualistic United States. However, in Milgram’s words:

“This is perhaps the most fundamental lesson of our study: Ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process. Moreover, even when the destructive effects of their work become patently clear, and they are asked to carry out actions incompatible with fundamental standards of morality, relatively few people have the resources needed to resist authority” (Milgram, 1974, p. 6).

Toward the end of his book, Milgram concludes, “It is not so much the kind of person a man is as the kind of situation in which he finds himself that determines how he will act” (Milgram, 1974, p. 205). This may remind you of the person-situation debate described in the previous chapter. Heider (1958) differentiated between attributing another person’s behavior to a personality trait (i.e., an internal attribution) or to environmental circumstances (i.e., an external attribution). We are all subject to what social psychologists refer to as the fundamental attribution error: the tendency to explain the behavior of others in terms of their internal personality traits while attributing our own behavior to external factors. Milgram’s extensive research program identified several external variables influencing the likelihood of obedience. There appeared to be a dimension of psychological distance whereby proximity to the learner, or removal of the experimenter, reduced obedience. Reducing the prestige of the setting or of the experimenter also reduced obedience. The fact that 65% of the subjects in the role of the teacher administered the highest shock intensity makes it implausible to attribute such obedience to the evil character of a few individuals.

Milgram’s findings have been replicated across a variety of cultures, suggesting that obedience to authority figures is built into the human genome. He reflects upon this possibility, offering suggestions consistent with evolutionary psychology. In an observation that could apply to the dual-sided picture of Manhattan, Milgram states, “We look around at the civilizations men have built, and realize that only directed, concerted action could have raised the pyramids, formed the societies of Greece, and lifted man from a pitiable creature struggling for survival to technical mastery of the planet” (Milgram, 1974, p. 124). It is true that single individuals made enormous intellectual and artistic contributions to the transformation of Manhattan. Manhattan, however, could not be built by a single individual. It required the coordinated talents and efforts of an enormous number of individuals.

Milgram concluded his discussion of the evolutionary advantages resulting from a propensity toward obedience with the following thoughts regarding the roles of nature and nurture:

“Indeed, the idea of a simple instinct for obedience is not what is now proposed. Rather, we are born with a potential for obedience, which then interacts with the influence of society to produce the obedient man. In this sense, the capacity for obedience is like the capacity for language: certain highly specific mental structures must be present if the organism is to have potential for language, but exposure to a social milieu is needed to create a speaking man. In explaining the causes of obedience, we need to look both at the inborn structures and at the social influences impinging after birth. The proportion of influence exerted by each is a moot point. From the standpoint of evolutionary survival, all that matters is that we end up with organisms that can function in hierarchies” (Milgram, 1974, p. 125).

Social Roles and Bystander Apathy

Social Roles

Gradually it was disclosed to me that the line separating good and evil passes not between states nor between social classes nor between political parties, but right through every human heart, through all human hearts.

Aleksandr Solzhenitsyn

The line between good and evil is permeable and almost anyone can be induced to cross it when pressured by situational forces.

Philip Zimbardo

Zimbardo’s Prison Study

A second famous and controversial social psychology research project demonstrating the power of the situation over the power of the person was conducted by Philip Zimbardo. It is often referred to as “The Stanford Prison Experiment” (Haney, Banks, & Zimbardo, 1973). Like Milgram, Zimbardo started with an important existential question: “What happens when good people are put into an evil place? Do they triumph or does the situation dominate their past history and morality?” (http://www.prisonexp.org/). Like Milgram, Zimbardo addressed the combined issues of internal and external validity by attempting to bring the essential features of the natural environment into the laboratory. Like Milgram, he accomplished this through an ingenious deception strategy implemented with enormous attention to detail. Although this research took place over 45 years ago, it remains important and, unfortunately as you will see, prescient.

Newspaper advertisements offered male college students money to participate in an all-day, two-week study taking place before the beginning of the fall semester. The study was described as a “psychological study of prison life.” Approximately 100 students responded to the advertisement. Those with prior arrest records or medical or psychological problems were eliminated. Of the remaining students, 24 were selected. Eighteen would eventually be randomly divided into groups of nine guards and nine prisoners. The other six constituted replacements in the event anyone dropped out over the two-week period.

The study began for the nine original “prisoners” with a surprise arrest in their homes! The students went through the humiliating process of being searched and handcuffed before being read their Miranda rights. They were driven in a police car to the station where they were formally arrested, with mug shots and fingerprints being taken. The simulated prison was situated in the basement of the Stanford University Psychology building. Prisoners were strip searched, issued smocks and stocking caps, and assigned ID numbers prior to being placed in their cells by the guards. Each cell held three prisoners, and there was a separate cell to be used for solitary confinement. Those students assigned the role of guard were simply instructed to maintain law and order, refrain from violence, and not let any of the prisoners escape. They were told to refer to the prisoners by their ID numbers and not their names. The guards were issued military-style uniforms, darkened sunglasses, whistles, and nightsticks. These procedures and details were designed to foster a sense of powerlessness in the prisoners while empowering the guards.

I suspect you will agree that just as Milgram captured the essence of being placed in a situation where obedience to authority could occur, Zimbardo captured the essence of the experience of being arrested and going to jail with his procedures. That is, both researchers manipulated the independent variable in such a way as to permit determining cause and effect relationships under controlled laboratory conditions that are likely to apply outside the laboratory. There is an important difference, however, in how their independent variables were manipulated. Part of Milgram’s manipulation included the responses of the confederate learner. These were scripted and could be controlled. For example, the learner could report having a heart condition or not. The learner could bang on the wall and scream or remain silent throughout. An important part of the independent variable manipulation in Zimbardo’s study was the behavior of the guards and prisoners toward each other. In Zimbardo’s research, there were no confederates, making scripting and control impossible. He placed students in an unbalanced relationship with their assigned roles determining their behavior. The prison study was unusual in this way; it relied upon an independent variable manipulation involving reciprocal determinism. The behavior of the prisoner affected the behavior of the guard which affected the behavior of the prisoner, and so on. There are also important differences in the ways in which the two investigators measured their dependent variables. Milgram developed a sensitive and precise measure of obedience with the graded switches on the shock generator. Zimbardo’s dependent variables were not assessed in a systematic way. Zimbardo could not know how the guards and prisoners would react to their roles. He videotaped the entire experiment, informing the subjects that their behavior was being recorded.

Whereas the first day was relatively calm, on the second day several of the prisoners started to rebel. Guards used fire extinguishers to quell the rebellion. All the prisoners had been instructed that they were allowed to leave at any point. One of the prisoners displayed severe signs of distress on the second day, left, and was replaced. Four more would leave before circumstances resulted in early termination of the experiment after only six days. The guards were becoming increasingly brutal, and the experimenters feared for the safety and psychological well-being of the prisoners. Personality tests had been administered to all applicants as part of the screening process; results on these tests were not predictive of which guards became the most abusive or, in some instances, cruel. None of the guards left the experiment.

Zimbardo personally conducted debriefing sessions. He emphasized how the subjects were selected because of their initial physical and mental health. They should not feel their behavior was indicative of any psychological disturbance; it resulted from their assigned roles in the Stanford “prison.” Zimbardo took advantage of the debriefing session to discuss how they interpreted their roles as guards and prisoners, the choices they made, and how they might have done things differently. Although many of the participants reported being severely distressed during and immediately after the experiment, subsequent comprehensive interviews indicated no lasting disturbances. In their final follow-up interviews, the majority of the students indicated that the experiment proved to be a valuable learning experience (Zimbardo, 2007, p. 239).

Thirty-six years after the Stanford Prison Experiment, Zimbardo (2007) wrote a provocative book entitled The Lucifer Effect: Understanding How Good People Turn Evil. In the preface, he reaches the same conclusion as Milgram: “One of the dominant conclusions of the Stanford Prison Experiment is that the pervasive yet subtle power of a host of situational variables can dominate an individual’s will to resist.” Zimbardo compares the behavior of the guards in the prison experiment with the behavior of American soldiers in the Abu Ghraib prison more than thirty years later. Photographs of the two events are eerily and disturbingly similar. Zimbardo served as an expert witness on behalf of one of the perpetrators of the violence in the Iraqi prison. He argued that “The allegation that these immoral deeds were the sadistic work of a few rogue soldiers, so called bad apples, is challenged by examining the parallels that exist in the situational forces and psychological processes that operated in that prison with those in our Stanford prison” (Zimbardo, 2007, Preface). Rather, he concludes, “These reports, chaired by generals and former high-ranking government officials, made evident that the military and civilian chain of command had built a ‘bad barrel’ in which a bunch of good soldiers became transformed into ‘bad apples’” (Zimbardo, 2007, Preface).

Bystander Apathy

All that is necessary for the triumph of evil is that good men do nothing.

Edmund Burke

During the debriefing sessions, Zimbardo expressed his displeasure with his own behavior during the Stanford Prison Experiment. “I had tried to contain physical aggression, but I had not acted to modify or stop the other forms of humiliation when I should have. I was guilty of the sin of omission, the evil of inaction, of not providing adequate oversight and surveillance when it was required” (Zimbardo, 2007, p. 181). He considered himself guilty of bystander apathy, the failure to assist an individual in need.

Zimbardo was almost certainly aware of the then-recent social psychological research conducted by John Darley and Bibb Latané. They demonstrated that the likelihood of helping someone was related to the number of others present at the time (Darley & Latané, 1968, 1970; Latané & Darley, 1968) but not related to personality (Darley & Latané, 1970). In a laboratory experiment involving deception, subjects heard another student apparently undergoing an epileptic seizure. Subjects were told that they were one of two or one of six subjects involved in the research. That is, they were either the only one who could help or there were four others who could also provide assistance. When they thought they were the only one, 85% of the subjects offered help; only 31% did when they thought there were four others available (Darley & Latané, 1968). The inverse relationship between the number of people present and the likelihood of providing assistance was described as the diffusion of responsibility effect. A comprehensive review found that this relationship has been repeatedly replicated since the original findings (Hudson & Bruckman, 2004).

The likelihood of providing assistance to a person experiencing an emergency may be described as a flowchart (Darley & Latané, 1970). First, the individual has to attend to the event. When there are many others present, it might not even be noticed. For example, the same authors previously found that smoke coming from a vent was noticed within 5 seconds when a subject was alone but took 20 seconds when two or three other subjects were present (Latané & Darley, 1968). Second, even if the event is noticed, it might not be interpreted as an emergency. This might be an example of the type of conformity displayed in Asch’s studies. That is, if others do not act as though the situation is an emergency, this could influence one’s own interpretation of the circumstances. Third, if the situation is considered an emergency, the number of others present will influence one’s perceived responsibility (i.e., diffusion of responsibility). Fortunately, it has been found that subjects are likely to respond to serious emergencies even if others are present and not responding (Fischer, Greitemeyer, Pollozek, & Frey, 2006). Fourth, if one feels personally responsible, it is necessary to consider courses of action and act accordingly. One may act directly or indirectly by notifying the appropriate authorities. This flowchart may remind you of the five problem-solving stages described in Chapter 7: (1) general orientation; (2) problem definition and formulation; (3) generation of alternatives; (4) decision making; and (5) verification (Goldfried and Davison, 1976, p. 187).
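Because the stages above form a strict decision sequence, they can be sketched as a short function. This is purely an illustrative model of Darley and Latané’s flowchart, not code from any source; the parameter names and the yes/no simplification of each stage are invented for demonstration.

```python
def will_help(noticed, interpreted_as_emergency, feels_responsible, knows_what_to_do):
    """Illustrative sketch of the Darley & Latané (1970) decision sequence.

    Each stage must be passed before helping occurs; failing any one
    stage ends the sequence with no intervention.
    """
    # Stage 1: the event must be attended to at all.
    if not noticed:
        return False
    # Stage 2: it must be interpreted as an emergency
    # (others' inaction can block this stage, as in Asch-style conformity).
    if not interpreted_as_emergency:
        return False
    # Stage 3: responsibility must not be diffused among bystanders.
    if not feels_responsible:
        return False
    # Stage 4: a course of action (direct or indirect) must be chosen.
    return knows_what_to_do

# A lone bystander who notices a clear emergency and knows what to do:
print(will_help(True, True, True, True))   # True
# Many bystanders present: responsibility is diffused at stage 3:
print(will_help(True, True, False, True))  # False
```

The sequential structure makes the key point visible: helping requires passing every stage, so anything that blocks a single stage (such as the presence of other bystanders) is enough to prevent intervention.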

Group Cohesiveness, Attitudes and Prejudice

Group Cohesiveness

A group consists of two or more individuals sharing a social relationship. The relationship can be based upon any shared characteristic (e.g., age, sex, grade level, etc.) or interest (e.g., sports, academics, politics, etc.). A Nukak band can be considered a nomadic group consisting of a few families that live and move together. Consider what is likely to happen during the stages of the bystander apathy flowchart if the person requiring assistance is a relative or friend. In the first stage, the relative will almost certainly be noticed, no matter how many other bystanders there are. In the second stage, the event will almost certainly be interpreted as requiring assistance. In the remaining stages, the person will almost certainly feel responsible and take direct action.

The prediction that one is more likely to help a fellow group member than a stranger probably does not surprise you. What may be surprising is how easy it is to establish group cohesiveness, a meaningful connection with another individual or several individuals. For example, one is more likely to help an injured person if that person is wearing the football jersey of a shared favorite team (Levine, Prosser, Evans, & Reicher, 2005). In an experiment, college students were divided into low- and high-cohesive groups of two or four individuals. Cohesion was established by simply having the students discuss their likes and dislikes with respect to school and other activities. As expected, in the low-cohesiveness conditions, there was a diffusion of responsibility effect whereby subjects were more likely to assist another student when no one else was available. However, in the high-cohesiveness conditions, it did not matter if others were available to help; the subject felt personally responsible simply on the basis of the prior conversations (Rutkowski, Gruder, & Romer, 1983).

Interestingly, the opposite of the diffusion of responsibility effect occurs with friends. Increasing the number of bystanders increases coming to the aid of friends, in comparison to strangers (Levine & Crowther, 2008). Tragically, the reverse can also be true. Increasing the number of bystanders can increase the likelihood of inflicting harm on members of a defined group. This brings us back to Milgram’s original questions regarding the Holocaust. How could human beings inflict such pain and suffering on others? Under what conditions do people passively display obedience to authority figures commanding that they behave cruelly? To answer these questions we must understand the formation of attitudes, stereotypes, and prejudice.

Attitudes, Stereotypes, and Prejudice

You’ve got to be taught
To hate and fear,
You’ve got to be taught
From year to year,
It’s got to be drummed
In your dear little ear
You’ve got to be carefully taught.

You’ve got to be taught to be afraid
Of people whose eyes are oddly made,
And people whose skin is a diff’rent shade,
You’ve got to be carefully taught.

You’ve got to be taught before it’s too late,
Before you are six or seven or eight,
To hate all the people your relatives hate,
You’ve got to be carefully taught!

South Pacific, by Rodgers & Hammerstein

Rodgers and Hammerstein set to music how the classical conditioning principles described in Chapter 5, which help us understand emotional responding and the acquisition of word meaning, can also be applied to the formation of prejudice and stereotypes. If a child’s parents pair emotionally toned words (e.g., immoral, dirty, lazy, etc.) with members of a particular race or ethnic group, the child can learn to fear and/or dislike members of that group. Razran (1938, 1940) demonstrated that ratings of political slogans could be affected in opposite directions by pairing them with either food or noxious odors. Similarly, Staats and Staats (1958) showed that attitudes toward national names (e.g., Dutch, Swedish) or even personal names (e.g., Tom, Bill) could be influenced by pairing them with positively or negatively charged words. Scapegoating is a particularly pernicious form of stereotyping. It involves selecting an individual or group (e.g., a sex, race, ethnicity, nationality, etc.) for negative treatment. Often this individual or group is inaccurately blamed for unfortunate events or circumstances (e.g., loss of jobs, income inequality, etc.). It is important to recognize the potency of these procedures, since they are so frequently used in an attempt to influence your behavior. For example, advertisers pair their products with attractive images (see Figure 5.3) and political candidates frequently “dress themselves in the flag” and “sling mud” at opposing candidates.

These examples are attempts to affect attitudes toward their products and candidates. An attitude consists of one’s emotional, cognitive, and behavioral reactions to a person, place, object, or event (Allport, 1935; Rosenberg & Hovland, 1960). Classical conditioning can account for two of the three components of attitudes (including discriminatory attitudes): the affective (prejudice) and cognitive (stereotype) components. The behavioral component is the target of the advertiser and candidate. Their objective is to convince you to purchase the product or vote for the candidate. The behavioral component, unfortunately, is also the target of the child’s relatives in the song. The objective is to have the child discriminate against an out-group (Allport, 1954; Duckitt, 1994; Whitley & Kite, 2010). Fortunately, at least in this case, there is an extensive literature indicating that the emotional and cognitive components of attitudes are not necessarily predictive of overt behavior (cf. Rosenberg & Hovland, 1960; Wicker, 1969). The likelihood of behaving in a manner consistent with one’s beliefs is influenced by the following: comparative strength of each belief, perceived consistency with social norms, perceived ability to carry out the behavior, and motivation for complying (Ajzen, 2002).

Muzafer Sherif believed that prejudices and stereotypes were especially likely to develop when there was competition between groups for scarce resources. This position has become known as Realistic Conflict Theory. He conducted the Robbers Cave Experiment (Sherif, Harvey, White, Hood, & Sherif, 1961), a classic demonstration of conflict and cooperation between experimentally established in- and out-groups. Each group consisted of eleven randomly assigned, normal, well-adjusted fifth-grade boys attending a summer camp. The groups were randomly assigned to two cabins at different locations and were not initially aware of each other’s existence.

During the first of the three stages of the research, the campers engaged in activities designed to foster group identity and camaraderie such as hiking, swimming, and a treasure hunt with a monetary prize which they could spend together. The groups named themselves the “Eagles” and the “Rattlers” and developed their own behavioral norms and leadership structures.

In the second stage of the research, a tournament of competitive games including baseball, tug-of-war, and touch football was scheduled. The winners would receive a trophy and individual prizes. As soon as the games began, the teams started calling each other names. This escalated into flag burning and dormitory raiding. A fight was on the verge of breaking out when the counselors (actually members of the research team) stepped in and broke it up.

During the third stage of the research, two different approaches were implemented to try to reduce hard feelings and promote cooperation between the groups. The first approach, described as “mere contact”, involved having the groups attend meals and movies together. Other than a few “food fights”, there was practically no interaction between the groups. The group members continued to stick together. The second approach introduced superordinate goals, tasks affecting the members of both groups and requiring their cooperation. In one instance, they needed to determine if a water tank serving the entire campsite was damaged and if the faucet needed to be repaired or replaced. Working together to address common concerns in this way succeeded in breaking down barriers between the two groups.

Figure 10.4 shows the changes in the friendship patterns occurring between the end of the second and third stages of the study. Especially for the Rattlers, there was a substantial increase in the percentage of Eagle friends in comparison to friends from their own cabin. The same pattern occurred for the Eagles but was not as pronounced. Thus, engaging in superordinate tasks requiring cooperation between competing groups appears to be an effective procedure for breaking down stereotypes and enhancing cooperation. More recent research has confirmed the effectiveness of creating an environment of forced interdependence in reducing prejudice (Fiske, 2000).


Figure 10.4 The Robbers Cave Experiment.

Other procedures, besides creating interdependence through superordinate tasks, have been found to reduce stereotyping and prejudicial behavior. A related strategy is to have groups try to define their boundaries in a more inclusive manner, for example by describing themselves as sharing objectives and being on the same team (Dovidio, Kawakami, & Gaertner, 2000). Having pairs of individuals disclose facts about themselves reduces prejudice toward out-groups (Ensari & Miller, 2002). Similarly, practice in assuming the perspective of others can reduce stereotypes and prejudice (Galinsky & Moskowitz, 2000). For example, imagine what it is like being told that “people like you don’t live in this neighborhood.”

The fundamental attribution error underlies many stereotypes. That is, an out-group member’s failings are attributed to personal dispositional factors (e.g., the person is lazy, stupid, etc.) whereas in-group members’ failings are attributed to situational factors (e.g., it is a hot day, the problem is difficult, etc.). In a procedure designed to counteract this tendency, White adults were taught to consider situational explanations for negative stereotypical Black behaviors (Stewart, Latu, Kawakami, & Myers, 2010). This procedure was found to reduce racial stereotyping in comparison to control subjects not receiving the training.

Cognitive Dissonance Theory

Imagine how members of the “Eagles” and “Rattlers” felt after working together to achieve common goals. They probably started to develop beliefs about members of the other group which were inconsistent with what they previously believed. The conflict felt when holding contradictory beliefs or when there is an apparent discrepancy between one’s beliefs and behavior was described by Leon Festinger (1957) as cognitive dissonance. An example of the former might be, “I thought all Rattlers were jerks but this guy seems nice.” An example of the latter might have occurred if a member of the Rattlers engaged in an enjoyable conversation with a member of the “evil Eagles.” Festinger believed that cognitive dissonance was aversive and would motivate individuals to attain consistency between their beliefs and behavior. This could be achieved by changing either a belief or a behavior. For example, the Eagle might conclude “some Rattlers are nice” and the Rattler might conclude “not all Eagles are evil.”

In a test of cognitive dissonance theory (Festinger & Carlsmith, 1959), college students were asked to repeatedly perform a boring task for an hour (e.g., turning pegs a quarter turn at a time). After finishing, the subjects were asked to do a favor for the experimenter by telling another subject (who was actually part of the experiment) that the task was enjoyable. Subjects were randomly divided into three groups: one was paid $1 (currently approximately $10) to lie; the second was paid $20 ($200) to lie; and a control condition was not requested to lie. Try to place yourself in the situation of the subjects who received money for lying. You have complied with the request of the experimenter to say something you do not believe to be true. Would you feel differently about performing the boring task after receiving $1 for lying about it in comparison to $20? In the study, participants who received only $1 for lying reported enjoying the task more than subjects receiving $20. Why do you think this occurred?

According to cognitive dissonance theory, the subjects who were paid to lie should experience dissonance resulting from believing one thing (the task is boring) and saying another (it is enjoyable). Those receiving the smaller amount of money could not easily justify the lie by the payment they received. Subjects receiving the larger amount could readily attribute their lying to being paid. That is, the lower-paid group should experience a higher level of dissonance than the higher-paid group.

The way for the lower paid group to reduce the dissonance would be to change their belief about how much they enjoyed the task. That is, to conclude that they did not really lie since the task was enjoyable. The results were consistent with this cognitive dissonance analysis. At the end of the study, the lower paid group did in fact report liking the task more than the higher paid group or the control group not paid for lying.
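The insufficient-justification logic can be made concrete with simple arithmetic. The following is a hypothetical toy model, not part of Festinger’s theory: the function and all quantities are invented for illustration, treating residual dissonance as the belief-behavior conflict minus whatever external justification the payment provides.

```python
def residual_dissonance(conflict, external_justification):
    """Toy model: dissonance left over after external justification.

    All quantities are in arbitrary, invented units; this illustrates
    only the direction of the prediction, not a real measure.
    """
    return max(0.0, conflict - external_justification)

# Same belief-behavior conflict from lying about the boring task;
# only the payment (external justification) differs between groups.
low_pay = residual_dissonance(conflict=10.0, external_justification=1.0)    # $1 group
high_pay = residual_dissonance(conflict=10.0, external_justification=20.0)  # $20 group

# The $1 group is left with more dissonance to resolve, so they are
# the ones predicted to change their belief ("the task was enjoyable").
print(low_pay > high_pay)  # True
```

On this sketch, the $20 payment fully accounts for the lie, leaving nothing to resolve, while the $1 payment leaves most of the conflict intact, matching the observed result that only the lower-paid group shifted their reported enjoyment.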

The Pyramid of Hate

The results of social psychology research studying conformity, obedience to authority figures, the power of social roles, bystander apathy, group cohesiveness, prejudice, stereotyping, scapegoating, and cognitive dissonance enable us to address Milgram’s question of how we can understand the Holocaust. Figure 10.5 shows the Anti-Defamation League’s Pyramid of Hate.


Figure 10.5 Pyramid of Hate

The Pyramid portrays a stage theory in which individuals are first “carefully taught” prejudicial attitudes and stereotypes, very much consistent with the Rodgers and Hammerstein song. Then, depending on their social groups, one might escalate to overt acts of prejudice, including name-calling based on personal characteristics (e.g., race, ethnicity, sexual orientation, etc.), social avoidance, and bullying. The next stage involves discriminatory policies requiring systematic collaboration among members of a group. As indicated in the Pyramid, the groups could be as ubiquitous and respectable as businesses, real estate associations, and private schools. The linkage between where one lives and the quality of the education one receives is a particularly pernicious form of societal discrimination. The likelihood of a child going to college can be predicted from a zip code!

The two “highest” levels of the Pyramid involve violent acts by individuals or groups. It is at these levels that individuals might experience severe cognitive dissonance resulting from the discrepancy between their beliefs and behaviors. For example, “How can I be a religious, moral person if I participated in a violent act against someone?” Frequently, such dissonance is dissipated through dehumanization and scapegoating. For example, “those people are shiftless, lazy, and often criminals,” or “they are unpatriotic,” or “here illegally,” or “practice immoral acts,” and so on. In its most extreme form, such as the Holocaust, genocide is practiced against an entire ethnic group.

The Pyramid of Hate has been used as an educational tool by the Anti-Defamation League to try to prevent such atrocities from occurring. Educational materials for high-school students include an exercise “Have you ever …?” Students are asked to indicate whether or not they ever experienced or practiced different prejudicial or stereotyping activities such as being called a name or being the target of name-calling, etc. This is followed by discussion of the impact of prejudice on individuals and on society (Anti-Defamation League, 2003).

It seems possible to base out-group membership on practically any characteristic and then target an individual or group for discrimination. The day after the assassination of Dr. Martin Luther King in 1968, Jane Elliott, an Iowa schoolteacher, devised an exercise for her third-grade students (Peters, 1987). Most children in Iowa at that time had never seen a Black person. She asked them if they would like to participate in a lesson on what it feels like to be a person of color in the United States. They agreed and she divided them into two groups; those with blue eyes and those with brown eyes. Blue-eyed students were told they were superior and seated in the front rows of the class and brown-eyed students sat in the back rows. Blue-eyed children were instructed to only play with each other and to ignore brown-eyed children. The two groups were not permitted to drink from the same water fountains. Sure enough, similar to the results obtained by Zimbardo in the Stanford Prison Experiment, the students quickly adapted to their roles. Blue-eyed children behaved in a bossy and arrogant manner whereas brown-eyed children became passive and submissive.

There have been multiple examples of genocide resulting in the loss of millions of lives in the past century (e.g., Bosnia-Herzegovina, Cambodia, Darfur, and Rwanda). The social psychology research helps us explain how such inhumane behaviors can occur. We can describe the types of parenting practices and experiences likely to result in prejudice, stereotyping, scapegoating, obedience, bystander apathy, discrimination, and violent role playing. Every one of these acts has been demonstrated under controlled and realistic conditions. Every one of these acts is a component of genocide. Every one of these acts is taught. Every one of these acts can be prevented and discouraged.

Teaching Heroes

Heroes are those who can somehow resist the power of the situation and act out of noble motives, or behave in ways that do not demean others when they easily can.

Philip Zimbardo

Zimbardo (2007, pp. 2, 289) begins and ends The Lucifer Effect with a discussion of M. C. Escher’s fascinating reversible image (see video). At the end, what do you see? It will depend on what you perceive as the figure and what as the ground. If blue is the background, you will see white angels in the foreground. If white is the background, you will see blue devils. Zimbardo suggests we are all like Escher’s print; we all have it in us to be devils or angels. We all have the possibility of undergoing a reversal from one to the other.

The title of the last chapter of Zimbardo’s book (2007, pp. 444-489) is “Resisting Situational Influences and Celebrating Heroism”. It is in this chapter that he considers the implications of the knowledge we have acquired from social psychological research to address “our better angels” (Dickens, 1841). In the same way that Zimbardo (and Milgram) argued for rejecting the attribution of evil deeds to an evil disposition, he argued for rejecting the attribution of heroic deeds to a heroic disposition. Zimbardo supports a situational model, providing multiple examples of how the very same experiences that produce obedience to authority and conformity to anti-social roles can result in pro-social attitudes and behaviors. We can build a Pyramid of Love.

Chapter 9: Personality and Human Potential

Learning Objectives

  • Describe the Big Five personality dimensions
  • Give examples of direct and indirect genetic influences on personality
  • Describe how adaptive learning relates to socialization, culture, and personality development

Trait Theories of Personality

’cause you got personality,
Walk, personality
Talk, Personality
Smile, Personality
Charm, personality
Love, personality
And ’cause you’ve got
A great big heart

From the song Personality by Lloyd Price

As the Lloyd Price song describes, do you think you “got” (i.e., have) personality? How would you describe your personality? As a follow-up to the developmental psychology chapter and an introduction to the chapter on personality, it might be fun to engage in a bit of romantic fantasy and develop your priorities for a partner in life. Try to construct an ordered list of those personality qualities you consider most important for your partner. Research surveying the desirable qualities college students sought in a potential mate resulted in the following list: “kind and understanding,” “religious,” “exciting personality,” “creative and artistic,” “good housekeeper,” “intelligent,” “good earning capacity” (i.e., provider), “wants children,” “easygoing,” “good heredity,” “college graduate,” “physically attractive,” and “healthy” (Buss & Barnes, 1986).

Would you list similar qualities to those of the college students in the research study? Many of those qualities can be considered traits; an individual’s patterns of behavior (including thoughts and feelings) that are consistent across time and situations. Traits often refer to extreme points on a continuum; for example, kind vs. unkind (mean), intelligent vs. unintelligent (slow), easygoing vs. serious, exciting vs. unexciting (dull), creative vs. uncreative (rigid), and religious vs. irreligious in the list above. Other terms in the research study refer to extreme points on a continuum of physical characteristics (e.g., attractive vs. unattractive, healthy vs. unhealthy, good heredity vs. poor heredity) or to specific behavioral outcomes (e.g., college graduate vs. not, good provider vs. not, wants kids vs. not, good housekeeper vs. not). How many of these qualities would you include on your list? Do you agree with including each of them? What important qualities do you think were left out? Would you consider these qualities to be a description of his/her personality? Do you think your parents would have the same priorities as you?

In a study, college students and their parents were asked to rank these 13 different traits for desirability in a potential long-term life partner for themselves or their children (Perilloux, Fleischman, & Buss, 2011). The traits were scored such that higher values (from 1 to 13) indicated greater importance. The average rankings for sons, daughters, fathers, and mothers are listed in Figure 9.1. Before you look at the findings, you might find it interesting and informative to rank order these 13 traits based on your own priorities. Assign the score of 13 to your highest-priority characteristic and 1 to your least important. By ranking the items yourself, you can see how your priorities compare to those of the college students of your own sex, those of the opposite sex, and the fathers and mothers in the study.

Sons (Mean) Daughters (Mean) Fathers (Mean) Mothers (Mean)

Attractive (10.70) Kind (11.57) Kind (11.50) Kind (11.62)

Intelligent (10.40) Intelligent (10.40) Intelligent (10.56) Intelligent (10.36)

Kind (10.00) Personality (8.97) Healthy (9.22) Healthy (9.00)

Personality (8.88) Attractive (8.08) Provider (7.43) Easygoing (7.79)

Healthy (8.79) Healthy (7.97) Easygoing (7.29) College graduate (7.29)

Easygoing (8.63) Easygoing (7.59) College graduate (6.85) Provider (7.28)

Wants kids (5.94) Provider (7.42) Personality (6.75) Religious (6.64)

Creative (5.66) College graduate (6.99) Religious (6.52) Wants kids (6.40)

College graduate (5.46) Wants kids (6.09) Attractive (5.91) Personality (6.07)

Heredity (5.16) Creative (4.64) Heredity (5.31) Attractive (5.66)

Provider (4.25) Heredity (4.58) Wants kids (5.09) Heredity (5.24)

Housekeeper (3.98) Religious (3.93) Creative (4.99) Creative (4.37)

Religious (3.48) Housekeeper (2.81) Housekeeper (3.58) Housekeeper (3.27)

Figure 9.1. Overall rankings of desirable traits for a mate by male and female college students and their parents (adapted from Perilloux, Fleischman, & Buss, 2011).

The items are listed with the highest priority (i.e., score closest to 13) at the top and lowest priority (i.e., score closest to 1) at the bottom. If you are a woman, did you rate kindness as most important, followed by intelligence, an “exciting” personality, attractiveness, health, and being easygoing? If you are a man, did you consider attractiveness most important, followed by intelligence, kindness, “exciting” personality, health, and being easygoing? Are you surprised or not by the fact that male and female college students included the same six items as most important? Are you surprised by the differences in their rankings of these six qualities? It is also informative to compare the means as well as the rankings. For example, even though an exciting personality is rated third by women and fourth by men, the means are almost the same. If you are seeing someone, would you be interested in how they rank the different traits? It might be fun, not to mention informative, to compare and discuss the results. If you are not seeing someone, would you be interested in the priorities of a potential date? How about sitting them down and giving them the list as an “ice breaker?” At least you can find out if they are easygoing! Good luck!

Are you at all surprised by the differences between parents and their children? Religiousness, health, earning capacity, and housekeeping seem a lot more important to parents than to college students. In contrast, attractiveness and an exciting personality were more important to the students. How do you think your parents would rank the traits if they were considering your potential partner in life? Let’s make you a pretend parent again. If you were ranking the items for your pretend child, would you score them the same as you did for yourself or would your rankings be similar to the parents in the study? Would things change if your pretend child was the other sex? Can you relate to where your parents may be coming from? Do you have a little brother and/or sister? How would you rank the different traits if you were choosing a life partner for them? Would the rankings be more similar to the ones you use for your children or for yourself? Do you remember the marshmallow test from Chapter 1? Choosing a partner in life may be the most important “marshmallow test” you ever take. I mentioned that you are more likely to purchase healthful foods if you shop from a list. Make sure you have the right list with you when you go shopping for a partner in life! I hope this exercise helps.

Personality as Temperaments

Speculation regarding the structure and causes of human personality dates back to the time of Hippocrates (460-370 BC). The Greek physician, best known for medicine’s Hippocratic Oath, proposed that our moods and behavior were influenced by the relative amounts of four bodily liquids (“humors”): blood, black bile, yellow bile, and phlegm. Later, Galen (AD 131–200) expanded upon Hippocrates, suggesting that excessive quantities of any of these liquids would lead to a specific, inherited temperament: “sanguine” (outgoing, impulsive, and creative) for blood, “melancholic” (shy, cautious, and moody) for black bile, “choleric” (aggressive, goal-directed, and ambitious) for yellow bile, and “phlegmatic” (kind, relaxed, and reliable) for phlegm (Kagan, 1998).

Figure 9.2 The Four Temperaments by Charles La Brun.

The idea expressed by Hippocrates and Galen, that human personality can be categorized into distinct types, survived through the centuries and served as the basis for trait theory. This 2,500-year history is an enormously long-lasting example of “the more things change, the more they stay the same.” Hippocrates and Galen proposed four basic temperaments. As you will see, decades of research on traits have sometimes substantially expanded this number and sometimes shrunk it; most recently, the field has settled on five (see below), just one more than the ancients proposed. Eventually, theorists even returned to considering innate differences in temperaments. We’ve come a long way in two-and-a-half millennia but do not appear to have moved too far!

Hippocrates and Galen thought that four different human temperaments (i.e., personalities) resulted from the relative concentrations of blood, black bile, yellow bile, and phlegm in the body. Today, this might strike you as absurd, and indeed, we now know that it is incorrect. Hippocrates and Galen’s belief might be considered a theory of personality. A theory is a scientific schema used to provide a cohesive network of relationships between independent and dependent variables. As such, theories attempt to explain as much of nature as possible (be it physical, chemical, biological, or psychological) with as few assumptions as possible. Hippocrates and Galen proposed a very simple theory of personality based on the four humors. The most important characteristic of a theory is that it is testable. It must make specific predictions concerning the relationships between independent and dependent variables. In that regard, Hippocrates and Galen proposed a useful theory since it was testable. We now have the necessary tools and technologies to measure the precise quantities of the four humors and to test whether they relate to tendencies to behave in outgoing or shy, aggressive or passive, or other ways. In terms of theory construction, it is better to be wrong than vague. Knowledge and understanding are advanced when we know that something is not true. Knowledge and understanding are not advanced when a theory is so vaguely stated that it can account for any result. We will describe three different theoretical approaches to the understanding of human personality: trait theory, Freud’s psychodynamic theory, and learning theory.

Personality as Traits

The psychological study of personality can be said to have originated with Gordon Allport’s dictionary search, which resulted in a listing of 4,500 trait-related words (Allport & Odbert, 1936), and his publication of Personality: A psychological interpretation (Allport, 1937). Raymond Cattell (1943; 1946) shrunk Allport’s list to 171 items by combining synonyms (e.g., extroverted, outgoing, boisterous, etc.) and eliminating extremely rare examples (e.g., obscurant and obsequious among the “obs”s, etc.). He then scored a large sample of adults on these 171 items. Cattell used the statistical technique of factor analysis to determine the extent to which scores on the different items overlapped. Based on the results, Cattell was able to reduce the number of primary human personality factors to sixteen. Figure 9.3 lists the 16 primary factors along with descriptions of the types of behaviors characteristic of low and high scorers.

Warmth (A)
Low: Impersonal, distant, cool, reserved, detached, formal, aloof
High: Warm, outgoing, attentive to others, kindly, easy-going, participating, likes people

Reasoning (B)
Low: Concrete thinking, lower general mental capacity, less intelligent, unable to handle abstract problems
High: Abstract-thinking, more intelligent, bright, higher general mental capacity, fast learner

Emotional Stability (C)
Low: Reactive emotionally, changeable, affected by feelings, emotionally less stable, easily upset
High: Emotionally stable, adaptive, mature, faces reality calmly

Dominance (E)
Low: Deferential, cooperative, avoids conflict, submissive, humble, obedient, easily led, docile, accommodating
High: Dominant, forceful, assertive, aggressive, competitive, stubborn, bossy

Liveliness (F)
Low: Serious, restrained, prudent, taciturn, introspective, silent
High: Lively, animated, spontaneous, enthusiastic, happy-go-lucky, cheerful, expressive, impulsive

Rule-Consciousness (G)
Low: Expedient, nonconforming, disregards rules, self-indulgent
High: Rule-conscious, dutiful, conscientious, conforming, moralistic, staid, rule-bound

Social Boldness (H)
Low: Shy, threat-sensitive, timid, hesitant, intimidated
High: Socially bold, venturesome, thick-skinned, uninhibited

Sensitivity (I)
Low: Utilitarian, objective, unsentimental, tough-minded, self-reliant, no-nonsense, rough
High: Sensitive, aesthetic, sentimental, tender-minded, intuitive, refined

Vigilance (L)
Low: Trusting, unsuspecting, accepting, unconditional, easy
High: Vigilant, suspicious, skeptical, distrustful, oppositional

Abstractedness (M)
Low: Grounded, practical, prosaic, solution-oriented, steady, conventional
High: Abstract, imaginative, absent-minded, impractical, absorbed in ideas

Privateness (N)
Low: Forthright, genuine, artless, open, guileless, naive, unpretentious, involved
High: Private, discreet, nondisclosing, shrewd, polished, worldly, astute, diplomatic

Apprehension (O)
Low: Self-assured, unworried, complacent, secure, free of guilt, confident, self-satisfied
High: Apprehensive, self-doubting, worried, guilt-prone, insecure, worrying, self-blaming

Openness to Change (Q1)
Low: Traditional, attached to the familiar, conservative, respecting traditional ideas
High: Open to change, experimental, liberal, analytical, critical, free-thinking, flexible

Self-Reliance (Q2)
Low: Group-oriented, affiliative, a joiner and follower, dependent
High: Self-reliant, solitary, resourceful, individualistic, self-sufficient

Perfectionism (Q3)
Low: Tolerates disorder, unexacting, flexible, undisciplined, lax, self-conflicted, impulsive, careless of social rules, uncontrolled
High: Perfectionistic, organized, compulsive, self-disciplined, socially precise, exacting willpower, controlled, self-sentimental

Tension (Q4)
Low: Relaxed, placid, tranquil, torpid, patient, composed, low drive
High: Tense, high-energy, impatient, driven, frustrated, overwrought, time-driven

Figure 9.3 Cattell’s 16 primary personality factors.
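Cattell’s reduction of 171 items to 16 factors rests on the structure of the correlation matrix: items driven by the same underlying factor correlate highly with one another. The following sketch uses invented data and invented variable names (two latent factors, “warmth” and “dominance,” driving six questionnaire items) and computes the eigenvalues of the correlation matrix, the core computation behind principal-factor extraction, to show how correlated items reveal a smaller number of latent factors.

```python
import numpy as np

# Hypothetical illustration: 6 questionnaire items that secretly reflect
# 2 underlying factors. All names and numbers are invented, not Cattell's data.
rng = np.random.default_rng(0)
n = 500
warmth = rng.normal(size=n)
dominance = rng.normal(size=n)

def noise():
    return rng.normal(scale=0.5, size=n)

items = np.column_stack([
    warmth + noise(), warmth + noise(), warmth + noise(),           # warmth items
    dominance + noise(), dominance + noise(), dominance + noise(),  # dominance items
])

# Factor analysis examines the correlation matrix of the items.
R = np.corrcoef(items, rowvar=False)

# Eigen-decomposition of R: the number of large eigenvalues indicates
# how many latent factors the items share.
eigenvalues = np.linalg.eigvalsh(R)[::-1]  # sorted, largest first
print(np.round(eigenvalues, 2))  # two large eigenvalues, four small ones
```

With two latent factors driving six items, only two eigenvalues are large; counting the large eigenvalues is one common rule of thumb for how many factors to retain.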

Cattell went one step further, conducting a factor analysis on these 16 primary traits, and identified “the Big Five” global, secondary trait dimensions: introversion/extraversion; low anxiety/high anxiety; receptivity (i.e., open-mindedness)/tough-mindedness (i.e., closed-mindedness); accommodation (i.e., dependence)/independence; and lack of restraint (i.e., impulsivity)/self-control. Figure 9.4 shows how the “Big Five” relate to the 16 primary factors.

Introversion / Extraversion: A (Reserved/Warm), F (Serious/Lively), H (Shy/Bold), N (Private/Forthright), Q2 (Self-Reliant/Group-Oriented)

Low Anxiety / High Anxiety: C (Emotionally Stable/Reactive), L (Trusting/Vigilant), O (Self-Assured/Apprehensive), Q4 (Relaxed/Tense)

Receptivity / Tough-Mindedness: A (Warm/Reserved), I (Sensitive/Unsentimental), M (Abstracted/Practical), Q1 (Open-to-Change/Traditional)

Accommodation / Independence: E (Deferential/Dominant), H (Shy/Bold), L (Trusting/Vigilant), Q1 (Traditional/Open-to-Change)

Lack of Restraint / Self-Control: F (Serious/Lively), G (Expedient/Rule-Conscious), M (Abstracted/Practical), Q3 (Tolerates Disorder/Perfectionistic)

(Factor B, Reasoning/Problem-Solving, is an ability measure that does not load on any of the five global traits.)

Figure 9.4 Relationship between Cattell’s “Big Five” secondary global traits and the 16 primary factors.

In 1949, Cattell published the very popular 16 Personality Factor Questionnaire (16PF), currently in its fifth edition (Cattell, R. B., Cattell, A. K., & Cattell, H. E. P., 1993). The latest edition includes 185 multiple-choice items designed to assess the 16 primary and five global traits. The items ask about the occurrence of concrete, everyday behaviors, rather than asking the person to simply rate themselves on the traits. Examples include such items as:

  • When I find myself in a boring situation, I usually “tune out” and daydream about other things. True/False.
  • When a bit of tact and convincing is needed to get people moving, I’m usually the one who does it. True/False.
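Mechanically, scoring an inventory of this kind amounts to counting keyed answers per scale: each item belongs to one scale and has a keyed direction, and a person’s scale score is the number of answers matching the key. Here is a minimal sketch with invented item ids, scales, and keys (not actual 16PF content):

```python
from collections import defaultdict

# Hypothetical item key (invented for illustration): item id -> (scale, keyed answer)
KEY = {
    "q1": ("Abstractedness (M)", True),
    "q2": ("Social Boldness (H)", True),
    "q3": ("Social Boldness (H)", False),
}

def score(responses):
    """Count answers matching the keyed direction, grouped by scale."""
    totals = defaultdict(int)
    for item, answer in responses.items():
        scale, keyed = KEY[item]
        totals[scale] += int(answer == keyed)
    return dict(totals)

print(score({"q1": True, "q2": True, "q3": True}))
# -> {'Abstractedness (M)': 1, 'Social Boldness (H)': 1}
```

Note that answering True to q3 earns no point because its keyed direction is False; forced-choice inventories key items in both directions to discourage response sets.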

The Big Five

Figure 9.5 lists the characteristics of low and high scorers on the Big Five personality dimensions. How do you think you would score on these five dimensions?


Figure 9.5 The Big Five personality dimensions.

The British psychologist, Hans Eysenck, felt that the combination of introversion/extraversion and neuroticism/emotional stability was adequate to capture the personality of normal (i.e., non-psychotic) individuals. Do you agree? Do you think the “Big Five” factors are adequate to describe your personality or do you need all 16 items? Which do you prefer as a tool for assessing and describing your personality: the items listed in the Perilloux et al. (2011) study, the 16 primary traits, the Big Five, or Eysenck’s two dimensions? Returning to thinking about the qualities you would like your life partner to possess, which of these approaches would you use? How would you like him/her to score on each of the items?

Contemporary personality theorists have concluded that Eysenck’s two (or three) dimensions are inadequate and Cattell’s 16 factors unnecessarily overlapping. The Big Five has consensually emerged as adequate to describe the basic components of human personality (Cattell, 1996; Digman, 1989; Goldberg, 1981; McCrae & Costa, 1987; McCrae & John, 1992). McCrae and his colleagues translated a Big Five self-report inventory into 29 languages and administered it to almost 18,000 individuals belonging to 56 nations. The data support the applicability of the Big Five personality dimensions across these diverse cultures (Costa, Terracciano, & McCrae, 2001; McCrae, 2002; McCrae & Allik, 2002; McCrae, Terracciano, et al., 2005). Interestingly, consistent sex differences have been found across 55 cultures (Schmitt, 2005; Schmitt, Realo, Voracek, & Allik, 2008). In most countries, women were found to score higher than men in neuroticism, extraversion, agreeableness, and conscientiousness. Sex differences were greater in technologically advanced, egalitarian cultures than in more primitive, paternalistic cultures. It is under more egalitarian educational and economic conditions that biology does not constitute destiny for women and they are better able to fulfill their potential by expressing their individuality.

Personality and Nature/Nurture

Freud’s Psychodynamic Theory of Personality: Nurture against Nature


Figure 9.6 Sigmund Freud

Arguably, one of the most influential and controversial figures in the history of psychology is the Viennese physician, Sigmund Freud (Figure 9.6). Certain facts must be kept in mind when evaluating Freud’s contributions to psychology. First, Freud’s model of personality predates Allport’s introduction of trait theory. Second, Freud did not contribute to or interact with the early schools of psychology: structuralism, functionalism, Gestalt psychology, and behaviorism. Third, he was a practitioner, not a scientist. He was a physician, who would today be considered a psychiatrist, treating individuals suffering from psychological and/or psychiatric complaints. An astute observer of human behavior, he proposed general models of human personality and abnormal psychology based on the case history material obtained from a small sample of non-representative individuals. Based on his clinical observations, Freud concluded that humans are unaware of most of the factors that influence their thoughts, feelings, and behavior. This perceptive and important assumption is often portrayed as an iceberg (see Figure 9.7).


Figure 9.7 Freud’s theory of personality.

Freud distinguished between those things of which we are currently aware (i.e., the contents of short-term memory); those things which we can voluntarily retrieve (i.e., the contents of long-term memory); and those things we ordinarily cannot voluntarily retrieve. Freud felt that thoughts related to our essential sexual and aggressive natures create conflict and must ordinarily be prevented from reaching consciousness. He postulated the existence of different defense mechanisms to accomplish this objective. The assumption of unconscious psychic determinism is perhaps Freud’s greatest lasting contribution. The assumption argues for the appropriate use of the scientific method to study psychology. There are presumed causes which may be difficult to study, since they are “below the surface” of consciousness. However, with the same ingenuity demonstrated by the other natural sciences, there is the potential to explain the totality of human experience. As we will see in Chapter 11, Freud’s approach to the assessment and treatment of human disorders is essentially the attempt to bring unconscious material to consciousness.

Perhaps best known is the three-part psychic apparatus Freud (1920; 1923) created to “explain” human personality and behavior. He described the human condition as a “tug of war” between one’s genetically determined and entirely unconscious drives (called the id) and one’s mostly unconscious conscience (the superego). The mostly conscious ego was the component of the psychic apparatus that must reconcile these opposing objectives and the demands of reality. According to Freud, “The poor ego has a still harder time of it; it has to serve three harsh masters, and it has to do its best to reconcile the claims and demands of all three… The three tyrants are the external world, the superego, and the id” (Freud, 1933, lecture 31).

Although Freud never attempted to do so, it is possible to relate his three-part psychic apparatus to the basic psychological content areas and research findings of psychology. In fairness, we now know a lot more than we did at the time Freud proposed psychodynamic theory. Often, not only was he perceptive, but in many instances he was prescient (e.g., proposing that we are often not aware of factors influencing our behavior; describing self-control problems stemming from the power of short-term as opposed to long-term consequences; making distinctions between short- and long-term memory; emphasizing the important role of observational learning and language in moral development, etc.).

The advantage of relating Freudian theory to the basic content areas of psychology is that by doing so, one avoids the common tendencies of reification and pseudo-explanation. Reification refers to describing hypothetical structures as though they were physical structures. Freud intended the id, superego, and ego to represent related processes taking place at different locations in the body. That is, they were intended to be functional units (akin to the digestive system), not anatomical units (e.g., the stomach). Pseudo-explanation is circular. It is tempting and convenient to attribute one’s failure to resist temptation to a defective id: “How do you know your id is defective?” “I gave in to temptation.” “Why did you give in to temptation?” “My id is defective.”

Freud’s id, operating according to the pleasure principle, is a variation on the Greek hedonic model of motivation described in Chapter 4. Postulating a pleasure principle is one thing; providing details regarding how it works is eventually required. Chapter 4 describes findings related to deprivation of appetitive substances, the physiology of the sex drive, the need for sleep, and such human needs as curiosity, achievement, and self-actualization. These findings bring us much closer to understanding the effects of many motivational independent variables (e.g., amount of food deprivation, intensity of shock, etc.) on human thought, emotion, and behavior.

Freud believed the superego was formed through identification with one’s parents. The section of Chapter 6 describing observational learning directly relates to Freud’s process of identification. The two most powerful classes of variables influencing the likelihood of attending to a model are perceived similarity to self and reinforcement value. Throughout infancy, practically all of the people one comes into contact with live in one’s home and are probably family members (most often one’s parents and siblings). The infant has limited opportunities to meet children outside the home that may be perceived as being similar. Naturally, the caregivers (usually parents) provide food and comfort, controlling the most powerful reinforcers in the infant’s world. As the child grows up, plays with other children, and goes to school, peers become increasingly influential. The Moral Development section of the previous chapter directly relates to Freud’s superego. We saw how Baumrind’s different parental styles could impact the development of the concepts of right and wrong throughout childhood, perhaps producing the behaviors characterizing Kohlberg’s pre-conventional, conventional, and post-conventional stages.

Whereas Freud described the id as operating according to the hedonic pleasure principle, the ego was described as operating according to the reality principle. Much of the material in the Direct Learning (Chapter 5), Indirect Learning (Chapter 6), and Cognition (Chapter 7) chapters relates to ego functioning. The ego’s attempt to satisfy three harsh masters (id, superego, and reality) may be described as an exercise in problem-solving. One’s experiences interacting with the world, observing others, and acquiring information through the use of language result in a personal understanding of how the world works (i.e., reality). This understanding may be applied to situations in which one must determine and assess the possible short- and long-term consequences of following different courses of action.

Freudian theory and nature/nurture

Part of the reason for the influence of Freudian theory is his fascinating and exciting description of the human condition as a conscious struggle (conducted by the ego) between our inherited drives and needs (the id) and the moral demands placed on us by civilized society (the superego). Freud was a gifted writer with an authoritative and engaging style. His portrayal of the human condition not only influenced the disciplines of psychiatry and psychology, but immediately captured the imagination of scholars in other disciplines, writers, and the general public. Freud describes the human condition as requiring our conscious (and learned) moral codes and understanding of reality to counteract our unconscious (and unlearned) animalistic and impulsive drives. The idea of primitive urges creating difficulties coping with the modern world is plausible and compelling. It is also consistent with themes of this book regarding our having biologically evolved to survive under naturalistic conditions but currently living in a human constructed world.

Freud’s three-part psychic apparatus adheres to a nurture (learning) against nature (heredity) model of personality. The discipline of psychology assumes a complementary relationship between nature and nurture. Different research strategies have been developed for disentangling and weighing the influences of nature and nurture in the development of human personality. It has always been possible to systematically study experiential effects on behavior in a correlational manner. For example, one can ask individuals about the parental styles they were exposed to as children and assess the relationships with performance on personality tests, moral dilemmas, interpersonal interactions, etc. Psychology has a rich history of studying experimental manipulations of experience on perception, learning, and cognition, as documented in previous chapters. We live at a time when it is becoming possible to experimentally manipulate specific genes in order to determine their influences on specific behaviors. Previously, it was necessary to resort to much cruder and less powerful correlational methods to try to determine the effects of heredity. Correlational research strategies include cross-cultural research, twin studies, and adoption studies.

Cross-Cultural Research

How can we understand the variations and consistencies in human personality? Cross-cultural research provides a type of “natural experiment” relating to such nature/nurture issues. If one sees similarities across cultures, it is likely that hereditary factors are involved since experiential factors are so varied. Similarly, if one sees substantial cultural differences, it is likely that experiential variables are responsible. The contemporary field of evolutionary psychology assumes that we can understand cultural similarities as resulting from psychological adaptations conferring survival and/or reproductive advantages (Buss, 2011). We have seen previous attempts to understand surprising behavioral findings with other animals by considering possible survival advantages. For example, we attributed the counter-intuitive finding that animals become more, rather than less active when deprived, to the fact that activity increases the likelihood of discovering the needed substance. Some have attempted to extend this same type of reasoning to the understanding of complex human behavior.

As an example, all known human cultures speak. Thus, it appears that genetics plays an important role in speech. In Chapter 1, we saw how specific DNA is dedicated to the formation of words in the mouth, and in Chapter 2, we described how considerable “brain space” is dedicated to the parts of the body associated with speech. Despite the fact that all human cultures speak, they speak different languages. In Chapter 5, we described the role of classical conditioning procedures in establishing word meaning, and in Chapter 6, we saw how learning principles apply to the acquisition of speaking, reading, and writing. From the perspective of evolutionary psychology, the ability to speak increases the likelihood of human survival and reproduction. However, the specific words and grammar that an individual acquires reflect the adaptive learning requirements of the immediate social environment.

Evolutionary psychology has attempted to explain the sex differences found in the previously described research assessing the differences between males and females in the qualities they desire in a mate. Buss (1989) reasoned that for thousands of years, men and women faced different adaptive needs. Women bore children and were primarily responsible for nurturing and rearing them. They required the time and resources necessary to achieve these objectives. Men needed to impregnate women in order to enable survival of the species. They were concerned with paternity uncertainty and the possibility of providing time and resources to another man’s children. According to evolutionary psychology, it is these different adaptive needs that resulted in women preferring older men possessing higher social status along with the ability to protect them and provide necessary resources. Men are attracted to young, attractive mates likely to possess the ability to conceive, and strength to survive multiple pregnancies and rear several children. Consistent with these interpretations, research findings indicate different types of jealousy in females and males. Women were found to be more concerned with emotional than sexual infidelity whereas the reverse was true for men (Pietrzak, Laird, Stevens, & Thompson, 2002). A man becoming emotionally attached to another woman could result in loss of social status and resources. A woman’s sexual activity with another man could result in pregnancy and dedicating resources to another man’s child.

Twin Studies

Twins have a special claim upon our attention; it is, that their history affords means of distinguishing between the effects of tendencies received at birth, and those that were imposed by the special circumstances of their after lives.

Sir Francis Galton

Sir Francis Galton was an extremely influential British scientist and statistician. He developed correlational research and statistical procedures and coined the term “nature versus nurture” (Galton, 1883). In the absence of the ability to manipulate heredity, Galton suggested that the next best thing was to study twins. By observing the behavior of identical and fraternal twins raised in similar or different environments, it was possible to make inferences about the possible roles of heredity and experience in human individual differences.

Figure 9.8 portrays the results for such a study. Correlations are shown for the IQ scores of individuals having different family relationships, some reared together and some reared apart. Since identical twins reared together share both their genes and their environments, they would be expected to be most similar, and the results bear this out. The lower correlation for fraternal than for identical twins reared together indicates that heredity also exerts an influence. As indicated previously, intelligent behavior requires that genetic potential be realized through appropriate experience. In the absence of either the potential or the appropriate learning experience, the behavior will not appear.


Figure 9.8 Correlation between family relationship and IQ.
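The logic of comparing twin correlations can be made concrete with a short sketch. Behavioral geneticists often use Falconer’s formula, which estimates broad heritability as twice the difference between identical (MZ) and fraternal (DZ) twin correlations. The correlation values below are hypothetical placeholders, not the figures from the study in Figure 9.8, and the formula rests on simplifying assumptions (e.g., equal environments for both twin types) that the text does not discuss.

```python
# Rough heritability estimate from twin correlations (Falconer's formula).
# Correlation values are illustrative placeholders, not data from Figure 9.8.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Broad heritability estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

def shared_environment(r_mz: float, r_dz: float) -> float:
    """Shared-environment estimate: c^2 = 2 * r_DZ - r_MZ."""
    return 2 * r_dz - r_mz

# Hypothetical IQ correlations for twins reared together.
r_mz_together = 0.86
r_dz_together = 0.60

h2 = falconer_heritability(r_mz_together, r_dz_together)
c2 = shared_environment(r_mz_together, r_dz_together)
print(f"heritability estimate h^2 = {h2:.2f}")  # prints 0.52
print(f"shared environment   c^2 = {c2:.2f}")   # prints 0.34
```

The key intuition matches the figure: the larger the gap between the identical-twin and fraternal-twin correlations, the larger the inferred genetic contribution.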

Nature/Nurture and the Big Five

Based on McCrae’s previously cited cross-cultural research findings, there is every reason to believe that members of the Nukak tribe differ in regard to the Big Five personality dimensions. Cross-cultural research indicated that the Big Five personality factors are prominent all over the globe. Whenever one observes such similarities across diverse cultural conditions, it suggests a genetic influence. Correlations are stronger for identical (monozygotic) than fraternal (dizygotic) twins on each of the Big Five personality traits, suggesting a genetic influence (Jang, Livesley, & Vernon, 1996). Buss (1995) has suggested that the variety of interpersonal relationships created by combinations of the Big Five personality traits is adaptive from an evolutionary perspective. The potential for such variability in social groups permits adapting to diverse cultural norms and contingencies of reinforcement and punishment. Thus, even though individual personalities can be described in terms of the same five traits for every culture, there will be overall cultural differences relating to specific adaptive demands. For example, extroversion may be more characteristic of individualistic cultures such as the United States, and introversion more characteristic of more collectivist cultures, such as China.

Attempts have been made using advances in MRI technology to determine if biological factors underlie the Big Five personality factors. Differences in the volume of brain regions were found for four of the five factors: conscientiousness, extroversion, agreeableness, and neuroticism. Conscientiousness was related to a brain region associated with impulse control, planning, and problem-solving. Extroversion was associated with a brain region related to adaptive learning requiring the determination of the likelihood of reward and punishment in different circumstances. Agreeableness was associated with regions involved in processing information and drawing inferences regarding the behavior of others. These regions appear related to such desirable qualities as empathy and altruism. Neuroticism was associated with brain regions involved in self-evaluation and the experience of negative emotions, often in dangerous and punishing situations (DeYoung, Hirsh, Shane, Papademetris, Rajeevan, & Gray, 2010).

Contemporary Temperament Theory

Do you have or know a pet dog? Do you think dogs have different personalities or temperaments? As mentioned previously, contemporary psychologists have returned to considering the possibility that humans display different temperaments similar to the Big Five personality factors very early in life (Bates & Wachs, 1994; Kagan, 1994; Komsi, Räikkönen, Pesonen, Heinonen, Keskivaara, Järvenpää, et al., 2006; Rowe & Plomin, 1977). This recent interest appears to have been stimulated by the suggestion that other animals display temperaments similar to humans. Behaviors suggesting fearfulness, affiliativeness, aggressiveness, and impulsivity can be observed in mice, cats, dogs, and chimpanzees, as well as humans (Diamond, 1957; Jonas & Gosling, 2005). Whenever one observes similar behaviors in humans and other animals, there is the likelihood that the behavior is transmitted genetically and has adaptive or reproductive value. Simple reflexes such as eye blinks and withdrawal responses are obvious examples. Other examples described previously include the tendency to become active when deprived of food or water, acquired taste aversion, and the tendency to be more influenced by short-term as opposed to long-term rewards. Temperament theory suggests that certain aspects of human personality are inherited and have evolutionary survival value.

Buss & Plomin (1975) expanded upon Diamond’s work, attempting to clarify and delineate the criteria for defining temperament. To heritability, they added the requirement that the behavior appear early in life and display continuity throughout development. Following is a list of additional, more recently suggested criteria for determining temperamental differences in children (Zentner & Bates, 2008).

  • Individual differences in normal behaviors pertaining to the domains of affect, activity, attention, and sensory sensitivity
  • Typically affect response intensities, latencies, durations, thresholds, and recovery times
  • Appearance in the first few years of life (partial appearance in infancy, full expression by preschool age)
  • Counterparts exist in primates as well as in certain social mammals (e.g., dogs)
  • Closely, if complexly, linked to biological mechanisms (e.g., neurochemical, neuroanatomical, genetic)
  • Relatively enduring and predictive of conceptually coherent outcomes (e.g., early inhibition predicting internalizing disorders, early difficultness predicting externalizing disorders)

Buss and Plomin (1975) initially concluded that emotionality (similar to Diamond’s fearfulness), activity (similar to aggressiveness), sociability (similar to affiliativeness), and impulsivity fulfilled their three criteria. Subsequently, they observed that impulsivity often does not appear until the child enters school, and reduced their list of basic temperaments to three. Jerome Kagan, one of the founders of American developmental psychology, conducted longitudinal research, assessing the same children over extended periods of time. He concluded that infants may be described more simply as having inhibited or uninhibited temperaments. Inhibited and uninhibited infants display the behaviors characteristic of the avoidant and secure attachment styles described by Ainsworth and Bell (1970) in the previous chapter. The inhibited child appears timid and fearful and is unlikely to explore the environment. The uninhibited child appears more secure and calm and is more likely to explore, even in the absence of the parent or caretaker. In an initial study (Kagan, Reznick, & Snidman, 1988), it was found that two-year-olds who were timid (inhibited) or secure (uninhibited) in the presence of strangers displayed the same tendencies at seven years of age. These behavioral patterns were linked to sympathetic arousal of the cardiovascular system for the inhibited (fearful) child. The uninhibited, calm child displayed more relaxed, parasympathetic cardiovascular activity. A follow-up study (Kagan & Snidman, 2004) followed four-month-olds until they were seven years old. Initially, the infants’ motor activity and crying were observed when they were presented with unfamiliar visual stimuli, sounds, and smells. Approximately 20 percent of the children were “high reactive” (i.e., displayed excessive motor behavior and crying) and 40 percent were “low reactive” (i.e., displayed little agitation or crying).

Describing children as having inhibited or uninhibited temperaments fulfills the expanded list of criteria. A benefit of the distinction between inhibited and uninhibited temperaments is that it can be diagnostic for potential developmental problems and prescriptive for remedial interventions. For example, high-reactive infants were found to be three times as likely as low-reactive infants to develop anxiety disorders by the age of seven (Kagan, Snidman, Zentner, & Peterson, 1999). As we saw in Chapter 5, there are effective direct and indirect learning procedures for the treatment of fear and anxiety, including in vivo and imaginal desensitization. In more severe instances, it may be necessary to prescribe medications (see Chapters 11-12).

It should not be concluded that a particular temperament necessarily results in a specific personality profile. Kagan and Snidman (2004) describe how socioeconomic status, culture, and parental styles can result in many variations, within constraints, for children with inhibited or uninhibited temperaments. In fact, problems often arise not so much because of a specific temperament but because of a mismatch between the child’s temperament and environmental demands. Training parents and teachers to adjust their approaches to be more consistent with a child’s temperament has been found to be effective (McClowry, Rodriguez, & Koslowitz, 2008). Kagan and Snidman (2004) also suggest being careful about assuming that all temperamental effects are genetic. For example, it has been concluded that the prenatal environment of the fetus may affect a child’s developing temperament (Werner, Myers, Fifer, Cheng, Fang, Allen, et al., 2007).

Indirect Genetic Influences

You are familiar with the concept of self-fulfilling prophecy. It is an example of an indirect influence on behavior. In this instance, labeling an individual results in others treating the individual in accord with the label. There are other powerful examples of indirect influences on behavior, including those resulting from genetic characteristics. You might be asking, how can a genetic influence be both direct and indirect? Perhaps the most obvious and most important example is your sex. If you inherited a Y-chromosome, you have a higher level of the male hormone testosterone. Testosterone levels are related to activity level and aggression in both human males and females (Pasterski, Hindmarsh, Geffner, Brook, Brain, & Hines, 2007). Testosterone effects are examples of a direct influence of genetics on behavior. How active and aggressive one is will affect how others react. These reactions constitute indirect effects on behavior.

There are many other indirect effects of being born female or male. Parents treat girls and boys differently from birth (Witt, 1997). Girls are likely to be dressed in pink or soft pastel colors and boys in blue or other dark colors. Parents are gentler with girls and more likely to play roughly with boys. Girls are frequently given dolls and kitchen-related toys whereas boys are given action-oriented toys such as balls and cars.

Personality as Adaptive Learning

Let’s return to talking about the qualities you consider important for your ideal mate. Congratulations! You have discovered someone whom you consider ideal on the Big Five or Cattell’s 16 Personality Factors. There is one concern, however. The person is allergic to hops and uncontrollably sneezes into your beer. Is that a deal breaker? Temperaments and traits describe personality in terms of broad (perhaps vague) patterns of behavior. Often, we are interested in much more specific behaviors or qualities in our friends and/or romantic interests. Does the person dance well, drive a car, read “good” books, enjoy a good joke, keep from sneezing in your beer? Those emphasizing the role of learning in the development of personality do not consider trait (or temperament) labels to be sufficiently specific to be helpful. In addition, trait labels frequently serve as pseudo-explanations for behavior. Someone is described as sitting alone at a party as the result of being introverted or as the life of the party as the result of being extroverted. Someone performs well in school as the result of being conscientious or poorly as the result of being unconscientious. Introversion and conscientiousness are not independent variables (i.e., potential causes). Rather, trait labels are generic categories of dependent variables (i.e., behaviors to be explained). For example, introversion can describe sitting alone at a party, not speaking up in school, being quiet when with one’s friends and family, etc. The term “introversion” provides no information concerning why these specific behaviors occur. Psychology always looks to heredity (nature) and experience (nurture) for its explanations.

We previously mentioned the ABCs of control learning: antecedents, behavior, and consequences. Learning theory assumes that complex human behavior, including one’s personality, is acquired through direct and indirect experience. Personality, consistent with the assumptions of evolutionary psychology, consists of adaptations to one’s environmental demands. Adaptation requires learning what the consequences of one’s acts will be under different conditions. Let’s use your pretend child (generically named Jamie) from the last chapter to provide an example of the application of learning theory. Your family is visiting relatives for a holiday get-together. Jamie has attended similar get-togethers in the past and is familiar with the newspaper reading habits of two aunts and an uncle. Aunt Lucy is sometimes very resistant to Jamie’s wanting to play with her, while at other times she gives in right away. Aunt Rose has a strategy whereby she reads an article from beginning to end and waits for the next incidence of pestering that she knows is coming. After playing with Jamie for a while, she reads another article from beginning to end. Uncle Harry is hard of hearing, which, in dealing with Jamie, is not altogether a bad thing. He turns down the volume on his hearing aids and reads the newspaper without ever paying attention.

Do you think Jamie will behave the same way toward the aunts and uncle? Or do you expect different patterns of behavior to emerge resulting from the different ways in which they respond to Jamie? In Chapter 5, we described different intermittent schedules of reinforcement. Aunt Lucy, by providing attention after different amounts of pestering, was implementing a variable ratio (VR) schedule. Aunt Rose’s strategy of waiting until she completed an article resulted in a variable interval (VI) schedule, with the length of each interval determined by how long it took her to finish each article. Uncle Harry, by not paying attention at all, was applying the extinction procedure. After repeatedly being exposed to these contingencies, Jamie would be expected to respond with the characteristic pattern associated with each reinforcement schedule: continuously pestering Aunt Lucy at a very high rate; consistently pestering Aunt Rose at a moderate rate; and not pestering Uncle Harry at all. The described scenario with Jamie is an example of a multiple schedule, whereby different reinforcement contingencies are reliably associated with distinct antecedent stimuli (the aunts and uncle in this case).
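The three contingencies can be sketched in code. The simulation below is a minimal illustration of Jamie's multiple schedule, not a model from the chapter: each relative is a function that decides whether a given bout of pestering produces attention. All parameters (mean ratio of 5, mean interval of 7 minutes) are hypothetical.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def vr_schedule(mean_ratio):
    """Aunt Lucy: variable ratio -- attention after a variable number
    of responses, averaging `mean_ratio`."""
    target = random.randint(1, 2 * mean_ratio - 1)
    count = 0
    def respond(_time):
        nonlocal target, count
        count += 1
        if count >= target:
            count = 0
            target = random.randint(1, 2 * mean_ratio - 1)
            return True   # attention delivered
        return False
    return respond

def vi_schedule(mean_interval):
    """Aunt Rose: variable interval -- the first response after a
    variable delay (one article's reading time) is reinforced."""
    next_time = random.uniform(0, 2 * mean_interval)
    def respond(time):
        nonlocal next_time
        if time >= next_time:
            next_time = time + random.uniform(0, 2 * mean_interval)
            return True
        return False
    return respond

def extinction():
    """Uncle Harry: responses are never reinforced."""
    def respond(_time):
        return False
    return respond

# Jamie pesters once per "minute" for an hour under each contingency.
for name, sched in [("Aunt Lucy (VR)", vr_schedule(5)),
                    ("Aunt Rose (VI)", vi_schedule(7.0)),
                    ("Uncle Harry (EXT)", extinction())]:
    rewards = sum(sched(t) for t in range(60))
    print(f"{name}: {rewards} bouts of attention in 60 pesters")
```

Run repeatedly with different seeds, the VR schedule pays off most often per response, the VI schedule pays off at a rate capped by the interval length, and extinction never pays off at all, which is why Jamie's pestering rates toward the three relatives would be expected to diverge.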

Culture and Socialization

Children behaving similar to Jamie frequently become identified as “pests.” This could result in the circular reasoning characteristic of pseudo-explanations. One might reason that Jamie bothers adults because of being a pest rather than being labeled a pest because of bothering adults. This could result in the failure to consider alternative explanations for Jamie’s behavior (i.e., the different patterns of attention provided by adults) and in a self-fulfilling prophecy whereby Jamie is treated in a particular way because of being considered a “pest.” The example of Jamie indicates that trait labels, in addition to not being sufficiently specific to be helpful, are not explanatory (i.e., they provide no information about genetic or experiential causes of behavior).

In addition, trait labels are inaccurate and misleading as descriptions of behavior. Describing an individual as a “pest,” “aggressive,” or “extroverted” implies cross-situational consistency. That is, in the same way that disease-related symptoms occur wherever you are, characteristic trait behaviors would be expected under all conditions. Being a “pest” implies that Jamie is always a pest. Yet this is not the case. Jamie’s pestering varies depending upon who is sitting in the easy chair. Similarly, college students agree that they can be the life of the party with one group of friends while being quiet and withdrawn with others. Does it make sense to describe an individual as extroverted or introverted if both behavior patterns occur? The example of Jamie reveals how a multiple-schedule, adaptive learning analysis of human personality can accurately describe and explain situational inconsistencies of conduct. It also helps us understand what we mean by culture and socialization. Culture refers to consensually agreed-upon rules relating situations (i.e., antecedents), behaviors, and consequences. Socialization is the implementation of these rules in parenting, schooling, and other interpersonal relations. For example, in some cultures children are encouraged to look adults in the eye when they are talking. In other cultures, this same behavior is considered rude.


Chapter 8: Lifespan Development of Human Potential

Learning Objectives

  • Describe how Piaget’s overarching principles of assimilation, accommodation, and schema development can serve to integrate the cumulative interactive effects of heredity and experience
  • Describe the tasks Piaget developed to study acquisition of conservation of number, mass, and liquid volume
  • Relate adaptive learning principles to Baumrind’s parental styles and Kohlberg’s stages of moral thinking

Fetal and Infant Development

Nature/Nurture and The Development of Human Potential

Since the scientific revolution, we have acquired considerable information about the most mysterious and wonderful phenomenon on earth, life itself. The progression of knowledge regarding the mechanisms of heredity, including genes, chromosomes, and DNA, has recently culminated in the mapping of the human genome. We are on the cusp of discovering what is genetically unique about the human being and perhaps what is unique about our biological potential. This knowledge, however, will not be sufficient to determine our potential as individuals or as a species. As described in Chapter 7, the pace of scientific discovery and technological advancement is accelerating with no limit in sight. Kurzweil has made the seemingly impossible prediction that in the not too distant future humans will become immortal! Unless things change dramatically, immortality will not occur for the Nukak. Tragically, extinction appears the more likely fate. “It was the best of times, it was the worst of times” (Dickens, 1859). Never has the human species been in a better position to consider the meaning of life. Never has our species possessed such power to create or destroy.

The following are the first three articles from the United Nations General Assembly Universal Declaration of Human Rights (December, 1948):

Article 1.

  • All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

Article 2.

  • Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty.

Article 3.

  • Everyone has the right to life, liberty and security of person.

Americans will recognize these sentiments as being consistent with Thomas Jefferson’s most famous words from the Declaration of Independence: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the Pursuit of Happiness.” The United Nations would apply these same ideals to the rest of the world. In order to achieve these noble objectives it is essential and strategic to focus our attention on children.


Figures 8.1 and 8.2 Amazonian rainforest children.

Brazil and Peru border on the Amazon. The photographs above are of Ashaninka children in the Brazilian rainforest and Peruvian mainland children in school. I think you would agree, they might as well be on opposite sides of the world. The Ashaninka children are living much as their ancestors lived for thousands of years. Their environment is almost entirely natural. The Peruvian children’s environment was created by humans and includes laptop computers. These stark environmental differences enable us to appreciate Wechsler’s emphasis upon the adaptive and multi-faceted nature of human intelligence. The Ashaninka children must acquire the knowledge and skills accrued over thousands of years by previous generations of their tribe in order to survive. The Peruvian children’s survival needs are met. They have the opportunity to explore the digitized accumulated knowledge of all the humans who have ever lived! A test to determine “intelligence” for one of these children would have to take their environmental conditions into consideration. Neither child could survive in the other’s world. The opportunities for the Ashaninka and Peruvian children to achieve their potential were determined by factors beyond their control at birth: where they were born and who their parents were. In this chapter, we will consider how nature and nurture interact to influence the course of human development from conception to adulthood.

Fetal Development

We are playing pretend. Congratulations! You have graduated, are romantically involved with a significant other, and expecting a child. The video below portrays the timeline for growth of a human fetus, be it in the rainforest or a city. During the first two months, the embryo consists of the layers of cells from which all organs and body parts will eventually develop. At three months, the fetus is about three inches long and weighs about an ounce. At four months, it is about five inches long and five ounces; the hardened skeleton is starting to form. At five months, the fetus reaches about ten inches in length and by six months, it weighs approximately one and a half pounds. At seven months, the fetus is about 15 inches in length and weighs about three pounds. The average birth statistics, after 39 weeks of pregnancy for children born in developed countries, are a length of 19 inches and weight of seven and a half pounds. Newborns vary considerably, even when taken to full term.

Problems are least likely to develop during pregnancy for mothers between 16 and 35 years of age. Under modern conditions, the mother’s health and lifestyle can have a significant effect on the fetus; diseases can be transmitted through the placenta. Substance abuse (drugs or alcohol) or smoking can result in premature birth, lower birth weight, and greater risk of birth defects, miscarriages, or stillbirths. The mother’s diet can also affect the fetus: a lack of iron can produce anemia; a lack of calcium can affect the formation of teeth and bones; a lack of protein can reduce size and increase the likelihood of cognitive deficits.

Infant Development

Once again, congratulations! You are the proud parent of a healthy infant. The term “infant” is derived from the Latin word for “speechless.” It is generally applied to children up to three years of age, although they typically start to speak earlier. Sometimes this period is sub-divided with separate “newborn” (between birth and one month) and “toddler” (between one and three years) stages. As we will see in the next chapter, some of your baby’s personality will be influenced by heredity. Since we are playing pretend, you get to choose whether your child is a girl or a boy, and you can choose whether she/he is active and curious or quiet and relatively passive. Your new job is to ensure achievement of the goals expressed in the UN Declaration of Human Rights. You want to make sure your bundle of joy eats, survives, and does well in school.

Let’s start with eating. Breastfeeding is generally considered to be healthier for the child than bottle feeding (Gartner et al., 2005). As described in Chapter 5, infants are born with rooting and sucking reflexes, facilitating the nursing process for mother and child. After birth, the mother’s breasts swell as they fill with milk. Nursing reduces the swelling, providing a sense of relief along with other enjoyable feelings stemming from holding and nurturing one’s baby. For breast feeding to occur, the infant needs to grasp and hold onto its mother. In the same way that infants possess reflexes facilitating eating, they possess reflexes facilitating grabbing and holding (Schott & Rosser, 2003). If something is placed in an infant’s palm, a strong grasping reflex occurs. Should the infant sense a sudden loss of support, the Moro reflex occurs; the child first spreads its hands and then restores them to a holding position. Usually, between four and six months of age, it is possible to begin the weaning process, transitioning from liquid to solid foods. When the child’s baby teeth start to appear, usually around ten to twelve months of age, it is possible to introduce soft finger-sized foods.

This early nursing experience was considered crucial to establishing the important role of the mother. It was thought the mother became a conditioned reinforcer through classical conditioning by being paired with food. Research conducted with other species suggested that other factors besides feeding were important. Harry Harlow (1958), studying rhesus monkeys, was the first to demonstrate the important role of touch in infant development (see also Field, 2002). Harlow, using a controversial procedure, reared the monkeys in isolation from their mothers. He constructed two types of “surrogate mother” dolls. One had a wire cylinder for a body and the other was covered with soft terrycloth.

In order to measure the degree of preference for the dolls, Harlow used a procedure similar to the one used to assess self-control in pigeons, described in Chapter 1. You may recall that the pigeons had to choose between a small, immediate reward and a larger, delayed reward. Preference was measured by the percentage of times the pigeon pecked the associated keys. Harlow measured the amount of time monkeys spent with the two types of “surrogate mothers.” He found that when hungry, the monkeys went to the doll with the bottle. At other times, the monkeys had a strong preference for the terrycloth doll. The terrycloth doll also served as a secure “home base” from which the monkey would explore novel items or to which it would return when afraid. In the absence of the cloth doll, the monkey frequently cowered and sucked its thumb. In its presence, the monkey would usually cling to the cloth doll initially and then explore the new stimulus. If a fearful stimulus was presented (e.g., a teddy bear that made a loud sound), the monkey would often run and cling to its “mother” before working up the courage to once again explore the environment (Harlow, 1958).

You might be thinking Harlow’s research is interesting but questioning whether the findings relate to human children. This is an external validity issue and an empirical question. Mary Ainsworth developed the Strange Situation procedure to study attachment and exploratory behavior in children between 12 and 18 months of age (Ainsworth & Bell, 1970; Ainsworth, Blehar, Waters, & Wall 1978). The children were observed playing, with and without their mother present, while strangers walked in and out of the room according to the protocol described in the following video.

Separation anxiety was often observed when the caregiver (usually the mother) left the room. Stranger anxiety might be displayed toward the unknown adult. The infant’s exploratory behavior, as well as its behavior when being reunited with the caregiver, was also assessed. Ainsworth and Bell (1970) divided the children into Secure (70% of their sample), Insecure-Ambivalent (15%), and Insecure-Avoidant (15%) attachment styles based on their performance during the different episodes.

  • The Secure attachment style applied to children displaying low levels of anxiety and avoidance. They played with toys, explored the environment, and interacted with strangers when the caregiver was present. A secure infant might be upset and cry when the caregiver left the room but would appear happy when she returned. The child was still considered secure even if he/she refused to interact with the stranger when the caregiver was not present.
  • The Insecure-Ambivalent attachment style applied to children displaying inconsistent emotionality. They were resistant to exploration and strangers, even when the caretaker was present. These children became severely upset when the caretaker left the room but did not seem overly happy when she returned.
  • The Insecure-Avoidant attachment style applied to children who did not appear emotionally attached to the caregiver. They did not appear upset when she left the room or happy when she returned. These children appeared passive no matter who was present.

In longitudinal studies, individuals are studied over extended periods of time. Correlations between Strange Situation infant attachment styles and the quantity and quality of subsequent peer relationships have been found in major longitudinal studies (National Institute of Child Health and Human Development Study of Early Child Care, 1991-1995, 1996-1999, 2000-2004, 2005-2008; Minnesota Study of Risk and Adaptation from Birth to Adulthood: Sroufe, Egeland, Carlson, & Collins, 2009). Secure children have more friends, enjoy more positive relationships, and are more likely to become leaders than insecure children. Insecure-ambivalent children are frequently anxious and unsuccessful in seeking friends. Insecure-avoidant children may become aggressive, thereby discouraging friendships.

Consistent with Harlow’s research findings with monkeys, Ainsworth & Bell (1970) felt that different infant attachment styles were the result of different caregiver (usually maternal) characteristics:

  • Warm and consistently responsive caregiving (often involving holding and touching) was correlated with the secure attachment style.
  • Inconsistent and unemotional caregiving was associated with the insecure-ambivalent style.
  • Unresponsive caregivers, who frequently ignored the child, were associated with the insecure-avoidant style.

Parenting Styles

Can you remember how your parents treated and influenced you at different stages of your life? Do you wish to become a parent? If so, what type of parent do you wish to become? Contemporary parenting styles have been categorized on the dimensions of demandingness and responsiveness (Baumrind, 1968, 1971; Maccoby & Martin, 1983). Demanding parents specify clear rules of conduct and require their children to comply. Responsive parents are affectionate and sensitive to their children’s needs and feelings. The following video describes the four parenting styles resulting from different combinations of high and low demandingness and responsiveness.

We may consider the implications of the learning principles described in Chapters 5 and 6 for these different parenting styles. Uninvolved, indifferent parents (low demandingness, low responsiveness) do not specify codes of conduct or respond to their children’s needs. If parents do not provide rules and/or consequences, the children will most likely be influenced by others (e.g., siblings, other adults, and eventually other children). From the perspective of the parents, this may result in the acquisition of undesirable behaviors.

Permissive, indulgent parents (low demandingness, high responsiveness) do not specify codes of conduct but are affectionate and responsive. They provide “unconditional positive regard” (Rogers, 1957), eliminating the contingency between desired behavior and consequence. We saw in Chapter 5 that the absence of a contingency between responding and consequences can result in learned helplessness. Non-contingent appetitive consequences (e.g., praise, gifts, etc.) can result in “spoiling” and a sense of entitlement. This could create problems for the children in other contexts (e.g., school, playgrounds) when others place demands on them and react differently to their behavior.

Authoritarian parents (high demandingness, low responsiveness) specify strict codes of conduct in a non-responsive manner. If the children ask for reasons, they may reply “because I say so!” In their parents’ absence, the children would seek other sources of authority.

Authoritative parents (high demandingness, high responsiveness) specify strict codes of conduct within a context of warmth and sensitivity to the children’s needs. They are likely to provide reasons for their codes of conduct, listen to their children’s perspective, and sometimes negotiate alternative codes. Authoritative parenting is most likely to result in secure attachment between parent and child.

Infant Skill Development

Now that you have thought about what type of parent you would like to become, you will most likely concentrate on survival-related behaviors during the first years of your pretend child’s life. As we have seen, your child is equipped with nursing and grasping reflexes that facilitate an attached, dependent relationship with a caregiver. As a parent, you will begin walking the difficult tightrope balancing the need to nourish and protect your child with the need to foster independence. Early on, your child is immobile and her/his behavior seems sporadic and perhaps random. You might wonder if voluntary control learning (i.e., instrumental conditioning) is possible.

In a classic program of research, Carolyn Rovee-Collier (Rovee and Rovee, 1969) studied learning and memory in very young infants using a mobile suspended over the crib. A ribbon from the mobile could be attached to the infant’s leg. An active kicking motion would cause the colorful attachments to move. A reversal design (see Chapter 1) was employed in which the baseline frequency of kicking was assessed without the leg attached to the mobile, followed by a phase in which it was attached, and then a return to the detached baseline condition. The frequency of kicking in infants as young as eight weeks old increased dramatically during the middle phase relative to the initial and subsequent baseline conditions. This is an early indication of the intrinsic motivation provided by controlling one’s environment.
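The logic of this A-B-A reversal design can be sketched in a few lines of code. The kick counts below are hypothetical, invented only to illustrate how the three phases are compared; they are not Rovee and Rovee’s actual data.

```python
# A minimal sketch of an A-B-A (reversal) design analysis, loosely modeled on
# the mobile study. All numbers below are hypothetical, for illustration only.

def phase_mean(counts):
    """Mean kicks per minute within one phase."""
    return sum(counts) / len(counts)

baseline = [8, 10, 9]      # A: ribbon not attached to the mobile
contingent = [22, 27, 25]  # B: kicking moves the mobile
reversal = [11, 9, 10]     # A: ribbon detached again

means = {name: phase_mean(c) for name, c in
         [("baseline", baseline), ("contingent", contingent), ("reversal", reversal)]}

# The signature of instrumental conditioning in a reversal design:
# responding rises when the contingency is in effect and falls when it is removed.
conditioning_demonstrated = (means["contingent"] > means["baseline"]
                             and means["contingent"] > means["reversal"])
print(means, conditioning_demonstrated)
```

Returning to baseline (the second A phase) is what rules out simple maturation or arousal as explanations: if kicking stayed high after the ribbon was detached, the contingency could not be credited with the change.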

Child Development

Maturation

Whether a child grows up in the rainforest or a city, its thumb, tongue, and cortex will facilitate its adapting to its environment. That combination of physical characteristics has enabled us to survive as a species under a vast range of conditions. During the first year, infants learn to use the forefinger and thumb to form the precision grip, permitting the grasping and manipulation of objects. As soon as children are sitting up they often enjoy solitary play. In the rainforest, children are given sticks and stones to manipulate. Favored toys for children growing up in contemporary homes include blocks of different shapes. Gradually, infants learn the futility of trying to put round blocks in square holes. As they develop, children may experiment by piling blocks on top of each other and creating different structures. Besides shapes, blocks are also excellent devices for teaching colors and even the letters of the alphabet.

Maturation refers to those developmental processes that occur as the result of aging. Your child will grow and its body proportions will change with age. Its head will become proportionally smaller and its limbs proportionally larger. As different parts of the body develop, new behaviors become possible.

Fundamental to adaptation and independence is walking on one’s own. Whether growing up in the rainforest or a large city, the child must sit up before crawling and stand before walking (see video). As a concerned parent, you will be interested in assisting your child in achieving their continually increasing potential. The figure indicates the significant ranges children display in the ages at which they first demonstrate each behavior. Keep such individual differences in mind so that you do not become overly concerned if your infant appears a little slow in acquiring their first baby steps or other essential skills.

In addition to walking, there are other survival-related skills children must acquire during early infancy. They must learn to use their hands to eat, reach for, grasp, and manipulate objects. Gradually, children must learn to communicate with their caregivers using gestures and eventually language. Favorite games of infants include pointing to the parts of the face and peek-a-boo.

Humans are social animals born dependent on others; the course of their development will be significantly influenced by their interpersonal skills. Children must learn to interact with other family members as well as strangers, including other children.

The noted Russian psychologist Lev Vygotsky (1962, 1978) described child development as the transition from socially shared activities to internalized processes. Through interactions with parents and other adults within a community, the child acquires speech. The child speaks out loud during the language acquisition phase. Eventually, speech is internalized (i.e., becomes silent) and serves as the basis for thought and action.

Happy birthday! Your pretend baby will soon be celebrating her/his first birthday. You are eager to help your child along the way to achieving their potential. You would like her/him to walk and come to you when requested to do so. Teaching your child is essentially a problem-solving exercise as described in the previous chapter. How does your child behave now? How would you like your child to behave in the future? What do you need to do to develop this behavior? Vygotsky proposed that a “zone of proximal development” exists, whereby teaching should only commence when it is certain the child is ready. We will assume your child is ready once standing without support and responding appropriately to a few words.

Vygotsky also introduced the term “scaffolding” to refer to effective adult support when teaching a child. At one year of age, you will not be able to rely entirely on language to teach walking. In Chapter 5, we described the shaping, prompting, and fading teaching techniques. Shaping is a scaffolding technique that continuously relies upon Vygotsky’s concept of a zone of proximal development. You start with a behavior already in the child’s repertoire and proceed only when the child is ready for the next step. Prompting and fading are also scaffolding techniques that can facilitate and speed up the learning process.

You could teach your child to walk using shaping, prompting, and fading similar to the process described in Chapter 5 to train a dog to roll over. Unlike teaching a dog, however, it is not necessary (nor a good idea) to use food as a reinforcer. Recalling the distinction between extrinsic and intrinsic rewards from the previous chapter, your child will probably find covering ground and moving faster intrinsically motivating. You can establish feedback and praise as conditioned reinforcers by pairing success with words describing the accomplishment as well as words of affection.

Remember the control learning ABC’s when starting out. When standing, say “Come to me” (an antecedent) while softly holding your child and trying to get her/him to take an initial step (a behavior). As the child moves, you could say “You are walking. What a big boy/girl” (a consequence). Once the child starts moving slightly when you say “Come to me,” you can release your grip a little. This would proceed until the child takes an initial step without your assistance. Now you could request “Come to me” while kneeling or standing a little further away and holding out your arms. You can close your arms and hug the child when he/she reaches you. By proceeding “step by step” in this fashion, your child will eventually come to you from any location upon request.
Once walking, you could use the same procedures to request that the child bring you different objects. In addition to being a fun bonding and vocabulary development exercise, this process has all sorts of practical benefits. For example, you can teach, “Please bring me my slippers, newspaper, TV remote,” etc. I don’t suggest trying “Bring me my coffee” for a while though.

Your child’s motor and cognitive abilities will continue to improve and expand during the second year. Imitation of the behaviors of other adults and older children will become more frequent. At the end of the first year, you will probably try to teach your child to attend to you and stop what they are doing when you say the word “No.” You may recall that this has the benefit of reducing the need to use physical punishment. Unfortunately, this may be an instance of “be careful what you wish for.” Teaching the word “no” may seem like a great success at the time, but eventually prove to be a two-edged sword. Don’t be surprised if, as your child starts the “terrible twos,” she/he says “No” every time you request something.

Probably toward the end of your child’s second year, unlike parents in the rainforest, you and your significant other will try to teach control of a natural biological function: excretion of waste materials in liquid and solid form. In the rainforest, the Nukak child, who, like your child, learned to walk during the first year, will be shown where to walk (or move a little faster when necessary) when “nature calls.” In your home, and as preparation for visiting the homes of others as well as nursery or pre-school, you will show your child where and how to “go potty.” Most likely, your child will say “No!” Once again, you may be wondering how to proceed with this latest exercise in problem solving. Vygotsky’s concepts again provide helpful guidance. You first need to determine whether your child is ready for toilet training (i.e., the zone of proximal development) and then determine the most appropriate teaching procedures (i.e., scaffolding).

There are many recommended “recipes” for toilet training available on Google and YouTube. How do you decide which to implement? Fortunately, there are empirically based assessment and teaching strategies. In Chapter 2, we saw that conducting an experiment in which an independent variable is manipulated is the only way to determine cause and effect. In this instance, we need to search the research literature to see if there is an experiment demonstrating the effectiveness of a toilet training procedure. Nathan Azrin was the 1975 recipient of the American Psychological Association Award for Distinguished Contributions for Applications in Psychology. What is startling and impressive about Azrin’s body of research is the consistent experimental demonstration of success of his intervention procedures with some of the most serious and intractable behavioral problems. These include his classic token economy procedures with chronic adult schizophrenics (Ayllon and Azrin, 1968), community reinforcement procedures with problem alcohol abusers (Hunt and Azrin, 1973), and social reinforcement procedures for individuals experiencing long-standing difficulties finding a job (Jones and Azrin, 1973). In addition, Azrin wrote the best-selling (and still available) Toilet Training in Less than a Day (Azrin and Foxx, 1974) based on prior successful research results (Azrin & Foxx, 1971; Foxx and Azrin, 1973).

Unlike walking, toilet training involves establishing a lengthy sequence of unrelated behaviors. It would be a time-consuming and painstaking task requiring extreme patience and skill to teach this sequence using the shaping, prompting, and fading procedures. Azrin and Foxx (1974) suggest waiting until your child is at least 20 months old before assessing whether she/he is ready to begin toilet training (i.e., has reached the zone of proximal development). Prior to then, it is unlikely that your child will have acquired the observational learning and language skills necessary to use indirect learning procedures. Before beginning, you should ensure your child is able to sit up, walk, imitate, know the names of and point to different body parts, remove and replace underwear, follow simple instructions, sense the need to go to the bathroom, and stay dry for at least two hours (Azrin and Foxx, 1974, 43-45).

Once these prerequisite skills have been acquired, you are ready to proceed. Azrin and Foxx’s process is based on the premise that the best way to learn something is to try to teach it to someone else. This will require having a doll that appears to wet. In this way your child can “teach dolly to go potty.” You start by pretending to give the doll a drink and telling your child that the doll has to go to the bathroom. Your child should then be shown how to remove the doll’s diaper, seat it on the potty, wait for it to “urinate,” and then praise the doll for going to the potty (see Azrin and Foxx, 1974, 58-85, for detailed instructions).

Language and the Human Condition

Toilet training is perhaps the earliest example of the advantage of using language to teach a child. The acquisition of speech enables the transition to Piaget’s preoperational stage of cognitive development. The early structuralists considered sensations, images, and emotions to be the basic elements comprising conscious experience. The child is now able to symbolically represent these elements (i.e., its internal and external environment) with words. Vygotsky emphasizes the transition from talking out loud to talking to oneself. We use words to represent objects (nouns), actions (verbs), characteristics of objects and actions (adjectives and adverbs), and people (pronouns). Much of our thinking, including planning and problem-solving, consists of covert speech. Little by little, over the course of our lives, we describe our concepts and schemas with words as we develop elaborate narratives for understanding ourselves and our worlds.

All known human cultures speak. As quoted by Skinner (1986) in Chapter 6, it is perhaps the part of our genetic potential most responsible for our achievements as a species. Infants typically show signs of understanding speech at about six months but do not start speaking before one year of age. We described the importance of vocabulary size to success in school in Chapter 6 (Anderson and Freebody, 1986). Hart and Risley (2003) refer to these findings as “The Early Catastrophe: The 30 Million Word Gap by Age 3” (see video below). It has been estimated that disadvantaged children enter school with a vocabulary of 5,000 words in comparison to their more advantaged peers who average 20,000 (Moats, 1999). Research has shown a strong relationship between the vocabulary of first-graders and their subsequent reading comprehension scores (Cunningham & Stanovich, 1997; Scarborough, 2001). Findings indicate that Pre-K and kindergarten intervention efforts designed to improve vocabulary have been successful for middle- and upper-income at-risk children, but unfortunately, not thus far for lower-income children (Marulis & Neuman, 2010).

Preparing Children to Be Hunter-Gatherers

The portion of the Amazonian rain forest inhabited by the Nukak, consisting almost entirely of marshes and wetlands, does not support a permanent lifestyle based upon domestication of plants and animals (Politis, 2007). There are, however, abundant edible non-domesticated plant, fruit, and vegetable species and some edible animals (e.g., several species of monkeys, peccaries, tortoises, birds, ducks, and fish). The Nukak are one of the few remaining cultures continuing to practice the nomadic hunter-gatherer lifestyle characteristic of the earliest members of our species. They travel in bands composed of approximately five nuclear families, typically 20-30 individuals. Temporary shelters designed for stays of about four days are crafted from posts, tree branches, and leaves to form a camp. Furnishings include hammocks for sleeping and a hearth for cooking.

The Nukak, living day to day, have maintained a similar lifestyle for more than 10,000 years. They lack familiarity with government, property, or money. The Nukak do not have a concept of the future and their past history is limited to a few generations. In order to survive under very challenging conditions, the Nukak had to acquire the knowledge and skills to protect themselves from the elements and predators. They had to learn to forage and prepare a non-poisonous, nutritionally adequate diet.

Daily excursions from the camp are almost always led by an adult male, and many are limited to males. Most of the activities are related to hunting, fishing, and collecting foods (fruits, vegetables, honey, etc.). These trips also involve collecting resources such as cane for blowpipes, leaves for roofs, and bark and vines for cords. Because the bands are so small, trips can also involve searches for potential mates among other bands. Females often take part in local foraging trips, but most of their time is spent near the shelters caring for young children and preparing food. Time is frequently taken out during the day for men and women to pass on survival skills from generation to generation. Having to dedicate most of the day to survival needs and child care leaves precious little time for the Nukak to address the interpersonal and self-actualization needs higher on Maslow’s pyramid.

The Nukak have faced the same environmental demands and parents have transmitted the same survival skills from generation to generation for millennia. The Nukak essentially have two stages of development, childhood and adulthood. Piaget’s first two stages of cognitive development apply to Nukak children as well as those growing up in cities. They will start to speak at approximately the same age and their parents will take advantage of speech when teaching them. Nukak children’s toys usually consist of scaled down versions of survival tools. They participate in foraging, hunting, and food preparation as soon as they are physically able.

Preparing Children for School

The Nukak’s childhood contrasts with the extensive schooling required to create a common knowledge base and prepare children in technologically advanced cultures for ever-changing vocations. Many countries have compulsory primary and secondary education in order to address these goals. Thus, in addition to teaching a culture’s code of conduct, parents in these countries need to prepare their children to attend school.

Time is flying by and your pretend child is about to celebrate her/his third birthday. Congratulations, it is the end of the terrible twos! It is at this time that the life paths of children growing up in the rainforest and the city diverge significantly to adapt to their different environmental demands. In order to adapt to the rainforest, the Nukak child will be taught gender-appropriate hunter-gatherer skills. During the next two years you will help your child acquire many concepts and skills which are prerequisites for success in school. You will teach the names of colors, letters of the alphabet, numbers, and telling time. Gradually, his/her vocabulary will expand, sentences will become more grammatical, use of imagination in telling stories and engaging in fantasy play will increase, and interactions with other children will become more cooperative. Conversations will increase in length and become more adult-like in being targeted to the listener.

In 1998, the International Reading Association (IRA) and the National Association for the Education of Young Children (NAEYC) issued a research-based position statement regarding the teaching of reading and writing. Included in their statement were recommendations regarding what children, caretakers, and teachers can do at different ages to teach these skills. During the preschool years, it was recommended that parents and family members do the following:

  • Talk with children, engage them in conversation, give names of things, show interest in what a child says
  • Read and reread stories with predictable texts to children
  • Encourage children to recount experiences and describe ideas and events that are important to them
  • Visit the library regularly
  • Provide opportunities for children to draw and print, using markers, crayons, and pencils (IRA and NAEYC, 1998)

School represents a different “world” for children with its own set of adaptive requirements. Parents play an important role in preparing their children for school. The authoritative style characterized by high demandingness and high responsiveness has been shown to result in better school performance than the authoritarian style (Pratt, Green, MacVicar, & Bountrogianni, 1992; Hokoda & Fincham, 1995). In school, teachers, rather than parents, are the ones establishing standards and administering consequences. The dimensions of demandingness and responsiveness may be applied to teaching as well as parenting styles. Teachers may hold high or low standards for their students. They may be personable and warm in their classroom interactions or distant and detached. As with parenting, authoritative teaching styles result in better academic and social performance than authoritarian styles (Walker, 2008).

The School Years

Congratulations! Your pretend child is off to kindergarten. For perhaps the first time in your child’s life, you and she/he will endure the emotional process of extended separation. You and your child’s teacher will hopefully engage in a collaborative process designed to identify zones of proximal development and implement appropriate scaffolding techniques. During kindergarten the IRA and NAEYC recommend that parents and family members:

  • Daily read and reread narrative and informational stories to children
  • Encourage children’s attempts at reading and writing
  • Allow children to participate in activities that involve writing and reading (for example cooking, making grocery lists)
  • Play games that involve specific directions (such as “Simon says”)
  • Have conversations with children during mealtimes and throughout the day

Your pretend child is getting older. Nature and nurture are interacting to result in a unique individual with likes/dislikes, interests/disinterests, ways of coping, and a range of emotions. Some changes are obviously continuous. Your child is gradually getting taller (although there will be a spurt later on). We have seen that even a characteristic so obviously influenced by genes as height is affected by environmental factors such as nutrition and illness. The child’s vocabulary is similarly undergoing incremental growth, and we start to observe apparent changes in reasoning, approaching and solving problems, and communicating with you and others.

In Chapter 7, we saw how the ability of chimpanzees to solve two-choice visual discrimination problems improved gradually over the course of 300 problems. By the end, the chimp appeared to solve the problems in a qualitatively different manner. Rather than showing gradual improvement across the first six trials of each new problem, the chimp would jump from chance to perfect performance on the second trial (Harlow, 1949). One might conclude that it transitioned from an “incremental learning stage” to a “hypothesis-testing stage” of development. The name of the stage is descriptive of the performance, not explanatory. The explanation lies in the history of exposure to examples of the same type of problem. Although heredity and environment interact gradually and incrementally, sometimes the behavioral effect appears to constitute a qualitative change in the individual. The abilities to comprehend language and to speak result from physical development of the infant’s brain and speech organs, improved ability to imitate, and continual exposure to vocalizations.

Theories of Development

Piaget’s Stage Theory of Cognitive Development

Jean Piaget, a Swiss psychologist, proposed an influential theory of cognitive development from birth through adulthood (Piaget, 1928; 1952; 1962; Piaget & Inhelder, 1973). Piaget was an example of a stage theorist. Stage theorists describe human development as a fixed sequence of capabilities resulting in qualitatively different ways of responding to the world. The first two stages last from infancy through preschool and the early grades. Piaget describes cognitive development as the continual modification (i.e., accommodation) of schemas based on the incorporation (i.e., assimilation) of new knowledge. From approximately birth to two years the child is preverbal, learning the relationships between sensory stimuli (e.g., visual and auditory stimuli) and movement. Piaget’s overarching principles of assimilation, accommodation, and schema development can serve to integrate the cumulative interactive effects of heredity and experience as the child ages and advances through the different stages.

Interacting with a child who speaks is fundamentally different from interacting with a non-verbal child. Piaget’s distinction between non-verbal (sensorimotor) and verbal (pre-operational) stages seems appropriate. We need to be careful, however, in how we interpret the meaning of a stage of development. It is one thing to describe the child as behaving as though in a particular stage and a very different thing to offer the stage as an explanation for behavior. You might recognize this as another example of a pseudo-explanation. Why does the child speak? Because she/he is in the pre-operational stage. How do you know she/he is in the pre-operational stage? Because she/he speaks.

The newborn is able to sense the environment and emit a variety of responses. Whether in the rainforest or at home, as the newborn turns its head it will observe that some objects are stationary and others move. Some of the moving objects make sounds and others do not. Some of the objects feel soft and cuddly whereas others are hard. One of the soft moving objects makes sounds and sometimes approaches and holds the infant while placing its lips near something soft. This soft object can be sucked, resulting in the availability of a substance which can be tasted and smelled. The newborn’s initial schemas will most likely center around these external environmental stimuli and the internal sensations associated with basic survival drives such as eating and terminating discomfort. Eventually, some objects will be incorporated within a schema (e.g., objects that do not move), others might require modification of a schema (e.g., objects that can be moved and placed in the newborn’s tiny fingers), while others may require creation of an additional schema (e.g., round objects that move if simply touched). Gradually, concepts will be acquired (e.g., flat objects, round objects, heavy objects, light objects, soft moving objects that make noise and provide food, moving objects that make noise and bathe the infant, other similar looking moving objects that are usually present, other similar looking moving objects that are sometimes present, different looking moving objects that make different sounds and are usually present, etc.).

As the infant’s senses and motor abilities improve and it starts manipulating the environment, it gradually acquires the ability to predict and control what happens. Piaget describes a three-stage sequence of circular reactions (i.e., repetitious behavior) taking place during this first, sensorimotor period of development. Primary circular reactions appear to be repetition of a behavior for its own sake, or perhaps the resultant sensations. Secondary circular reactions consist of the types of behavior demonstrated by Rovee-Collier where the infant repeats an act resulting in a specific environmental effect. Tertiary circular reactions appear to involve attempts by the infant to produce the same environmental effect with different responses. Such attempts typically start to appear at about eight months of age and constitute the first examples of “experimentation.”

In addition to learning that she/he exists as an independent object, Piaget felt that an important concept acquired in the sensorimotor period is object permanence. Initially, children act as though once objects disappear from view they no longer exist; that is, “out of sight, out of mind.” Evidence suggests that children as young as 3-1/2 months old behave as though they understand object permanence. This is inferred from the fact that they look longer at events which turn out differently than they apparently anticipated (Baillargeon & DeVos, 1991). For example, it has been shown that young infants will gaze longer at an impossible event (e.g., a toy train appearing to move through a block rather than hitting it; see below) than at a possible event.

Piaget suggested that, at about seven years of age, children advance from the pre-operational stage to the stage of concrete operations. It is at this time that the child appears to understand how certain operations can transform the appearance of objects but not their fundamental characteristics. As shown in the previous video, Piaget developed ingenious tasks for assessing this ability through the demonstration of conservation of number, mass, and liquid volume. Here is another video showing developmental changes in children’s understanding of conservation of number, length and volume.

If you first show a pre-operational child the two rows of five coins lined up so they match (a) and then spread out one of the rows (b) while they are watching, the longer row will be described as having more. They do not yet understand that the operation of moving the objects does not change the quantity. Similarly, a pre-operational child is likely to say that if one of two same-sized balls of clay is rolled into a sausage, it is now larger; or if one of two same-sized glasses containing the same amount of water is poured into a narrower but taller glass, it now has more. Children responding correctly are considered to have advanced to the stage of concrete operations. They understand how the concept of reversibility applies to the operations performed on the row of coins, clay, and liquid. The coins can be moved back to their original positions, the clay sausage rolled back into a ball, and the water poured back into the original glass.

To demonstrate another difference between a pre-operational child and one in the concrete operations stage, Piaget developed a task to determine the ability to perceive someone else’s perspective, as shown in the video. The child was shown a realistic model of a scene including a mountain, toy animals, and plants. The young pre-operational boy only sees the scene from his own perspective. The older boy in the stage of concrete operations is able to imagine the scene from the adult’s position. Piaget and others describe the young boy’s behavior as reflecting egocentrism.

Piaget’s stages describe a progression in the child’s ability to use and manipulate symbols (i.e., to “think”). During the sensorimotor stage the child is restricted to symbols representing the structuralists’ three basic elements of consciousness: sensations, images, and emotions. During the pre-operational stage the child becomes able to use words to symbolically represent objects and events.

You might be wondering what it means to symbolically represent objects and events in the absence of language. In classic research, Walter Hunter (1913), a student of Harvey Carr’s (one of the early functionalists), tested whether his daughter and different animals could symbolically represent the location of an object. The procedure involved a small maze where a light could go on behind one of three “doorways.” If the subject went through the lit doorway, there was food present; there was no food behind the other doorways. This is a simple task for most animals to learn. However, if the light was turned on and then off, a rat could only go to the correct doorway if it oriented itself while the light was still on. Then, it would literally “follow its nose.” If the rat was spun around and released after the light went off, it performed at chance. Raccoons, chimpanzees, and Hunter’s daughter went to the correct location even though there was no longer an external cue (i.e., the light) to guide them. Hunter inferred from this behavior that these subjects must have symbolically stored information concerning the prior location of the light in order to go to the correct doorway. This ability would have important survival value. For example, if an animal that is not hungry noticed food in a particular location, it would increase the likelihood of survival if it could return to that location when hungry. In addition, this test permitted Hunter to know when he needed to keep his eye on the family cookie jar!

During the concrete operations stage, the child is not only able to symbolically represent objects and events, but is also able to imagine manipulating concrete (i.e., observable) objects and events. The child can imagine moving the coins, squeezing the clay, pouring liquid from one container into another, or moving around the scene of the mountain. Piaget believed his final stage, formal operations, was reached between 12 and 15 years of age (Piaget, 1972; Piaget & Inhelder, 1958). The more adult-like teenager is able to imagine manipulating abstract concepts. For example, without looking at actual objects, an adolescent could be asked “If A is larger than B and B is larger than C, must A be larger than C?” She/he can imagine multiple examples fulfilling the requirements of the statements and arrive at the correct answer. The ability to mentally manipulate abstractions underlies logical thinking, scientific hypothesis testing, and everyday problem-solving. The teenager can now execute all the stages of the problem-solving process symbolically: consider how things are, consider how she/he would like them to be, list optional solutions, evaluate the short- and long-term consequences of the different strategies, and arrive at a potential solution.
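The transitive-inference question above can itself be expressed symbolically. As a purely illustrative sketch (the function name and representation are my own, not Piaget’s), the snippet below shows why the conclusion “A is larger than C” follows for every assignment of sizes that satisfies the premises, independent of any concrete objects, which is the hallmark of formal operational thought:

```python
from itertools import permutations

def conclude(a, b, c):
    """Given premises a > b and b > c, return the truth of the conclusion a > c.

    Returns None when the premises do not hold (no inference is licensed)."""
    if a > b and b > c:
        return a > c
    return None

# For every ordering of three distinct sizes, whenever the premises hold,
# the conclusion holds as well; the inference is valid in the abstract.
results = [conclude(a, b, c) for a, b, c in permutations([1, 2, 3])]
print(results.count(True), results.count(None))  # prints: 1 5
```

Only one of the six orderings satisfies both premises, and in that case the conclusion is always true; the inference never fails, which is exactly what the formal operational thinker grasps without needing to inspect actual objects.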

Piaget’s theory of cognitive development has been extremely influential and generated an enormous amount of empirical research. Piaget himself was a gifted child with an early interest in biology; he published several articles by the age of 15! A little-known fact is that soon after receiving his doctorate, Piaget moved to Paris and worked in Alfred Binet’s laboratory, constructing items for Binet’s seminal school readiness test. Piaget’s stage theory was influenced by the distinct types of errors children of different ages made on certain questions from this test. From these errors, Piaget inferred qualitatively different cognitive styles (pre-operational, concrete operations, and formal operations). Based on his early interests and later work, it is not surprising that the tasks Piaget developed to study cognitive development are oriented toward scientific thinking, or that performance on these tasks correlates with school readiness and intelligence tests (Humphreys & Parsons, 1979).

Piaget has been criticized for basing his theory on the observation of a very small, non-representative sample of individuals: his three precocious children and the children of highly educated professionals. Research conducted with more representative samples has generally supported the sequence Piaget described in the ability to solve different types of problems. However, there is considerable variability in the ages at which different children demonstrate the characteristic behavior patterns of the different stages. For example, as previously cited, pre-verbal (i.e., sensorimotor stage) children can demonstrate object permanence (Baillargeon & DeVos, 1991). At the other end of Piaget’s developmental sequence, adults frequently lack or are inconsistent in their usage of formal operational thinking. Piaget (1972) himself recognized this inconsistency. He suggested that experiential differences with different domains of skills (e.g., physics, mathematics, philosophy, etc.) could result in concrete operational performance in some situations and formal operational performance in others. In the next chapter, we will see that this same pattern of inconsistent performance across situations applies to other human personality characteristics in addition to cognitive style. Some have argued that Piaget fails to appreciate the underlying role of basic cognitive processes (e.g., short-term memory, processing speed, etc.) in movement from stage to stage, as well as individual differences (Demetriou & Raftopoulos, 1999; Demetriou, Mouyi, & Spanoudis, 2010). It has been demonstrated that science training improves performance on the Piagetian tasks (Lawson, 1985). Such training effects suggest that movement through the stages is more reliant upon experience than Piaget implies.

Moral Development

Children are completely egoistic; they feel their needs intensely and strive ruthlessly to satisfy them.

It is impossible to overlook the extent to which civilization is built upon a renunciation of instinct.

The first requisite of civilization is that of justice.

Sigmund Freud

Piaget’s interests extended beyond the development of knowledge and skills related to nature (i.e., scientific thinking). He was also deeply interested in the individual’s development of a moral code (Piaget, 1932). Not surprisingly, Piaget believed that the cognitive changes occurring as the child and teenager advanced through the developmental stages influenced their moral thinking as well as their understanding of nature. During the pre-verbal sensorimotor stage, direct learning principles account for changes in behavior: the child increases the frequency of behaviors that produce appetitive (i.e., “feel good”) outcomes or reduce aversive (i.e., “feel bad”) outcomes, and suppresses behaviors that produce aversive outcomes or the loss of appetitive ones. As the child initially acquires language during the pre-operational stage, rules are imposed by adults (primarily parents and caregivers) and understood in a literal, inflexible way. Later, the child gradually interacts with other children, makes friends, and goes to school. The parents’ influence is diluted by the direct and indirect (i.e., observational and verbal) contingencies experienced with different adults (e.g., teachers, members of the clergy, etc.) and peers. As the child becomes less egocentric during the stage of concrete operations, he/she is able to appreciate the perspectives of others and recognize the possibility of, and need for, cooperation by negotiating rules of conduct. Once attaining the stage of formal operations, teenagers and adults are able to appreciate and consider more subtle and abstract aspects of interpersonal and moral issues (e.g., the benefits of and need for fairness, justice, responsibility, etc.).

Kohlberg’s Stage Model of Moral Development

Lawrence Kohlberg (1976) developed a very influential stage model of moral development based on Piaget’s stage model of cognitive development (see Figure 8.3). He distinguished between three different levels (“styles”) of reasoning: pre-conventional, conventional, and post-conventional, each sub-divided for a total of six stages. Pre-conventional morality is based upon extrinsic rewards and punishers. At first, during Piaget’s sensorimotor period, the child is only sensitive to extrinsic rewards and punishers. Once the child acquires speech during Piaget’s pre-operational stage, distinctions between right and wrong are taught by parents and other authority figures. The child learns the value of cooperation (e.g., “I’ll scratch your back and you scratch mine”) once making friends and interacting with others. Conventional morality is based at first on reference to an authority figure (e.g., parent, teacher, clergy member, etc.) and then advances to written sources (e.g., the Bible, Koran, Constitution, etc.). The child acquires a more abstract and flexible understanding of morality once progressing to the stages of concrete and formal operations. The highest (and rarest) level, post-conventional morality, is based on the application of universal principles such as the Golden Rule (do unto others as you would have others do unto you).

Figure 8.3 Kohlberg’s stage theory of moral development.

In attempting to teach codes of moral conduct, much parenting consists of the intentional or unintentional administration of appetitive and aversive events. We may consider how the different parental styles implement learning procedures and how they may relate to Kohlberg’s levels of moral development (see Figure 8.4).

  • Indifferent: unavailable to monitor behavior, administer consequences consistently, or provide explanations (pre-conventional morality)
  • Indulgent: available to administer non-contingent presentation of appetitive events and provide praise (sense of entitlement)
  • Authoritarian: available to administer contingent presentation of mostly aversive events without explanation (conventional morality)
  • Authoritative: available to administer contingent presentation of appetitive and aversive events with explanation (post-conventional morality)

Figure 8.4 Parental styles and stages of morality (adapted from Levy, 2013).

Indifferent parents (low demandingness, low responsiveness) do not specify codes of conduct or respond to their children’s needs. If other people (siblings, relatives, peers) do not provide rules and/or consequences, the children will most likely base right and wrong on the outcomes of their actions (if it feels good it is right; if it feels bad it is wrong). Indifferent parenting would appear to be most likely to produce pre-conventional reasoning in children. The authoritarian parenting style would appear likely to result in conventional reasoning and the authoritative style in post-conventional reasoning. Ideally, by providing reasons and explanations in age-appropriate language, our children would internalize principles of moral conduct and apply them appropriately throughout their lives.

Indulgent parents (low demandingness, high responsiveness) do not specify codes of conduct but are affectionate and responsive. They provide “unconditional positive regard” (Rogers, 1957), the type of non-contingent appetitive consequence likely to result in “spoiling” and a sense of entitlement. This could create problems for the children in other contexts (e.g., school, playgrounds) when others react differently to their behavior.

Authoritarian parents (high demandingness, low responsiveness) specify strict codes of conduct in a non-responsive manner. If the children ask for reasons, they may reply “because I say so!” In their parents’ absence, the children would seek other sources of authority.

Authoritative parents (high demandingness, high responsiveness) specify strict codes of conduct within a context of warmth and sensitivity to the children’s needs. They are likely to provide reasons for their codes of conduct, listen to their children’s perspective, and in some instances negotiate alternative codes.

It is very difficult to administer punishment immediately and on a consistent basis in the natural, free-living environment. Therefore, punishment is not likely to work as intended, that is, to suppress undesired behavior. Often, instead, the child will learn to become deceptive or to lie in order to avoid being punished. Indifferent parents are not likely to be present to appropriately administer punishment and will probably be inconsistent. Indulgent parents are less likely to administer punishment than other parents, if at all. Authoritarian parents (“my way or the highway”) might effectively suppress the undesired behavior when they are present; however, the behavior may occur when they are not present or when the child is in different situations. Authoritative parents, taking advantage of their children’s verbal and reasoning skills, probably have the greatest likelihood of attaining the desired result. For example, an older sibling picking on a younger one might be told the following scenario, which includes a stipulation of rules of conduct:

There is a difference between a jungle and a society. In the jungle, strong animals often attack weaker animals who receive no protection. Human beings have families and societies in which the strong protect the weak and help them grow stronger. You have to decide whether you want to live in our family and be a member of society. If you keep picking on your little brother/sister, we will need to treat you like an animal from the jungle. We put dangerous animals in a zoo so they cannot hurt anyone, so we will keep you in your room. If you take care of your little brother/sister, mommy and daddy will let you play together and have fun.

By relying upon language to stipulate and enforce rules in this manner, a parent is most likely to achieve the short-term objective of encouraging appropriate and discouraging inappropriate behavior. In addition, by providing thoughtful explanations and justifications of rules, the parent increases the likelihood that the child will internalize a moral code of conduct as he/she matures.

Erik Erikson’s Stage Theory of Lifespan Development

Erik Erikson (1950; 1959) proposed a “cradle to grave” sequence of development which complements the stage theories of Piaget and Kohlberg. Erikson described eight “conflicts” associated with different periods of one’s life (see video). It was assumed that successful resolution of the conflict associated with a particular stage resulted in acquisition of the related “virtue” (e.g., trust, autonomy, initiative, etc.) for the rest of one’s life. Unsuccessful resolution would result in developmental problems during subsequent stages.

Childhood

During Erikson’s first (infancy) stage, taking place during Piaget’s pre-verbal sensorimotor stage, the attachment style of the caregiver will influence whether or not the infant experiences a nurturing and responsive social environment. If the caregiver is consistent in satisfying the basic needs for food, comfort, and relief from pain, the infant learns to trust them. If negligent, inconsistent, or abusive, the child will mistrust and perhaps fear the caregiver.

In the second (early childhood) stage, starting toward the end of Piaget’s sensorimotor stage and extending into the beginning of the verbal pre-operational stage, the young child is exploring and learning to control the environment on its own. A patient caregiver waits until the zone of proximal development is reached and applies encouraging, supportive scaffolding techniques during toilet training and other learning experiences. Such a parent is likely to ensure the child’s success, resulting in the feeling of independence and autonomy. If the child is hurried, scolded, or punished for failures, she/he may feel shame and doubt her/his capabilities.

In the third (preschool) stage, occurring in the middle of Piaget’s preoperational stage, the child must learn to dress and groom in a manner consistent with social norms and standards. If the child is encouraged to explore options, satisfy its curiosity, and express its own preferences and interests, it is likely to develop initiative. If discouraged, the child may become passive and doubtful of its own capabilities and experience guilt regarding its choices.

Erikson’s lengthy fourth (school age) stage starts toward the end of Piaget’s preoperational stage and extends through concrete operations into the beginning of the final formal operations stage. If, at home and school, the child is appropriately challenged and succeeds at progressively more difficult tasks, it becomes competent, confident, and industrious. The child must experience and learn to cope with frustration and inevitable failure. It is during this stage that the child becomes concerned about its own performance in comparison to others in and out of school. Feelings of inferiority can result from perceived inadequacies and negative social comparisons.

Adolescence and Adulthood

Adolescence: Preparing for Adulthood

Anatomy is destiny.

Love and work… work and love, that’s all there is.

Sigmund Freud

“Sex, drugs, and rock & roll”

Life Magazine, 1969

Congratulations, your child has made it to Piaget’s formal operations and Kohlberg’s conventional moral stages and is a trusting, independent, guilt-free, industrious teenager. Erikson’s fifth (adolescence) stage takes place during the middle-school and high-school years. This period can include substantial peer pressure as students compare their physical appearance, school performance, personal characteristics, and personal possessions with others. Erikson is perhaps most famous for coining the term “identity crisis” to refer to the characteristic questioning of one’s personal qualities, goals, and social roles during his fifth developmental stage. Parental tensions between the desire to protect one’s child and the need to foster independence can result in an increase in the frequency and intensity of conflicts and arguments during this stage. It is tempting to volunteer suggestions or to impose solutions on a teenager struggling with identity issues. Peers are replacing parents as the primary role models and sources of reinforcement. Personal choices can have significant impacts upon the course of one’s life and pose significant dangers. It is a period marked by experimentation with hobbies, jobs, grooming habits, dress styles, sexual practices, alcohol, drugs, music and media, religious participation, and political beliefs. Negotiating the fine line between helpful guidance (scaffolding) and interference is difficult for parents as the teenager struggles to attain a unique identity. Non-requested suggestions or attempts to enforce restrictions can result in identity confusion.

James Marcia (1966) developed a model addressing Erikson’s identity crisis. Marcia distinguished between four different identity states based on two considerations: whether exploration had occurred and whether a commitment had been made.

Identity issues could include relationships (friends and/or romantic partners), gender roles, religion, politics, interest in attending higher education, future vocation, etc. As implied by Erikson, it is desirable that the adolescent be exposed to and permitted to explore various options for each of these issues. Only then could the adolescent make an informed decision regarding whether to commit to a particular choice, thereby attaining identity achievement. If commitment occurs without the opportunity to explore options (e.g., a parent’s making the decision regarding a romantic partner or future career), identity foreclosure occurs. Moratorium refers to the continuing state of exploring prior to making a commitment. Identity diffusion occurs when one never addresses identity issues or commits to a specific choice.

Freud remains current in his observation that the human condition includes the two major developmental tasks of preparing for love and work. The beginning of adolescence is demarcated by the onset of puberty, as males and females gradually become physically capable of reproduction. Accompanying this change is the development of a new and powerful basic drive. The Nukak have no reason to discourage sexuality or delay child-bearing. As soon as males and females are ready, they pair off and usually form monogamous relationships. Living in the low-population-density rainforest means that there are very few available potential mates. This has its advantages (e.g., a relatively simple and low-stress “courtship” period) and disadvantages (extremely limited choice).

We live very different lives than the Nukak as the result of centuries of civilization and available technologies. Industrialization resulted in people moving from predominantly rural, low-population-density, agricultural lifestyles to urban, high-population-density, manufacturing lifestyles. Many of the jobs created were dangerous and some required advanced intellectual skills. This raised the need and desire for compulsory education toward the end of the 19th century. In 1890, 5% of American 14-17 year-olds were enrolled in high school. By 1970, 90% were enrolled (Tanner, 1972). The requirement to attend school meant delaying becoming independent from one’s parents and starting a family. G. Stanley Hall suggested the need for a new developmental stage to refer to this delay period between childhood and adulthood. He called it adolescence (Hall, 1904), derived from the Latin word adolescentem, meaning to mature or grow up.

Adulthood

Congratulations! Thanks to the magic of fast-forward developmental psychology, your pretend child has caught up with you and is a college student. What do you want to be when you grow up? Interestingly, your pretend child catching up with you puts you in a similar situation to your parents. You might be considering what you would like your child to become. Your answers for both yourself and your pretend child probably continue to relate to Freud’s two major developmental tasks: finding a partner in life and attaining suitable, stimulating, and enjoyable work. Unlike the Nukak, you and your young adult theoretically have an enormous choice of potential life partners and occupations. This is true even in comparison to relatively recent generations of city dwellers. The internet has introduced globalization to pairing off as well as to the marketplace. We live in a hyper-connected world where geographic distance no longer necessarily limits our opportunities to meet or communicate with others.

The same trend observed with high school attendance also applies to college attendance. In 1890, less than 5% of 18-21 year-olds were enrolled in college. By 1990 this number exceeded 60% (Arnett & Taber, 1994). No doubt this percentage will continue to increase in the future, and we will observe similar trends in graduate and post-graduate education. Thanks to machinery reducing the need for physical strength and the availability of contraception, anatomy is not necessarily the dominating force in a woman’s destiny that it was in the past. Opportunities for women expanded enormously during the past two generations as we transitioned to an economy based upon service and information. In 1950, the average age of marriage was 23 for men and 20 for women in the United States. By 2000 these ages had increased to 27 and 25, respectively (Arnett, 2000). Due to the educational requirements of many current vocations in technologically-advanced societies, there is usually the need to delay financial independence from one’s parents and starting a family even longer than when Hall proposed the adolescent developmental stage. Arnett (2000) suggested the need to add emerging adulthood as another developmental phase between the end of adolescence (e.g., graduation from high school at about the age of 18) and adulthood (financial independence, living apart from one’s parents, starting a family, etc.). Arnett found that many college students report feeling “in between” adolescence and adulthood, consistent with considering emerging adulthood an intermediary stage of development. If you were to define yourself as being in a developmental stage, would you consider yourself an adolescent, adult, or emerging adult? What would you consider your pretend child to be at the start of the freshman year?

Erikson’s sixth through eighth stages consist of young adulthood, middle adulthood, and maturity (see Figure 8.14). In the rainforest, the major developmental transition occurs when children leave their parents’ home to mate and have children. Life in the developed world is marked by minor transitions from elementary- to middle-school and middle-school to high school. Those who fail to graduate high school tend to fare poorly in the increasingly skills- and education-oriented global economy. Even high school graduates can have difficulty finding jobs when there are downturns in the economy. Taking on the responsibilities of adulthood at about 18 years of age necessarily limits one’s career (and associated economic) options. The extent of your education will also probably affect whom you find suitable as a mate and vice-versa. If you are attending college, it is likely that your parents considered these economic and social realities in supporting this goal.

As indicated previously, a substantial majority of contemporary high school students go on to college. This almost always results in delaying the start of a career or a family. These students would be considered by Arnett (2000) to be in the emerging adult stage until taking on the responsibilities associated with being an adult. The identity issues characteristic of Erikson’s adolescent stage carry over into emerging adulthood for those attending college and graduate school. Marcia’s (1966) emphasis on the importance of exploring options and making commitments remains appropriate. In addition to being a time to study and advance in your pursuit of a career, it is a time for meeting new potential friends and romantic partners. The commitments you make to a career and romantic partner will have major impacts upon the success and enjoyment you experience during middle adulthood and the likelihood of having disappointments and regrets when you reflect back on your life.

Putting It All Together: Looking Back

As an exercise for ending the human development chapter, I would like you to consider how the material helps you understand the factors which made you the unique person you are. Think of the importance each of the following played at different points of your life:

  • Heredity
  • Health (including nutrition, alcohol, and drugs)
  • Parents and caregivers
  • Siblings
  • Other family members
  • Friends from the sandbox through high school
  • Where you grew up, including community activities and problems
  • Schools
  • Religion
  • Jobs
  • Sports
  • Hobbies
  • Music and the Arts
  • Technology (including cars, computers, cell phones, the internet, etc.)

It might prove meaningful to consider the role each of these played during the ages corresponding to Piaget’s, Kohlberg’s, and Erikson’s stages. Previously, I indicated that it is inappropriate to consider stage theories as explanations, since they provide no information concerning specific genes and experiences. Stage theories can, however, provide valuable perspectives for describing and understanding the important behavioral changes that appear characteristic of the human condition. This is true whether describing a child growing up in the rainforest or a modern city.

Putting It All Together: Looking Forward

You may also wish to consider how the information in this chapter helps you plan for your future, including the possibility of becoming a parent. Are there implications regarding who you would like to become in the future and for accomplishing your goals? What are the implications of the findings regarding different parenting styles should you decide to trade in your pretend child for the real thing? Major issues you may wish to consider include how to address gender roles, which toys and technologies to introduce, and when and how to introduce them.

Figures 8.4 and 8.5 Male and female gender roles.

We can consider the implications of the transformation of the human condition for the development of human potential. Whether growing up in the rainforest or the city, much of a child’s capabilities are increased by genetically influenced growth and neurological changes. Improved observational learning skills and the introduction of speech enable the application of more effective and efficient indirect learning principles. All healthy children possess the potential to adapt to their environment. The pictures of members of the Nukak tribe remind us of the extreme differences in the human condition that currently exist on our planet. The manufactured picture of the changes occurring on Manhattan Island over the past 400 years dramatically reveals how technology has transformed the natural human condition (i.e., the planet earth) into one created by humans themselves. This may seem like science fiction, but we appear on the verge of creating a third, virtual, human condition. One of the most popular choices for a self-control project in my classes over the past few years has addressed some form of computer or cell phone usage. Contemporary college students are spending substantial parts of their lives (i.e., large slices of their personal pie charts) on social networking sites, playing video games, texting, and so on. The virtual community Second Life is popular; for many, it and other internet sites are becoming their first life!

Your generation’s children will be exposed to natural, human-manufactured, and virtual realities from birth (or earlier?). Parents and other caregivers, as always, will need to keep Vygotsky’s principles of the zone of proximal development and scaffolding in mind as they help children adapt to their ever-increasing choices regarding the human condition.

Chapter 7: Cognition, Intelligence and Human Potential

Learning Objectives

  • Describe how you would teach a child concepts of shape and number
  • Describe how prior learning can facilitate or interfere with problem-solving
  • Describe how the basic characteristics of the normal curve relate to individual differences
  • Describe how adaptive learning relates to performance on intelligence tests

Knowledge, Skills and Human Potential

I have described psychology as the science of human potential. It is through this lens that we view the different content areas. The biological evolutionary process, taking place over millions of years, resulted in our physical structure. In the “Mostly Nature” chapters, we examined how our physical structure, including our brain, permits and limits what humans can achieve. We saw how our genetic and physical features permit speech and the use of tools. Without these capabilities, humans could not have individually and socially evolved to the point where we could transform the planet over a relatively brief period of time. Recall how Manhattan appeared only 400 years ago.

In the previous two chapters of the “Mostly Nurture” section, we described how direct and indirect learning enables much of the animal kingdom to adapt to their specific environmental conditions. The extent to which we fulfill our individual potential depends on our environmental conditions and the types of learning experiences to which we are exposed. The Nukak survive under conditions requiring a nomadic lifestyle. This restriction impacts upon every level of Maslow’s human needs hierarchy. Hunting and gathering must be conducted daily in order for tribe members to survive. Shelters are temporary and unstable, providing little protection from the elements and predators. Most of a Nukak’s life is spent living with a small number of people, limiting opportunities for finding friends or potential mates. Activities related to survival consume much of the day, leaving little discretionary time for self-actualization (i.e., achieving one’s potential).

You probably found when you plotted your personal pie-chart that, in comparison to the Nukak, a relatively small part of your day is dedicated to survival. Instead, much of your time is spent on school-related work, perhaps a job, social activities, and recreation. At the end of the previous chapter we saw that, soon after learning to speak, you may have learned the ABCs and to count. This was followed by the acquisition of other knowledge and skills in preparation for attending school. Consider the importance of what you have learned in school to your ability to attain your personal goals and achieve your potential. This chapter completes the “Mostly Nurture” section. Here we will consider the types of knowledge and skills acquired in school and how they relate to human intelligence and to achieving our potential as individuals and a species.

Concept Learning

A stimulus class is a collection of objects sharing at least one common property. For example, all circles are geometric objects with all points on the circumference equally distant from the center. Concept learning is inferred when an individual responds in the same way to all instances of a stimulus class. Much of our knowledge base consists of concepts. For example “circle” and “boy” are qualitative concepts. In comparison, “middle-sized” and “tenth” are quantitative concepts. They differ in amount, not just in kind.

Parents usually try to teach such concepts soon after their children speak. How would a parent go about teaching the concept circle and know if the child understands? When I ask my students this question, they usually suggest that the parent say the word “circle” while pointing to circular objects in the environment. You may recall our discussion of the acquisition of word meaning under the topic of classical conditioning. In this instance, the word “circle” is associated with many different stimuli sharing the property. A discrimination learning procedure could also be used to establish and assess conceptual responding to circles. The child would receive an appetitive stimulus for saying the word “circle” while pointing to appropriate examples differing in size, color, etc. The child would never be reinforced for saying “circle” to other shaped stimuli. Eventually, the child should be able to appropriately generalize the response to new instances of circles.

The same procedure could be used with quantitative concepts. When I was a graduate student, the research literature on transposition (i.e., responding to stimuli on the basis of a relationship) suggested that other animals and young children were unable to apply the middle-size relation to physically dissimilar stimuli (Reese, 1968). In my doctoral dissertation (Levy, 1975), I demonstrated near-perfect middle-sized transposition on two very different sets of stimuli by nursery-school children. First they were taught to point to three small squares (such as those shown below in Figure 7.1) in the order of their height before being required to select the middle-sized one. The placement was changed over trials so the child had to change the pointing order in a manner consistent with the sizes. They were then asked to order and point to the middle-sized member of three much larger squares. The results supported the conclusion that middle-size transposition occurs only when a child sequentially orders the three stimuli in an array prior to choosing the middle-sized one. After being taught to count, it becomes possible to establish relational responding based on any quantity. For example, a child could be asked to point to the fifth-largest triangle. This would require ordering all the triangles in an array based on size and then, starting from the largest, counting to five.
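The ordering-then-counting procedure just described amounts to a simple sort-and-index operation. As a minimal sketch (the triangle heights are hypothetical), finding the fifth-largest item looks like this:

```python
# Hypothetical triangle heights presented to the child.
sizes = [3.2, 1.5, 4.8, 2.0, 5.5, 2.7, 4.1]

# Step 1: order the array by size, largest first.
ordered = sorted(sizes, reverse=True)

# Step 2: count to five (index 4 in zero-based terms).
fifth_largest = ordered[4]
print(fifth_largest)  # → 2.7
```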

Figure 7.1 Example of stimulus arrays presented on each trial of a middle-size problem. The three stimuli in an array appeared in random order on each trial.

Concept learning, perhaps surprisingly, occurs throughout the animal kingdom. For example, pigeons readily learn visual concepts such as “triangle” and “square” (Towe, 1954), can distinguish between letters of the alphabet (Blough, 1982), and respond to ordinal position (Terrace, 1986). Presenting slides in a Skinner box, it has been demonstrated that pigeons easily learn such abstract natural concepts as “tree”, “water”, and even “person” (Herrnstein and Loveland, 1964; Herrnstein, Loveland, and Cable, 1976). Apparently, excellent vision, not a large cortex (i.e., pigeons have “bird brains”), is necessary for learning such concepts. In a fascinating application of concept learning, Skinner (1960) humorously describes a previously-classified World War II project in which pigeons were taught to identify the defining characteristics of axis-power military ships. The objective was to respond to the attack on Pearl Harbor with our own squadron of “kamikaze pigeons.” The pigeons became the brains behind the first non-human “smart missile.”

We and the Nukak have in common many basic needs (e.g., food, water, shelter, temperature, danger, pain, etc.) and family relationships (e.g., mother, father, brother, sister, etc.). One strategy for describing and contrasting our distinct human conditions would be to study our linguistic concepts. For example, there is no doubt that the Nukak will have a much more extensive vocabulary for types of rain and types of forestry than we will. We will have more extensive vocabularies regarding planes, trains, and automobiles. When I was very young, my mother taught me “red car”, “blue car”, “green car”, etc. My father taught me “coupe”, “convertible”, and “sedan”, and eventually “Chevy”, “Chrysler”, “Ford”, etc.

Unlike the rain forest, some climates and geographies support domestication of plants and/or large animals. Such environmental conditions enabled the development of agriculture and animal husbandry, permitting humans to abandon the nomadic lifestyle. New vocabularies developed related to the essential concepts for these life-transforming activities. When humans were able to permanently settle in a location, larger and larger communities evolved. This created the need for concepts related to increasingly complex interpersonal relations. As food surpluses occurred, there were opportunities for people to dedicate their time and creative efforts to the development of new tools, technologies, and occupations. Eventually, communities, economic arrangements, governments, and formal religions evolved. Along with these developments, the collective human knowledge base and vocabulary expanded. It was after the last ice age, approximately 13,000 years ago, that the agricultural lifestyle became the predominant human condition (see Figures 7.2 and 7.3). For the great majority, this stage of human history probably had more in common with Stone-Age nomadic cultures than our contemporary conditions. Literacy was not essential and survival needs took up most of one’s daily activities. As noted previously, this changed with the industrial revolution and the institution of compulsory education.


Figure 7.2 Sickle from 3000 B.C.


Figure 7.3 Ancient Egyptian hoe and plow.

We have seen how the ability to use speech to communicate a continually-expanding vocabulary of concepts has enabled humans to live very differently and control their fates far more than the rest of the animal kingdom. Until relatively recently, however, only the privileged acquired the ability to read, write, and perform mathematical operations (i.e., learn the 3 “R”s). This meant that the great majority of humans, even in the relatively-advanced western societies, were unable to profit from or contribute to the accumulating knowledge recorded on the written page. John Adams, one of America’s founding fathers, stated “A memorable change must be made in the system of education, and knowledge must become so general as to raise the lower ranks of society nearer to the higher. The education of a nation, instead of being confined to a few schools and universities for the instruction of the few, must become the national care and expense for the formation of the many” (McCullough, 2001, p. 364). As Adams’s call for universal education was eventually realized, an increasing number of people became literate over the past century. This created an expanding pool to contribute to the ever-evolving knowledge-base. The resulting technologies continue to transform the human condition at an accelerating pace.

Learning to Learn

The ability to transform the human condition involves more than the knowledge and skills acquired in school. This knowledge must be converted into the action necessary to solve problems and to create tools. We will begin our discussion of problem solving by describing Harlow’s (1949) classic research demonstrating chimpanzees’ acquisition of learning sets. The term “learning set” may be interpreted to refer to either an independent or dependent variable. It can refer to a number of experiences that have something in common or to the effect of those experiences (as in being “set up”). Harlow provided his chimpanzees with over 300 two-choice visual discrimination problems. For example, the first problem might require choosing between a circle and a square; the second problem, between a red triangle and a green triangle; the third problem, a large diamond and a small diamond, etc. Different stimuli and dimensions were relevant across the different problems. Since each problem includes only two possible choices, the likelihood of being correct on the first trial by chance was always 50 per cent. The result on the first trial provides the information necessary for an alert subject to be correct from trial two on. If correct, one would continue to choose the same stimulus; if incorrect, one would switch to the other possibility. Harlow and others described this ideal performance pattern as a “win-stay, lose-shift” strategy.

The chimps’ performance improved gradually over the first 30 problems, suggesting an incremental learning process. This appears qualitatively, rather than quantitatively, different from the sudden, discrete win-stay, lose-shift strategy characteristic of human adults. However, over the remaining problems, the win-stay, lose-shift strategy emerged so that the performance of the chimps on the last 55 problems was human-like, with perfect performance on the second trial. It appears that just as pigeons are able to learn concepts by “abstracting out” the common characteristics of a collection of visual stimuli, chimpanzees are able to “abstract out” an approach to solving two-choice visual discrimination problems regardless of the stimuli involved. They have been “set” (i.e., have learned how to learn) to solve a particular type of problem.
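The win-stay, lose-shift strategy is simple enough to simulate. The sketch below is illustrative rather than a model of Harlow’s data; it shows why the strategy guarantees a correct choice on every trial after the first, while trial 1 remains at chance:

```python
import random

def run_problem(trials=6, rng=random):
    """Simulate one two-choice discrimination problem for an agent
    using the win-stay, lose-shift strategy."""
    correct = rng.choice([0, 1])   # which of the two stimuli is rewarded
    choice = rng.choice([0, 1])    # trial 1 is a pure guess (50% correct)
    results = []
    for _ in range(trials):
        won = (choice == correct)
        results.append(won)
        if not won:                # lose-shift: switch to the other stimulus
            choice = 1 - choice
        # win-stay: otherwise keep the same choice
    return results

random.seed(1)
outcomes = [run_problem() for _ in range(1000)]
trial1 = sum(r[0] for r in outcomes) / len(outcomes)  # near 0.5 by chance
trial2 = sum(r[1] for r in outcomes) / len(outcomes)  # always 1.0
print(f"trial 1 accuracy: {trial1:.2f}; trial 2 accuracy: {trial2:.2f}")
```

Because the first trial’s outcome identifies the rewarded stimulus, a subject applying the strategy can never err again, which is exactly the “perfect performance on the second trial” pattern Harlow observed late in training.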

Problems

We frequently describe challenges in life as problems. A problem exists when there is a discrepancy between the way things are and the way one would like them to be. The solution consists of acquiring the information and ability to eliminate the discrepancy. As described in Chapter 3, many animals appear to engage in behaviors which do not appear survival-related. Kittens and infants play with toys for extended periods of time with no apparent external reward other than the sensory stimulation. Monkeys will learn a response in order to gain the opportunity to look through a window (Butler, 1953). Human adults appear to find intrinsic reinforcements in solving complex problems. How else could we understand the creation of crossword puzzles and recreational games such as chess?

Two-choice discriminations are as simple as problems get. One piece (i.e., bit) of information is all that is required to solve the problem and obtain the reward. Crossword puzzles and chess are far more complicated. Perhaps we seek complexity because such experience is adaptive. Unfortunately, many problems in life are extremely difficult to solve and to address. Issues related to health, interpersonal relationships, and finances often top the list. It would be good preparation to acquire skills and strategies that apply in such circumstances. Many have likened life to a game of chess, posing problems having many possible options and requiring extensive planning for future possibilities. In fact, some have described life as consisting of one problem followed by another.

Psychologists have studied problem-solving in humans and other animals almost since the founding of the discipline. As described previously, Thorndike studied a few different species in puzzle boxes, describing the problem-solving process as involving trial-and-error (or success) learning. In his classic, The Mentality of Apes (translated in 1925), the Gestalt psychologist Wolfgang Kohler argued that the puzzle-box, by its very nature, requires a “blind” (i.e., trial-and-error) learning process since the required behaviors cannot be determined by observing the environment. Kohler created a number of problems for his subjects, primarily chimpanzees, in which the solution could be grasped by observing the environment. He considered such problems to be more representative of those we confront on a day-to-day basis.

One famous example of Kohler’s problems required that the chimpanzee insert a thin bamboo stick within a wider one. This created a tool long enough to reach a banana outside the cage. Another problem required stacking boxes high enough to reach a banana. A third required combining sticks to reach a banana hanging from above. The following classic video of Kohler’s research demonstrates individual and collective (i.e., social) problem-solving by his chimps with these tasks. Kohler amusingly anthropomorphized, attributing human characteristics to his subjects. He described the chimps’ initial frustration resulting from unsuccessful attempts and characterized it as involving “insight” when a chimp performed the behaviors necessary to obtain the banana.

Under circumstances where the necessary components of a solution are observable, Kohler characterized the problem-solving process as requiring “insight.” You may recall that Gestalt psychologists primarily studied perceptual phenomena (e.g., the phi phenomenon). It is not surprising that Kohler considered insight to be a perceptual process requiring reorganization of the perceptual field in order to attain “closure.” Presumably, the chimp continued to scan the environment until attaining the specific insight required to solve the current problem. Wertheimer (1945) later published a “how to” book based on Kohler’s work, extending Gestalt concepts to childhood education.

Facilitative Effects of Prior Experience

Other researchers believed that Gestalt psychologists under-emphasized the role of prior experience in problem-solving. The subjects in Kohler’s primate colony were reared in the wild, not in captivity. Since bamboo sticks were prevalent in that environment, it was likely that the chimpanzees had handled them previously, increasing the likelihood of solving the two-stick problem. Birch (1945) provided five previously unsuccessful chimps with sticks to play with for three days. They were observed to gradually use the sticks to poke, shovel, and pry objects. When again provided with the two-stick problem, all five chimps discovered the solution within 20 seconds, demonstrating the importance of prior experience.

Based on Harlow’s observation of learning to learn, one can imagine Kohler’s chimps entering their cages, looking for the banana, and asking themselves “OK, what does Kohler want me to do today?” In a humorous simulation of the box climbing problem (Epstein, Kirshnit, Lanza, & Rubin, 1984), pigeons needed to move a box under a plastic banana and then step on the box in order to peck the banana to receive food. Some pigeons were shaped to move the box to wherever a spot appeared on the floor, others were shaped to stand on the box and peck the plastic banana, and a third group was taught both responses. Only the group taught both components of the required behavior displayed “insight”, confirming the importance of prior learning experiences in problem-solving (see video).

Interference Effects of Prior Experience

Prior experience can impede, as well as facilitate, problem-solving. Luchins (1942) gave college students a series of arithmetic problems to solve (see Figure 7.4). They were asked to provide the most direct way of obtaining a certain amount of liquid from jars holding different quantities.

Problem   Volume of jug A   Volume of jug B   Volume of jug C   Amount to obtain
Example   29                3                 —                 20
1         21                127               3                 100
2         14                163               25                99
3         18                43                10                5
4         9                 42                6                 21
5         20                59                4                 31
6         23                49                3                 20

Figure 7.4 Luchins’s water jar problems.

In an example, subjects were first shown how it was possible to obtain 20 units of water by filling a 29-unit container and spilling 3 units into a separate container, three times (i.e., 29 − 3 × 3, or A − 3B). After the example, a control group was administered problem 6, which could be solved using two jars (the direct solution, A − C) or all three jars (the indirect solution, B − A − 2C). An experimental group was provided with five problems that could be solved with the B − A − 2C formula prior to being administered the last problem. The experimental subjects were much more likely than the control subjects to use the less-efficient indirect method. This sort of “rigidity” is counter-productive. Thus, although it is often helpful to rely upon past experience in approaching problems, there is also value in considering each problem separately. Otherwise, we may be very unlikely to “think outside the box.” Figure 7.5 shows the solution to the well-known 9-dot problem in which one is instructed “Without lifting your pencil from the paper, draw exactly four straight, connected lines that will go through all nine dots, but through each dot only once.” Here, the solution literally requires thinking outside the box.
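The jar formulas can be checked with simple arithmetic. The sketch below encodes the problems (using jar volumes of 23, 49, and 3 units for problem 6, so that both solutions yield 20) and verifies that the trained indirect formula B − A − 2C solves every problem, while only problem 6 also yields to the shorter direct method A − C:

```python
# Each problem: (A, B, C, goal), following the water jar table.
problems = {
    1: (21, 127, 3, 100),
    2: (14, 163, 25, 99),
    3: (18, 43, 10, 5),
    4: (9, 42, 6, 21),
    5: (20, 59, 4, 31),
    6: (23, 49, 3, 20),
}

def indirect(a, b, c):   # fill B, pour off A once and C twice
    return b - a - 2 * c

def direct(a, b, c):     # fill A, pour off C once
    return a - c

# The indirect formula works on every problem, which is what
# induces the experimental group's "set."
for n, (a, b, c, goal) in problems.items():
    assert indirect(a, b, c) == goal

# Only problem 6 can also be solved with the simpler direct method.
print(direct(*problems[6][:3]) == problems[6][3])                        # True
print(any(direct(*problems[n][:3]) == problems[n][3] for n in range(1, 6)))  # False
```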


Figure 7.5 “Think outside the box.”

A special case of being blinded by past experience has been demonstrated with the use of physical objects, a phenomenon called “functional fixedness.” Duncker was one of the pioneers investigating functional fixedness in humans. One of the tasks he created required using several common objects in an unfamiliar way in his well-known “candle box” problem (see Figure 7.6).

Figure 7.6 Functional fixedness.

In another example of functional fixedness, Maier and Janzen (1968) found that college students were much more likely to use some objects rather than others to tie together strings suspended from the ceiling. For example, they were more likely to tie a ruler to the bottom of a string than a bar of soap. Presumably, the usual function of soap interferes with consideration of it for another use, even within a different context. This effect was demonstrated experimentally by Birch and Rabinowitz (1951). Two groups of college students were provided experience using two different objects to complete electrical circuits. Subjects were far more likely to use the unfamiliar object as the weight when they were given the two-string problem to solve.

The Gestalt psychologists emphasized the tendency to perceive objects as meaningful wholes. Functional fixedness appears to be an inevitable result of this tendency. An implication of this perspective is that requiring individuals to describe the parts of objects should reduce the likelihood of functional fixedness. This was found to be the case when college students were asked to engage in a task similar to the introspection procedure employed by the structuralists to analyze conscious experience (McCaffrey, 2012). Subjects were asked to break down objects into their component parts without consideration of how they were used. This reduced the occurrence of functional fixedness.

The Unusual Uses Test (Guilford, Merrifield, and Wilson, 1958; Guilford and Guilford, 1980) is a popular assessment of creativity based upon the concept of functional fixedness. One is asked to list as many uses as possible for different objects (e.g., “What can you do with a brick?”). Responses may be counted or scored for originality. It is conceivable that encouraging test-takers to break down objects into their component parts could increase creativity scores on this test.

The General Problem-Solving Process

A general problem-solving process including five distinct stages has been described. The stages are: (1) general orientation; (2) problem definition and formulation; (3) generation of alternatives; (4) decision making; (5) verification (Goldfried and Davison, 1976, p. 187). The general orientation stage encourages individuals to approach situations eliciting unpleasant emotions as problems. Problems relating to health, interpersonal, and financial matters can be devastating, possibly resulting in debilitating anxiety and/or depression.

Weight-control is a common health concern. When I consulted for a medically-supervised weight clinic, I encouraged a self-control approach to fitness and health. Frequently, emotionality related to unrealistic societal ideals for appearance interfered with a client’s adhering to a prudent lifestyle. It was helpful to reduce the emotionality related to one’s appearance by adopting a problem-solving approach to weight control and body shape (Stage 1). The problem was defined as a discrepancy between one’s current weight and dimensions and a more desired profile (Stage 2). This permitted a relatively-detached brainstorming discussion of different nutritional and exercise modifications designed to affect caloric input and output (Stage 3). The likely benefits and drawbacks of implementing the different approaches were discussed with the goal of deciding upon a strategy that could be sustained (Stage 4). The decided-upon strategy was implemented, with objective (weight and measurements) and subjective (ease of implementation, satisfaction, etc.) progress consistently monitored (Stage 5). A TOTE (Test-Operate-Test-Exit) approach was implemented to determine the need for fine-tuning or changing the strategy (Miller, Galanter, and Pribram, 1960). Similar to a thermostat, the individual would test the environment (i.e., determine current weight and measurements), operate on the environment (i.e., “turn on” the nutritional and exercise program), and continue to assess progress until achieving the desired objective. This same process would be sustained in order to maintain the desired end state.
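The TOTE unit can be expressed as a simple feedback loop. The following sketch (with a hypothetical weight-loss goal; the numbers are purely illustrative) keeps operating on the environment until the test is satisfied, then exits:

```python
def tote(test, operate, max_cycles=1000):
    """Test-Operate-Test-Exit loop in the spirit of Miller, Galanter,
    and Pribram (1960): operate until the test passes, then exit."""
    for _ in range(max_cycles):
        if test():       # Test: has the goal state been reached?
            return True  # Exit
        operate()        # Operate: act on the environment
    return False         # Gave up before reaching the goal

# Hypothetical example: reduce weight from 180 to a goal of 170,
# one unit per operate cycle.
state = {"weight": 180}
goal = 170
reached = tote(test=lambda: state["weight"] <= goal,
               operate=lambda: state.update(weight=state["weight"] - 1))
print(reached, state["weight"])  # → True 170
```

Like the thermostat analogy in the text, the same loop can be left running to *maintain* the end state: any drift away from the goal makes the test fail again, triggering further operation.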

The same “thermostat” approach could be applied to financial matters. The problem could be defined as a discrepancy between a family’s income and expenditures. Brainstorming would be conducted to list possible ways to increase income or reduce costs. A strategy would be decided upon, implemented, and continually assessed. Adoption of a problem-solving approach is particularly helpful with interpersonal problems, which are almost always emotionally charged. It is difficult, but possible, to teach individuals or couples to respond objectively to the substance of what someone says while ignoring provocative language. Once this is achieved, difficulties and solutions can be mutually defined and strategies for addressing them can be negotiated prior to implementation and assessment (D’Zurilla and Goldfried, 1971).

Tools, Technology and the Human Condition

The Law of Accelerating Returns

The general problem-solving process represents a higher level of abstraction than the win-stay, lose-shift strategy that applies only to two-choice discrimination problems. This generic process emerges from learning-set type experiences with a variety of types of problems and may be applied to all others. For example, problems can occur in sense modalities other than vision, include more than two choices, and differ in complexity. The abilities to predict and control our environment, including problem-solving and creating tools, have enabled the transformation of the human condition. It is a mistake to believe that this occurred quickly or in a linear progression (i.e., equally spaced in time). To paraphrase Charles Dickens’ opening to A Tale of Two Cities (2003): It is the fastest of times, it is the slowest of times; it is the age of the internet, it is the age of the blowpipe.

Gordon Moore (1965) calculated that, since the invention of the integrated circuit in 1958, the number of transistors that could be contained on a computer chip doubled approximately every two years. Now known as Moore’s Law, this geometric relationship has been shown to also hold with computing speed and memory. Raymond Kurzweil (2001), inventor and futurist, proposed that Moore’s Law was simply one example of the generic Law of Accelerating Returns that applies to the pace of all evolutionary biological and technological change.
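The doubling relationship behind Moore’s Law is easy to express numerically. The sketch below uses an illustrative baseline of one unit in 1958 (not actual transistor counts) to show how doubling every two years compounds:

```python
# Doubling every two years from a 1958 baseline:
# count(t) = base_count * 2 ** ((t - base_year) / 2)
def relative_growth(year, base_year=1958, doubling_period=2):
    return 2 ** ((year - base_year) / doubling_period)

for year in (1958, 1978, 1998, 2018):
    print(year, f"{relative_growth(year):,.0f}x the 1958 level")
# 1978 → 1,024x; 1998 → 1,048,576x; 2018 → 1,073,741,824x
```

Twenty years of doubling every two years yields roughly a thousandfold increase, and sixty years yields about a billionfold: the geometric (rather than linear) character of the growth is the point of Kurzweil’s Law of Accelerating Returns.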

The Stone Age, Bronze Age, and Iron Age

Many archeologists divide the time period prior to recorded history into three stages. During the Stone Age, it took tens of thousands of years for the occurrence of such paradigm shifts (i.e., life transforming events) as the use of stone tools, the control of fire, and invention of the wheel. It took until the Bronze Age (3300-1200 BC) and Iron Age (1200-900 BC) for tools to be manufactured, as opposed to being handmade from items found in nature (see Figures 7.7 and 7.8). Skipping to the first millennium A.D., advances such as the use of paper for writing and toiletry, inventions such as the quill and fountain pens, guns and gunpowder, and the creation of the first public library, were occurring every hundred years or so.


Figure 7.7 Ancient stone tools.


Figure 7.8 Ancient stone wheel.

The Industrial Revolution and the Modern Era

In the nineteenth century, major advances occurred every few years. Toward the end of the century, the industrial revolution increased the pace, laying the groundwork for our current human condition. The steam locomotive and automobile replaced the horse as the fastest way to travel on land, permitting travel across the continent (see Figures 7.9 and 7.10), and the airplane took travel to the skies (Figure 7.11); the steamboat and submarine enabled speedy travel on and beneath the sea. These technologies were enhanced and, in some instances, replaced during the 20th century. Orville and Wilbur Wright’s initial flight in 1903 was followed by the development of the airplane as a speedy mode of transportation connecting the continents. Cross-country trips that took months by horse took days by train and hours by plane. Intercontinental flights replaced ships, previously the only way to traverse oceans. During the industrial revolution people left farms for employment in large cities. Highway development and the proliferation of cars reversed the trend, and suburban living became a preferred lifestyle for many. The reaper, steel plow, and refrigerator improved the yield and storage of food. Improved agricultural and meat processing techniques led to large, highly-efficient industries. Within 150 years, people went from producing their own food, to shopping at small markets, to shopping at large supermarkets.

Figure 7.9 Early steam locomotive.


Figure 7.10 Early automobile.

Figure 7.11 Wright brothers airplane

The telegraph and telephone (see Figure 7.12) enabled instant communication over long distances. The light bulb prolonged work and recreation time, and phonographs and cameras enabled the recording of audio and visual media. The revolver, repeating rifle, and machine gun changed self-defense and warfare, altering the balance of power among cultures and nations.


Figures 7.12 and 7.13 The telephone then and now.

Radio enabled everyone with electricity to listen to the same event at the same time, culminating in talk shows with audience participation. Television extended this phenomenon to the visual world and soon a common culture was being created consisting of news, sports, variety shows, comedies, soap operas, and reality shows (see Figures 7.14 and 7.15).


Figures 7.14 and 7.15 TV then and now.

Digitization and computer technology transformed the speed and power of processing and communicating information (see Figures 7.16 and 7.17). Communication satellites in space and optical fibers beneath the oceans connected the continents, enabling and encouraging globalization.


Figure 7.16 and 7.17 Computer then and now.

While foraging, the Nukak sometimes build small bridges to pass over small bodies of water. The bridge is a wonderful metaphor as well as a feat of engineering for connecting peoples and places. Think of where you are and how far it is possible to travel before reaching a bridge. One’s world is much smaller without them! The Nukak live in bands of approximately 15 individuals (usually the children and adults of 3 to 5 families). The tribe consists of approximately 20 bands living in different parts of a small region in the rain forest. That is their world. At the beginning of the 19th century, most human cultures lived on farms in small villages. That was their world. Technological innovations in transportation and communication have potentially connected every person on the planet. Within the span of 200 years, we have moved from communication by word of mouth and written letters to wireless phones and e-mail. We have landed a rocket ship on the moon and are currently exploring Mars for signs of life. These are our worlds!

The advances over the past two centuries have also led to a dramatic change in human life expectancy. “In the eighteenth century, we added a few days every year to human longevity; during the nineteenth century we added a couple of weeks each year; and now we’re adding almost a half a year every year. With the revolutions in genomics, proteomics, rational drug design, therapeutic cloning of our own organs and tissues, and related developments in bio-information sciences, we will be adding more than a year every year within ten years. So take care of yourself the old fashioned way for just a little while longer, and you may actually get to experience the next fundamental paradigm shift in our destiny” (Kurzweil, 2001).

The dramatic increases in human life expectancy over the past century and a half are primarily the result of improved sanitary conditions and inoculations against such diseases as smallpox, polio, rubella, diphtheria, and influenza. In current technologically-advanced cultures, the major causes of loss of life are lifestyle related. For example, over 300,000 Americans die each year from smoking-related disorders. Heart disease and cancer, which together account for half of all deaths each year, are significantly related to one’s nutritional and exercise habits. Health psychology has emerged as a sub-discipline of psychology dedicated to “the prevention and treatment of illness, and the identification of etiologic and diagnostic correlates of health, illness and related dysfunction” (Matarazzo, 1980). Hopefully, the knowledge acquired through this discipline will enable the development of lifestyle-related technologies essential to the continuation of the trend in human life expectancy.

Unfortunately, technology is a two-edged sword. It may be used to improve the human condition for the betterment of all or lead to our own extinction. The basic science of biology resulted in improvement in sanitary conditions and inoculations against major diseases. The same knowledge has been applied to create potentially devastating biological weapons. Chemistry has enabled the development of plastics and plastic explosives. Physics has enabled nuclear energy and nuclear weapons. Humans are the most creative and destructive force on this planet. It is the hope of this author that the science of psychology can contribute to our survival and enable us to realize the potential of our species.

Clearly, the automobile, airplane, telephone, radio, television, personal computer, cell phone, and World Wide Web have each transformed the human condition. How do we reconcile these advances occurring within such a short period of time with the concurrent Stone-Age existence of the Nukak? Once again, I will quote from Kurzweil’s extraordinary essay: “Technology goes beyond mere tool making; it is a process of creating ever more powerful technology using the tools from the previous round of innovation. In this way, human technology is distinguished from the tool making of other species. There is a record of each stage of technology, and each new stage of technology builds on the order of the previous stage” (Kurzweil, 2001). The recording of progress is responsible for the distinction between tool-making in other species and human technological change, according to Kurzweil. This same explanation can be applied to the distinction between the human conditions for the Nukak and us. I have emphasized the speeding up of the pace of technological change during the past two centuries. It is easy to forget the glacial pace of change during the Stone Age. The Nukak survive under geographic and climatic conditions that limit them to a hunter-gatherer lifestyle. They have learned to make fires by rubbing sticks together, to make blowpipes from cane, and to tip darts with the paralyzing drug curare. The inability to store foods or domesticate large animals makes it impossible to produce food surpluses. Life is a day-to-day struggle for survival. There is no time or opportunity to create the technologies that transformed the human condition in cultures originating in the Fertile Crescent.

Individual Differences

Not all individuals contribute equally or in the same way to the human condition. It was necessary for a substantial number of people with exceptional knowledge, problem-solving ability, and skills to work as a team to transform Manhattan from a forest to a metropolis. Until now, we have been focusing on differences between Stone-Age and technologically-enhanced cultures in order to appreciate extreme variations of the human condition. We have not discussed differences between the individual members of a culture. Not every member of the Nukak is the same height and weight. Not all members of the Nukak are equally skilled in blowing darts or fashioning necklaces. Not all college students are the same height and weight. Not all college students are equally proficient at shooting free throws or playing a musical instrument.

Psychology can be described as the science of individual differences. In the prior examples, psychologists would look to hereditary and experiential variables as potential causes of behavioral variation. Before we consider some controversial issues, it should prove helpful to place these issues within the larger context of how to formulate useful questions regarding individual differences. Frequently, by being specific and clear when defining terms, it is possible to shed light and avoid heat, even with the most contentious of topics. The scientific method is our best strategy for obtaining useful information to address difficult theoretical and practical questions.

I will use the game of basketball as an example since it is an internationally popular sport among adult males and females. The objective of the game is to shoot a 9-1/2-inch diameter sphere through an 18-inch circular rim located ten feet off the ground. The easiest and most certain way to accomplish this is to hold the basketball in your hands and “dunk” (or “stuff”) it through the rim (“hoop”). Basketball is a game where “size matters” (especially height). It is an advantage to be as close to the rim as possible.

One has to be over seven feet tall to be able to dunk a basketball while still standing on the ground. How likely is it that a person grows to be over seven feet tall? To answer this question, it would be necessary to measure everyone’s height and divide the number of people who are seven feet or more by the total. A more analytic approach would be to create a frequency distribution of the number of people of different heights.


Figure 7.18 Normal curves for height.

Figure 7.18 is an example of frequency distributions for the heights of samples of American husbands and wives. The normal curve is a symmetrical bell-shaped curve characteristic of many variables in nature, including human characteristics and performance (e.g., height, reaction time, etc.). It is defined by a formula that relates specific percentages of the area under the curve to distances along the X-axis. Distance is measured in standard deviation units, a statistical index of variability (i.e., consistency). The size of the standard deviation is based on the extent to which scores cluster around the mean. If scores tend to be close to the mean (i.e., are consistent), the standard deviation is low. If the scores vary widely from the mean, the standard deviation is high. The normal curve includes approximately two-thirds of the scores between plus and minus one standard deviation, and 95 percent of the scores between plus and minus two standard deviations.

One characteristic of any symmetrical curve is that the peak indicates the mean (i.e., average) score. Another characteristic of a symmetrical curve is how “spread out” it is. The male curve above seems more spread out than the “narrower” female curve. The narrowness of a curve indicates the extent to which the scores pile up close to the mean, that is, the consistency (or variability). The female scores are more consistent (i.e., less variable) in the figure. The average height for women in the figure is 65 inches with a standard deviation of 4, and the average for men is 71 inches with a standard deviation of 5. Assuming the distributions are normal, this would mean that approximately two-thirds of women are between 61 and 69 inches, and two-thirds of men are between 66 and 76 inches. A height of seven feet (84 inches) is 2.6 standard deviations above the mean height for men. This would mean that only about one in two hundred men attain that height. No wonder extremely tall individuals tend to be favored draft picks in professional basketball. They are hard to find.
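The percentages quoted above follow directly from the normal-curve formula. A quick check, using Python’s standard-library `statistics.NormalDist` with the means and standard deviations given in the text:

```python
from statistics import NormalDist

# Height distributions quoted in the text (inches)
men = NormalDist(mu=71, sigma=5)
women = NormalDist(mu=65, sigma=4)

# Roughly two-thirds of each group lies within one standard deviation of the mean
print(men.cdf(76) - men.cdf(66))      # ≈ 0.683
print(women.cdf(69) - women.cdf(61))  # ≈ 0.683

# Share of men at or above seven feet (84 inches),
# i.e., 2.6 standard deviations above the mean
tall = 1 - men.cdf(84)
print(tall)  # ≈ 0.0047, roughly 1 man in 215
```

The same two lines of arithmetic work for any normally distributed characteristic once its mean and standard deviation are known.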

Sometimes a basketball scout remarks that a particular player “has what you can’t teach.” The implication is that height is entirely genetically determined. In fact, it has been reported that hundreds of genes influence human height (Lango Allen et al., 2010). Clearly, whether one does or does not possess the Y-chromosome matters. It needs to be emphasized, however, that even a physical characteristic such as height can be significantly affected by environmental factors. The Centers for Disease Control statistics (2012) indicate that overall, the average heights for American women and men have been stable for many years. However, the heights of recent immigrants show an increase. The apparent explanation is that recent arrivals respond to the American diet, while those who have been exposed to it for extended periods have already approached their genetic potential.

If you cannot reach the basket while standing on the ground, it may still be possible to dunk the ball by jumping. An amusing basketball movie from several years ago was entitled “White Men Can’t Jump.” The implication of the title was that if one created frequency distributions for men of different races, one would see diverging curves similar to those for the height of women and men. Collecting such data and plotting the curves would determine the accuracy of the title. That is, it is an empirical question. Another empirical question would address the extent to which jumping is like height. Do you think nutrition might influence jumping ability? What about exercises designed to strengthen your leg muscles or improve flexibility? Is jumping something you can teach? Do you think there is such a thing as jumping technique? As one moves farther from the hoop, height becomes less of an advantage and skill level increases in importance. Shooting ability is clearly a characteristic related to basketball performance which can be taught and practiced.

Intelligence Testing

We will now try to apply the approach used to address questions regarding basketball to issues related to human intelligence. Perhaps no term is more misunderstood or, as we shall see, more misused than intelligence. It is common to describe ourselves or others as being “smart” (i.e., intelligent) or “not so smart.” A repeated lesson of this book is the need to be careful when labeling people. Labels can be used as pseudo-explanations, diverting us from searching for true explanations. Also, there is always the potential for self-fulfilling prophecies. When one attributes exceptionally good or poor performance to levels of “intelligence”, the search for another explanation ceases. Once people are labeled as intelligent or dull, this can have significant effects upon how they are treated, even by those with the best intentions.

Do you think people vary in intelligence the way they do with height, jumping ability, and shooting from a distance? If so, is intelligence more like height, jumping ability, or shooting from a distance? The first step in addressing this question requires defining what we mean by intelligence. Recall that an operational definition defines terms by the procedures used to measure them. For example, the definition of height would be the number of standardized units (e.g., inches) from the bottom of your feet to the top of your head when you are in an erect standing position. A person’s height is observable to someone else. We cannot directly observe intelligence as we do height. Intelligence is like learning, which is also not directly observable. Rather, it is operationally defined based on behavioral observations. Technically, we do not observe learning; we observe learned behavior. Applying this same approach to intelligence, we need to observe intelligent behavior.

At the beginning of the twentieth century, many of the countries experiencing the Industrial Revolution implemented compulsory education to increase the knowledge and skills of future workers. The French government asked Alfred Binet, a psychologist, to develop an easily administered test to identify children requiring special assistance to succeed in the public schools. Binet (1903) formulated an ordered list of 30 questions addressing basic skills such as memory, problem-solving, and vocabulary. Examples of simple items include asking a child to point to his/her nose and to name a food. Examples of difficult items would be to use three different words in a sentence and to provide the definition of an abstract word. Scoring was based on the concept of mental age as determined by the average number of items children of different ages got correct. It must be emphasized that Binet formulated his test to address a practical problem, school readiness, not to assess native ability. The test was designed to serve a supportive function: to diagnose the type of assistance a child needed to succeed. Binet anticipated the possibility of interpreting his test as measuring intelligence, but believed intelligence was multifaceted and fluid, rather than unitary and stable. He also believed intelligence was influenced by experience and that comparisons could only be made for people sharing similar environmental conditions (White, 2000).

Despite Binet’s (1903) stated reservations, the Stanford psychologist Lewis Terman (1916) standardized Binet’s test on American children, calculated an IQ (intelligence quotient) score as proposed by William Stern (1912), and considered it to measure intelligence. The IQ score was obtained by dividing a child’s mental age by the child’s chronological age and multiplying by 100. For example, if a 4-year-old tested at the level of an average 5-year-old, the IQ score would equal 125 (5/4 × 100).
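The quotient is simple enough to express directly in code; a minimal sketch (the function name is mine, not Stern’s):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

print(ratio_iq(5, 4))    # 125.0 -- the 4-year-old testing at a 5-year-old level
print(ratio_iq(10, 10))  # 100.0 -- performing exactly at one's age level
```

Note that the ratio formulation breaks down for adults, which is one reason later tests (including Terman’s own revisions and Wechsler’s scales) moved to deviation scores based on the normal curve.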

Unlike Binet, Terman believed his test items measured an inherited, unitary, and stable trait of intelligence. Based on this assumption, his standardization process produced IQ test results adhering to the normal curve with a mean of 100 and standard deviation of 15 (see Figure 7.19). This meant that a little more than 68 per cent of the scores were between 85 and 115 (i.e., from minus one to plus one standard deviation) and a little more than 95 per cent were between 70 and 130 (minus two to plus two standard deviations).
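These percentages can be verified from the parameters of the standardized distribution (mean 100, standard deviation 15) with Python’s standard-library `statistics.NormalDist`:

```python
from statistics import NormalDist

# Deviation IQ distribution: mean 100, standard deviation 15
iq = NormalDist(mu=100, sigma=15)

print(iq.cdf(115) - iq.cdf(85))  # ≈ 0.683 -- within one standard deviation
print(iq.cdf(130) - iq.cdf(70))  # ≈ 0.954 -- within two standard deviations
```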


Figure 7.19 Normal curve for IQ.

The Stanford-Binet became the most popular intelligence test for decades. It is ironic that a test developed to address a practical concern and considered by its founder to be inappropriate as an index of intelligence, became the basis for the first operational definition of intelligence (i.e., IQ test score). Having an operational definition for intelligence, it becomes possible to ask if the questions on the test appear to be measuring something clearly biological such as height, something probably having a strong biological component such as jumping ability, or something clearly requiring skill development such as shooting from a distance. Terman believed and acted as though IQ score, despite being inferred from behavioral observations, measured something akin to height. Arguably, a memory test such as digit span seems akin to height or jumping ability. The number of items one is able to repeat back is limited by the capacity of short-term memory. However, the great majority of IQ test questions are obviously influenced by experience. Children are taught to label and point to different body parts. Vocabulary and grammatical rules are learned. As described in Chapter 1, children must be taught to follow instructions and work to the best of their ability in order for the test to provide meaningful results.

Would it make sense to visit the rainforest and administer the Stanford-Binet to a Nukak child in English? Based on the test results, would it make sense to make important life decisions for the child? It is unfortunate that so much controversy and harm was introduced by redefining a procedure designed to assess school readiness as a test of intelligence. Terman believed “There is nothing about an individual as important as his IQ” (Terman, 1922). It is true that IQ score is a better predictor of school performance at all levels and of job performance than any other test result (Schmidt & Hunter, 1998). This should not be surprising. Binet and his colleagues spent 15 years developing items to determine which children would require special assistance to succeed in school. Many jobs in a technologically advanced culture are dependent on the skills and knowledge acquired in schools.

Unlike Binet, whose goal was to identify school children requiring special assistance, Terman proposed using IQ tests to classify children and place them on separate educational and career paths. This was frequently recommended despite the fact that the children were unschooled or English was not their native language. Terman became an advocate of eugenics, proposing that IQ test results should be used as a basis for controlling reproductive and educational practices. According to him, “High-grade or border-line deficiency… is very, very common among Spanish-Indian and Mexican families of the Southwest and also among Negroes. Their dullness seems to be racial, or at least inherent in the family stocks from which they come. Children of this group should be segregated into separate classes… They cannot master abstractions but they can often be made into efficient workers… from a eugenic point of view they constitute a grave problem because of their unusually prolific breeding” (Terman, 1916, pp. 91-92). Tragically, thousands of poor African-American women were involuntarily sterilized as the result of such positions (Larson, 1995, p. 74).

In 1974, Leon Kamin published The Science and Politics of IQ questioning the motivations behind the use of IQ test results as the basis for social policy recommendations. Other similar articles and books soon followed (cf. Block & Dworkin, 1976; Cronbach, 1975; Scarr & Carter-Saltzman, 1982). In 1994, Herrnstein & Murray published The Bell Curve: Intelligence and Class Structure in American Life, sparking further controversy regarding the interpretation of research findings and their social implications. In reaction to the increasingly heated public and professional debates regarding intelligence testing, the American Psychological Association appointed a Task Force chaired by the respected cognitive scientist, Ulrich Neisser. The Task Force was charged with reviewing the findings of the voluminous research literature, reaching conclusions, and making recommendations. The authors of the report concluded:

In a field where so many issues are unresolved and so many questions unanswered, the confident tone that has characterized most of the debate on these topics is clearly out of place. The study of intelligence does not need politicized assertions and recriminations; it needs self-restraint, reflection, and a great deal more research. The questions that remain are socially as well as scientifically important. There is no reason to think them unanswerable, but finding the answers will require a shared and sustained effort as well as the commitment of substantial scientific resources. Just such a commitment is what we strongly recommend (Neisser et al., 1996).

In Chapter 1, we discussed the requirements of psychological explanations and the implications regarding nature/nurture controversies. Intelligence is frequently used in a circular manner as a pseudo-explanation for behavior. Why does someone obtain a high score on an IQ test? – Because she/he is intelligent. How do you know someone is intelligent? – Because she/he scores high on the IQ test. IQ cannot serve as both an independent and dependent variable. An IQ test consists of behavioral tasks presumed to require intelligence. As such, IQ test performance is something to be explained (i.e., a dependent variable), not in and of itself an explanation (i.e., an independent variable). As always, psychology looks to nature and nurture for its explanations. No single gene has consistently been reported to have a strong effect on IQ (Deary, Whalley, & Starr, 2009). Hundreds of genes have been found to influence human height (Lanktree et al., 2011). It is likely that thousands of the 17,000 or so human genes influence IQ test scores.

We described how pseudo-explanations can result in self-fulfilling prophecies. It might surprise you to know that such effects have been experimentally demonstrated to occur with regard to intelligence both in the laboratory and in the field. In one study, college students were told that they were given either “maze-bright” or “maze-dull” rats to run through a maze (Rosenthal & Fode, 1963). Even though the rats were randomly assigned to the categories, the “maze-bright” rats performed better than the “maze-dull” rats. Presumably, the students’ expectancies influenced how they treated the rats and affected the results.

In an important book entitled Pygmalion in the Classroom, Rosenthal & Jacobson (1968) demonstrated the external validity of this finding with children in schools. After tests were administered to first- through sixth-grade students, teachers were told that the results indicated that some of their students would “bloom” that year. Randomly, 20 per cent of the students in each of the classes were designated as “bloomers.” Sure enough, upon re-testing at the end of the year, first- and second-grade students designated as “bloomers” improved more than the control students. The same effect was not demonstrated in the students in the later grades. It was suggested that young children are especially sensitive to the types of behaviors related to teacher expectancies.

Rather than acting as though intelligence exists as a human characteristic akin to height, it is more accurate, as well as prescriptive, to consider intelligence akin to jumping or shooting a basketball from a distance. Research must be designed to analyze the specific genetic and experiential components of behaviors considered to be intelligent. For example, what genes and learning experiences are necessary for a child to respond to an instruction to touch his/her nose, or include three words in a sentence? This approach avoids unnecessary controversy concerning racial or ethnic differences in intelligence. Rather, research is conducted to determine the potential causal variables in the acquisition of culturally-defined intelligent behaviors. Such a strategy is grounded in the reality that both nature and nurture contribute to an individual’s responding to any item on an IQ test.

Analyzing Intelligence

Do you think there is a trait of athleticism that applies to all sports? Or, do you think that there are separate abilities and skills that apply to different sports? One of Alfred Binet’s initial suggestions was that intelligence is complicated and can be analyzed into separate abilities and skills. This differed from Terman’s belief that intelligence was a unitary aptitude applicable under all conditions. More than a century has passed since Binet implemented his test in the Paris school system. Since then, other, more comprehensive tests permitting more analytic scoring and prescriptive applications have been developed.

David Wechsler gained experience developing adult intelligence tests for the military during World War I. While serving as Chief Psychologist at Bellevue Medical Center in New York City, he developed the Wechsler-Bellevue Intelligence Scale (1939). This was later published in 1955 as the Wechsler Adult Intelligence Scale (WAIS) and revised in 1981, 1997, and 2008. Wechsler agreed with Binet that intelligence was multi-faceted and included several diverse types of questions on his test. Wechsler also believed that the verbal abilities assessed on the Stanford-Binet were highly dependent on education and therefore culturally biased. He developed a combination of tasks which did not rely on verbal knowledge and that could produce a separate performance IQ score. Subsequent revisions of the WAIS included additional types of questions and more analytical scores.


Figure 7.20 Subscales of the Wechsler Adult Intelligence Scale.

Figure 7.20 provides an overview of the different categories and types of test items and the different scores (indexes in the Figure) one can obtain with recent versions of the WAIS. The WAIS and WISC (Wechsler Intelligence Scale for Children) are presently the most frequently administered intelligence tests (Kaplan & Saccuzzo, 2009, pp 250-251). One of the reasons for this popularity is the prescriptive capability resulting from the subscale indexes and the scores for different item types comprising each subscale. For example, a low score on the vocabulary items of the Verbal Comprehension index could suggest the benefit of working with flashcards whereas a low score on the information items might suggest assignment of reading material. A similar analytic and prescriptive approach would apply to the other indexes and item types.

It is possible to use the statistical technique of factor analysis to analyze intelligent behavior based upon the results of empirical research studies. Citing more than six decades of research evaluating human cognition, John Carroll (1993) obtained results supporting a three-stratum model of cognitive ability (see Figure 7.21). The highest stratum consisted of a General Intelligence factor, consistent with Terman’s unitary approach. However, the results also suggested the eight “Broad Ability” factors listed in the figure key as well as 69 narrow abilities. Analyzing intelligence test performance into different components in this way reduces the controversy resulting from a single global score. Rather than generating questions regarding differences in “intelligence”, questions regarding differences in performance on different types of tasks are generated. This requires examination of the specific broad and narrow abilities involved in answering test items. Ultimately, the genetic (nature – e.g., parts of the brain) and experiential (nurture – e.g., learning experiences) variables influencing the abilities impacting upon specific test items need to be specified.
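A taste of how a general factor emerges from factor-analytic work can be given with a minimal numpy sketch. The correlation matrix below is invented for illustration (real analyses such as Carroll’s involve many more tests, plus communality estimates and rotation): eigendecomposing a matrix of uniformly positive subtest correlations yields a dominant first factor on which every subtest loads positively.

```python
import numpy as np

# Hypothetical correlation matrix for four subtests (e.g., vocabulary,
# arithmetic, block design, digit span) -- illustrative numbers only,
# not real test data.
R = np.array([
    [1.00, 0.55, 0.40, 0.45],
    [0.55, 1.00, 0.50, 0.48],
    [0.40, 0.50, 1.00, 0.42],
    [0.45, 0.48, 0.42, 1.00],
])

# Eigendecompose the (symmetric) correlation matrix and sort the
# eigenvalues from largest to smallest.
eigenvalues, eigenvectors = np.linalg.eigh(R)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Loadings on the first (dominant) factor. The eigenvector's overall sign
# is arbitrary, so flip it if necessary to make the loadings positive.
g_loadings = eigenvectors[:, 0] * np.sqrt(eigenvalues[0])
if g_loadings[0] < 0:
    g_loadings = -g_loadings

# With uniformly positive correlations, every subtest loads positively on
# the first factor, and it accounts for the largest share of the variance.
print(g_loadings)
print(eigenvalues[0] / eigenvalues.sum())
```

The “positive manifold” (all subtests correlating positively) is what makes the first factor dominant; the smaller remaining factors are where the broad and narrow abilities of the lower strata live.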

Figure 7.21 Carroll’s three-stratum model of cognitive ability. Key: fluid intelligence (Gf), crystallized intelligence (Gc), general memory and learning (Gy), broad visual perception (Gv), broad auditory perception (Gu), broad retrieval ability (Gr), broad cognitive speediness (Gs), and processing speed (Gt). Carroll regarded the broad abilities as different “flavors” of g.

Different Types of Intelligence

Do you think the same type of athleticism applies to all sports? Or, do you think there are different forms of athleticism applying to basketball players, baseball players, soccer players, etc.? Relating this to intelligence, it is common for people to distinguish between “school smarts” and “street smarts.” Does that distinction make sense to you? It does to Howard Gardner. Wechsler disagreed with Terman’s belief that intelligence was unitary as opposed to multi-faceted. Gardner (1983) disagreed with Terman’s belief that the Stanford-Binet test measured the only important form of intelligence and proposed a multiple intelligence model (see Figure 7.22).


Figure 7.22 Howard Gardner’s Multiple Intelligence Model.

Verbal/linguistic intelligence, logical/mathematical intelligence, and to a lesser extent, visual spatial intelligence, are the domains emphasized on the majority of standardized tests. Again, this should not be surprising since Binet developed the original test to assess school readiness. Gardner believed it was necessary to also consider bodily/kinesthetic intelligence, musical/rhythmic intelligence, intra- and inter-personal intelligence, and naturalistic intelligence, in order to appreciate the full range of human intellectual ability and accomplishment.

Intelligence and Human Potential

I previously quipped that based on our DNA and the amount of brain space dedicated to our hands and speech-related body parts, the title of this book could be “Thumbs, Tongues, and Cortex.” Human potential and accomplishment are built upon this three-legged stool. Without the conceptual knowledge, problem-solving ability, imagination, and creativity permitted by our brains (i.e., what we usually consider “intelligence”), our speaking and tool-making capabilities would be very limited. Wechsler defined intelligence as “the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his environment” (1939). Eat, survive, reproduce. When we examine the aptitudes and abilities required to obtain and prepare food, build and maintain shelters, establish and maintain cooperative relationships with relatives, friends, and significant others, and raise children, we can appreciate Gardner’s consideration of other, non-school related forms of intelligence. We had to be intelligent in order to survive on this planet for a very long time before we created schools. It is only in the past century that for many, adapting to the human condition became so related to the three “R”s and performing well on standardized and non-standardized tests. One can debate the appropriateness or inappropriateness of considering any of Gardner’s eight “intelligences” as aptitudes, talents, skills, or traits. What cannot be debated is the essential role each has played in the totality of human achievement and the importance of each when considering our potential as individuals and a species. Much human achievement requires cooperation and teamwork. This is true in order to survive in the rainforest or to transform Manhattan Island. Our combined potential is greater than the sum of our individual potentials.
The transformation of Manhattan required cooperation among diverse individuals possessing the different talents and skills required to plan, design, and create the impressive skyline. The best strategy for realizing our potential as a species is to act upon John Adams’s and Alfred Binet’s desires to educate each and every individual.

Consideration of intelligence in this chapter may seem out of place with regard to the organization of the book. As described in Chapter 1 and in the material above, nature and nurture are involved in intelligent human behavior. This would suggest the Nature/Nurture section as the appropriate location for the topic. Instead, I chose to discuss intelligence as a way of concluding the Mostly Nurture section.

The bottom line of Wechsler’s definition of intelligence is its adaptive nature. What is considered intelligent depends upon one’s physical and social environmental demands. Surviving in the rainforest requires very different behaviors than performing well in school. Performing well in school requires different behaviors than performing well on the job or in social contexts. It took millions of years of natural selection for the human being to evolve. The result was an animal capable of adapting to a wide range of environmental conditions. As social and communicating animals, humans profit from the experiences of others. Shared knowledge and skills have resulted in the accelerating development of life-transforming tools and technologies. There is no way to predict the environmental conditions humans will create in the future. We can predict the continued modification of and adaptation to a new world; perhaps even new worlds!

Intelligence and Self-Control

God, give me grace to accept with serenity the things that cannot be changed,

Courage to change the things which should be changed,

and the Wisdom to distinguish the one from the other.

Reinhold Niebuhr

Do you think you can be more intelligent? Your answer to the question may depend on whether you agree with Terman’s or Binet’s assumptions. If, like Terman, you believe intelligence is unitary, inherited, and fixed, a passive serenity is called for. If you agree with Binet, that intelligence is multi-faceted and affected by experience, a more active, courageous approach becomes possible. We have seen that the science of psychology has resulted in knowledge regarding procedures effecting behavior change. This makes it possible for you to apply the self-control process described in previous chapters to change the behaviors you consider to reflect intelligence and develop your potential.

At the beginning of this chapter we saw how much of our knowledge consists of concepts and that adaptation may often be described as problem-solving. A college education is designed to expand your knowledge base as well as improve and add to your problem-solving skills. Review of the types of items tested on Wechsler’s IQ test (see Figure 7.20) reveals how attending college could improve your performance on each of the sub-scale indexes. Succeeding in college will require a significant amount of reading in diverse content areas. Along the way you will acquire many new concepts and expand your vocabulary. You will take math courses requiring quantitative reasoning and humanities courses requiring comprehension and critical thinking. You can maximize the benefit of your formal education by being an active student. Constantly test yourself for mastery of the material. Try to integrate the information acquired in different courses and consider how to apply the knowledge and skills beyond the classroom. Time permitting, read for pleasure. Whether you enjoy fiction or non-fiction, reading will expose you to new information and ideas. The more you learn, the more informed and thoughtful you will become and the more likely to fulfill your potential.

Chapter 6: Indirect Learning and Human Potential

Learning Objectives

  • Provide direct and indirect examples of predictive and control learning
  • Relate Bandura’s four-stage model of observational learning to the results of the Bobo doll study
  • Describe how adaptive learning principles explain the acquisition and use of language
  • Provide examples of short-term and long-term memory

Observational Learning

Direct and Indirect Learning

In order to appreciate the differences between the lives of hunter-gatherer humans such as the Nukak and the lives of technologically enhanced humans, it is necessary to consider the role and extent of social learning (i.e., learning involving others of the same species). Social learning can consist of simply observing how others behave under specific circumstances, or of symbolic communication through the use of language. Usually, Introduction to Psychology textbooks cover observational learning in the same chapter as classical and instrumental conditioning, with language appearing in a different chapter. I prefer to combine these topics, using the previously mentioned distinction between direct and indirect learning.

In classical and instrumental conditioning, an individual interacts directly with environmental events. Pavlov’s dogs were exposed to the tone and food; Skinner’s rats could press the bar and receive food. In contrast, observational learning is indirect in the sense that someone (or something) else is interacting with the environment. An example of indirect classical conditioning might involve one child (the observer) witnessing another child (the model) being jumped upon by a dog and acting fearful. It is likely that even though the dog did not jump on the observer, he will be fearful in its presence. An example of indirect instrumental conditioning might involve a child witnessing another taking a cookie from a cookie jar. We all know what will happen next.

Language is a consensually agreed upon collection of arbitrary symbols representing objects, movements, properties, and relationships among objects and events. Through language, humans can provide similar information to that provided through observational means, thereby resulting in similar behaviors. For example, one can tell a child that a particular dog might jump on him, or that there are cookies in a cookie jar. Olsson and Phelps (2004) compared direct, observational, and linguistic learning of a fear of faces. Human subjects were either exposed to a shock (direct learning) in the presence of a picture of a face, observed another person’s emotional reaction to the face (indirect observational), or were told that the picture of the face would be followed by shock (indirect symbolic). All three groups subsequently demonstrated similar fear reactions to the picture of the face. The three types of experience represent different paths to the same adaptive learning (see also Kirsch, Lynn, Vigorito, and Miller, 2004). We will now consider each of the forms of indirect learning in greater depth.

Observational Learning


Figure 6.1 Albert Bandura.

Bandura’s Four-stage Model of Observational Learning

Albert Bandura is to the study of observational learning what Pavlov is to the study of predictive learning (classical conditioning) and what Thorndike and Skinner are to the study of control learning (instrumental or operant conditioning). Bandura conducted some of the pioneering research demonstrating observational learning in children and developed a comprehensive theory of social learning (Bandura, 1962, 1965, 1969, 1971, 1973, 1977a, 1977b, 1978, 1986; Bandura, Ross, & Ross, 1961, 1963a, b; Bandura & Walters, 1963). Much of his empirical research relates to the four-stage model of observational learning he proposed to analyze and organize the voluminous literature. The four logically necessary observational learning processes are attention, retention, production, and motivation. That is, in order for an observer to imitate a model, the observer must attend to the model’s behavior, retain information regarding its important components, have the ability to produce the same actions, and be motivated to perform them. The model is a chain as strong as its weakest link: if any stage is missing, imitation (but not necessarily learning) does not occur. We will now review some of the major variables found to influence each of these stages.
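The "weakest link" logic of the four-stage model can be sketched in a few lines of code. This is my own illustration, not Bandura's formalism: imitation requires every stage, while learning without performance (the latent case discussed under Motivation below) requires only attention and retention.

```python
# Illustrative sketch (not Bandura's own formalism) of the
# "chain as strong as its weakest link" logic in the four-stage model.

def imitation_occurs(attention: bool, retention: bool,
                     production: bool, motivation: bool) -> bool:
    """An observer imitates a model only if all four stages are satisfied."""
    return all([attention, retention, production, motivation])

def learning_may_occur(attention: bool, retention: bool) -> bool:
    """Learning can occur without imitation: an observer who attended to
    and retained the modeled behavior may perform it later, once an
    incentive appears (latent observational learning)."""
    return attention and retention

# A capable, motivated but inattentive observer does not imitate:
print(imitation_occurs(False, True, True, True))    # False
# An unmotivated observer may nevertheless have learned:
print(learning_may_occur(True, True))               # True
```

The point of the sketch is simply that the four stages combine conjunctively, whereas the acquisition of the behavior and its performance are separable.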

Attention

Much of what we have learned about direct predictive and control learning applies to observational learning as well. Human beings predominantly rely upon the senses of vision and hearing to adapt to environmental demands. In order to imitate what we see or hear, we must be attending to critical elements of modeled behavior. Factors such as intensity, attractiveness, and emotionality will enhance the salience of a stimulus, increasing the likelihood of imitation (Waxler & Yarrow, 1975).

Prior learning experience, in the form of perceived similarity to self, significantly affects the probability of attending to the different models in one’s environment. One is more likely to attend to individuals of the same sex, age, race, ethnicity, and social class, among other attributes. Girls and boys are universally treated very differently. From birth, they are dressed differently, given different hair styles, and encouraged to engage in different behaviors. Girls are encouraged to “play house” and boys to “play ball.” When children first start to speak, they soon learn to categorize the world into “mama”, “dada”, boys, and girls, and to assign themselves gender and age identities. These assignments influence their choices of models throughout their lives. Selective indirect learning experiences influence family responsibilities in all cultures, and education and career opportunities in technologically enhanced cultures.

In addition to those similar to themselves, people are most likely to attend to others designated as “authority figures” or “role models”, whether these designations are earned or assigned. In a Stone-Age culture such as the Nukak, there are very few potential models. Elders are most likely to be considered authority figures, with special powers or abilities attributed to some. In our culture, every day we come into contact with a large number of potential models based on kinship, grade-level, occupation, organization-membership, friendship, etc. In addition to these “live” examples, we are exposed to countless potential models on the radio, TV, internet, etc. The likelihood of paying attention to a model can be based upon perceived functional value. For example, one may seek out a particular relative or friend, or search for a particular website, in order to obtain knowledge or skills that relate to a current problem. Sources of authority may include elders, teachers, clergy, “experts”, or celebrities. For example, it was demonstrated that 11- to 14-year-old girls performed better on a task modeled by a cheerleader as opposed to a lower-status female model (McCullagh, 1986).

In the previous chapter, we described Nukak foraging trips. Many of the skills required for hunting, gathering, and preparing food are acquired through observational learning. “Every Nukak knows how to make virtually everything he or she will need during his or her lifetime, and the basic material for making these items can be found within the band’s territory” (Politis, 2007, 229). The blowpipe, fashioned from cane, is the primary hunting tool. Darts made from palm trees are shaped, sharpened, and tipped with the paralyzing drug curare obtained from the bark of the parupi vine. Nukak men spend considerable time making 7- to 10-foot long blowpipes, caring for and maintaining them. Smaller blowpipes (less than 6-feet-long) are constructed for young boys to play with and acquire expertise. Male adolescents often accompany their fathers on foraging trips with scaled-down blowpipes.

Women and girls are responsible for grinding various fruit and seeds. The mortars are created from sections of tree trunks and the pestle is a straight stick with one end flattened. Women also fashion clay pots for the storage and transport of fruits and liquids, fiber hammocks, and baskets of different sizes made from vines (Politis, 2007, 210-217).


Figure 6.2 Children’s cooking class.

Retention

Bartlett (1932) conducted memory research with meaningful materials such as stories or fables. He found that in retelling stories, people tended to alter them in systematic ways. He concluded that memory is a reconstructive rather than a reproductive process involving leveling (simplification), sharpening (exaggeration of specific details), and assimilation (incorporation into existing schemas). Thus, when we observe a model, we are not storing a “videotape” of what we see and hear but rather encoding our observations in such a manner that we can reconstruct what occurred at a later time. For example, if someone is demonstrating how to open a combination lock, we will probably try to memorize verbal instructions (e.g., turn clockwise past 0 to 14, turn counter-clockwise past 0 to 28, etc.). It will decrease the likelihood of encoding errors if this complex behavior is broken down into manageable units. The instructions should be repeated at a slow pace, out loud or silently, to improve retention and increase the likelihood of opening the lock. Adults who verbally coded modeled events and actively rehearsed afterward were much better at imitating what they observed than adults who did not code the events or were prevented from rehearsing (Bandura & Jeffrey, 1973). Later in this chapter, we will review research related to memory and forgetting in more depth.

Response Production

When I was a child, I loved the TV character Superman. I would join my friends with a towel draped around my neck and try to fly. I have yet to take off. Obviously I had attended to Superman and remembered what he did. As I grew up, I continued to watch TV and have role models. Many of these, like Superman, possessed natural abilities that escaped my genes or skills that escaped my learning history. In the former case, I was forced to be serene. In the latter, with “courage”, I could acquire the component responses necessary to imitate the model. In Chapter 14, we will consider the topic of self-control and I will describe a research-based process for changing one’s behavior in a desired fashion. Still, I wouldn’t suggest trying to fly.

Motivation

We can see people doing pretty much anything on the internet. Fortunately, it is not necessarily the case that “people see, people do.” In our complicated, open, media-dominated world, we are constantly exposed to models performing undesirable, illegal, or dangerous acts. We do not automatically try to imitate everything we observe. Often, the outcome is an example of latent observational learning. The rats in Tolman and Honzik’s (1932) group that did not receive food at the end of the maze learned the correct route, but didn’t show it. We often attend to models, remember what they performed, and possess the ability to imitate their actions but do not in the absence of an incentive.

In a classic study, Bandura (1965) showed boys and girls a film depicting a model performing unusually aggressive acts with a Bobo doll punching bag (e.g., hitting the doll with a hammer; see Figure 6.3). In one version of the film, an adult observed the model and punished the aggressive acts. In a second version, the adult praised the model and provided candy. In a third condition there was no consequence. Afterwards, the children were placed in a room with a Bobo doll and observed to see how often they displayed the unusual aggressive acts. The findings indicated that boys committed more of these aggressive acts than girls in all three conditions. Both boys and girls were more likely to imitate the model when it had been rewarded at the end of the film than when it had been punished. Afterwards, children in all three groups were offered treats to imitate what they had observed in the film. As was true of Tolman and Honzik’s group that was switched from non-reward to reward, there was a dramatic increase in the number of aggressive acts. Clearly, the children had learned and retained what they observed. The likelihood of imitation was influenced by both the consequences displayed in the film and the contingencies implemented in the playroom (see Figure 6.4).


Figure 6.3 Children displaying observational aggression (Bandura, 1965).

Figure 6.4 Mean number of different matching responses reproduced by children as a function of response consequences to the model and positive incentives (Adapted from Bandura, 1965).

In a classic series of studies (Bandura, Grusec, & Menlove, 1967; Bandura and Menlove, 1968), children with fears of dogs were shown films of other children engaging in progressively bolder interactions with dogs. This indirect observational learning procedure was very successful in reducing or eliminating the children’s fears.

Speech and Language

Civilization began the first time an angry person cast a word instead of a rock.

Sigmund Freud

Observational learning has been evidenced in many species of animals, including birds (Zentall, 2004), but approximations to speech appear practically unique to humans. Paul Revere famously ordered a lantern signal of “one if by land and two if by sea” during his Revolutionary War midnight ride through the streets of Massachusetts. This is not functionally different from the distinct alarm calls emitted by vervet monkeys in the presence of eagles, snakes, and leopards (Struhsaker, 1967; Seyfarth and Cheney, 1980). Through observational learning, young vervets learn to respond to different screeches for “heads up”, “heads down”, and “look around!” Vervets hide under trees in response to the eagle warning, rear on their hind paws in response to the snake warning, and climb the nearest tree in response to the leopard warning. Recently, even more descriptive “speech” has been demonstrated in prairie dogs (Slobodchikoff, Perla, & Verdolin, 2009). These examples are the closest we see to social learning of speech in other animals. Slobodchikoff (2012) has written a fun and informative review of animal communication entitled Chasing Doctor Dolittle: Learning the Language of Animals.

Meltzoff and Moore (1977, 1983) demonstrated unambiguous examples of imitation in human infants as young as 12 to 21 days of age, leading to the conclusion that humans normally do not need to be taught this mode of learning.

Skinner (1986) contributed an interesting but admittedly post-hoc speculative theoretical article describing possible evolutionary scenarios for the adaptive learning of imitation and speaking. An imitative prompt is more informative than an ordinary gestural prompt in that it specifies the specific characteristics of a desired response. Speech is preferable to signing as a means of communication since it is possible at long distances and other circumstances where individuals cannot see each other.

Hockett’s Features of Language

If we are to understand human behavior, we must understand how language is acquired and its impact upon subsequent adaptive learning. Before we proceed, we must consider what we mean by language. Charles Hockett (1960) listed 13 features that he considered essential to language:

  1. Vocal-auditory channel – We saw in Chapter 1 that the human being’s brain, with its disproportional amount of space dedicated to the tongue, larynx, and voice box, facilitates the acquisition of speech. Sign language, involving a manual-visual channel, is mostly restricted to deaf people and those wishing to communicate with them.
  2. Broadcast transmission and directional reception – Sound is sent out in all directions while being received in a single place. This provides an adaptive advantage in that people can communicate with others out of their line of sight.
  3. Rapid fading (transitoriness) – Sounds are temporary. Writing and audio-recordings are techniques used to address this limitation of speech (and alas, lectures).
  4. Interchangeability – One must be able to transmit and receive messages.
  5. Total feedback – One must be able to monitor one’s own use of language.
  6. Specialization – The organs used for language must be specially adapted to that task. Human lips, tongues, and throats meet this criterion.
  7. Semanticity – Specific signals can be matched with specific meanings. Different sounds exist for different words.
  8. Arbitrariness – There is no necessary connection between a meaningful unit (e.g., word) and its reference.
  9. Discreteness – There are distinct basic units of sound (phonemes) and meaning (words).
  10. Displacement – One must be able to communicate about things that are not present. One must be able to symbolically represent the past and the future.
  11. Productivity – The units of sound and meaning must be able to be combined to create new sounds and meaningful units (sentences).
  12. Duality of patterning – The sequence of meaningful units must matter (i.e., there must be a syntax).
  13. Traditional transmission – Specific sounds and words must be learned from other language users.

Although all of Hockett’s features are frequently cited as the defining characteristics of language, the first three apply only to speech; they do not cover sign language, letter writing, reading, and other non-vocal/auditory modes of symbolic communication.

Language Acquisition

The principles of predictive and control learning help us understand the acquisition of language and the role it plays in subsequent human adaptation. At a few months old, infants start to babble and are able to make all the possible human sounds. Eventually, as the child is increasingly exposed to the sounds of her/his social unit, some of the sounds are “selected” and others removed from the repertoire. Routh (1969) demonstrated that infants are able to make subtle discriminations in sounds. The frequency of speaking either vowels or consonants could be increased if selectively reinforced with tickles and “coos.” It has been demonstrated that the mother’s vocal imitation of a child’s verbalizations is also an effective reinforcer (Pelaez, Virues-Ortega, and Gewirtz, 2011).

Children may learn their first word as early as 9 months. Usually the first words are names of important people (“mama”, “dada”), often followed by greetings (“hi”, “bye”) and favored foods. As described in Chapter 5, classical conditioning procedures may be used to establish word meaning. For example, the sound “papa” is consistently paired with a particular person. Children are encouraged to imitate the sound in the presence of the father. It may be the source of humor (or embarrassment) when a child over-generalizes and uses the word for another male adult. With experience, children learn to attend to the relevant dimensions and apply words consistently and exclusively to the appropriate stimuli or actions (e.g., “walk”, “run”, “eat”, etc.). Similarly, words are paired with the qualities of objects (e.g., “red”, “circle”, etc.) and actions (e.g., “fast”, “loud”, etc.). Children learn to abstract out the common properties through the process of concept formation. Words are also paired with quantities of objects. In the same way that “redness” may be a quality of diverse stimuli having little else in common, “three-ness” applies to a particular number of diverse stimuli.

Much of our vocabulary applies to non-observable objects or events. It is important to teach a child to indicate when “hurt” or “sick”, or “happy” or “sad.” In these instances, an adult must infer the child’s feelings from his/her behavior and surrounding circumstances. For example, if you see a child crying after bumping her head, you might ask if it hurts. As vocabulary size increases, meaning can be established through higher-order conditioning using only words. For example, if a child is taught that a jellyfish is a “yucky creature that lives in the sea and stings”, he/she will probably become fearful when swimming in the ocean.

Since different languages have different word orders for the parts of speech, syntax (i.e., grammatical order) must be learned. At about 18 months to 2 years of age, children usually start to combine words and by 2-1/2 they are forming brief (not always grammatical) sentences. With repeated examples of their native language, children are able to abstract out schemas (i.e., an organized set of rules) for forming grammatical sentences (e.g., “the car is blue”, “the square is big”, etc.). It is much easier to learn grammatical sequences of nonsense words (e.g., The maff vlems oothly um the glox nerfs) than non-grammatical sequences (e.g., maff vlem ooth um glox nerf). This indicates the role of schema learning in the acquisition of syntax (Osgood, 1957, p.88). Children usually acquire the intricacies of grammar by about 6 years of age. In the next chapter, we will describe the process of abstraction as it applies to concept learning, schema development, and problem-solving.

Vocabulary size has been found to be an important predictor of success in school (Anderson & Freebody, 1981). Major factors influencing vocabulary size include socio-economic status (SES) and the language proficiencies of significant others, particularly the mother. In a monumental project, Hart and Risley (1995) recorded the number of words spoken at home by parents and 7- to 36-month-old children in 42 families over a 3-year period. They found that differences in the children’s IQ scores, language abilities, and success in school were all related to how much their parents spoke to them. They also found significant differences in the manner in which low and high SES parents spoke to their children. Low SES parents were more likely to make demands and offer reprimands, while high SES parents were more likely to engage in extended conversations, discussion, and problem-solving. Whereas the number of reprimands given for inappropriate behavior was about the same for low and high SES parents, high SES parents administered much more praise.

Speech becomes an important and efficient way of communicating one’s thoughts, wishes, and feelings. This is true for the Nukak as well as for us. Given the harshness of their living conditions and the limits of their experiences, the Nukak have much in common with low SES children within our society. Declarative statements (e.g., “the stick is sharp”, “the stove is hot”), requests (e.g., “pick up the leaves”, “don’t fight with your sister”), and descriptions of feelings (e.g., “I am happy”, “you are tired”) become the primary basis for conducting much of the everyday chores and interactions.

Spoken language is observed in stone-age hunter/gatherer and technologically advanced cultures. There has been controversy concerning the role of nature and nurture in human language development (Chomsky, 1959; Skinner, 1957). Skinner, writing from a functionalist/behavioral perspective, tellingly entitled his book Verbal Behavior, not “Using Language.” Watson (1930) described thinking as “covert speech” while Skinner (1953) referred to “private behavior.” According to Vygotsky (originally published in 1934), children initially “think out loud” and eventually learn to “think to themselves.” Skinner suggested that speaking and thinking were not different in kind from other forms of behavior and that respondent conditioning (predictive learning) and operant conditioning (control learning) could provide the necessary experiential explanatory principles. There was no need to propose a separate “language acquisition device” to account for human speech.

We saw in Chapter 5, how predictive learning principles could be applied to the acquisition of word meaning. Basically, Skinner argued that words could serve as overt and covert substitutes for the control learning ABCs. As antecedents, words could function as discriminative stimuli and warning stimuli. For example, “Give mommy a kiss” or “Heads up!” As consequences, words can substitute for reinforcers and punishers (e.g., “Thank you.”, “Stop that!”). A rule is a common, useful, and important type of verbal statement including each of the control learning ABCs (Hayes, 1989). That is, a rule specifies the circumstances (antecedents) under which a particular act (behavior) is rewarded or punished (consequence). For example, a parent might instruct a child, “At dinner, if you eat your vegetables you can have your dessert” or, “When you get to the curb look both ways before crossing the street or you could get hit by a car.”

Chomsky, a psycholinguist, submitted a scathing critique of Skinner’s book, emphasizing how human genetics appears to include a “language acquisition device.” The Chapter 1 picture of the human homunculus, with its disproportional brain space dedicated to the body parts involved in speech, certainly suggests that the human being’s structure facilitates language acquisition. The homunculus also implies there is adaptive value to spoken language; otherwise these structures would not have evolved. Proposing a “language acquisition device”, similar to proposing an instinct to account for speech, is a circular pseudo-explanation. The language acquisition device is inferred from the observation of speech; it does not explain speech. Remember, a psychological explanation must specify specific hereditary and/or environmental causes. Chomsky does neither, whereas Skinner is quite specific about the types of experience that will foster different types of verbal behavior. It is not as though Skinner denies the role of human structure in the acquisition of speech or its importance, as indicated in the following quote: “The human species took a crucial step forward when its vocal musculature came under operant control in the production of speech sounds. Indeed, it is possible that all the distinctive achievements of the species can be traced to that one genetic change” (Skinner, 1986). Neuroscientists and behavioral neuroscientists are actively engaged in research examining how our “all-purpose acquisition device” (i.e., brain) is involved in the learning of speech, reading, quantitative skills, problem-solving, etc.

Human beings may have started out under restricted geographic and climatic conditions in Africa, but we have spread all over the globe (Diamond, 2005). We developed different words and languages tailored to our environmental and social circumstances. There is much to be learned from the school of hard knocks, but it is limited to our direct experience and can be difficult or dangerous. Our verbal lives enormously expand learning opportunities beyond our immediate environment to anything that can be imagined. Indirect learning (i.e., observation or language) often speeds up adaptive learning and eliminates danger. It is not surprising that human parents universally dedicate a great deal of effort to teaching their children to speak. It makes life easier, safer, and better for them as well as their children.

MacCorquodale (1969) wrote a retrospective appreciation of Skinner’s book along with a comprehensive and well-reasoned response (1970) to Chomsky’s critique. Essentially, MacCorquodale described Chomsky as a structuralist and Skinner as a functionalist. That is, Chomsky attempted to describe how the structure of the mind enables language. Skinner was concerned with how language enables individuals to adapt to their environmental conditions. Paraphrasing Mark Twain, an article marking the 50th anniversary of its publication concluded that “Reports of the death of Verbal Behavior and behaviorism have been greatly exaggerated” (Schlinger, 2008).

Reading and Writing

It is language in written form that has enabled the rapid and widespread dissemination of knowledge within and between cultures. It is also the medium for recording our evolving advances in knowledge and technology. Early forms of Bronze Age writing were based on symbols or pictures etched in clay. Later Bronze Age writing started to include phonemic symbols that were precursors to the Iron Age Phoenician alphabet consisting of 22 characters representing consonants (but no vowels). The Phoenician alphabet was adopted by the Greeks and evolved into the modern Roman alphabet. The phonetic alphabet permitted written representation of any pronounceable word in a language.

The Arabic numbering system was originally invented in India before being transmitted to Europe in the Middle Ages. It permits written representation of any quantity, real or imagined, and is fundamental to mathematics and the scientific method, which rely on quantification and measurement. The alphabet and Arabic numbers permit words to become “permanent” in comparison to their transitory auditory form. This written permanence made it possible to communicate with more people over greater distances and eventually to build libraries. The first great library was established at Alexandria, Egypt, in approximately 300 B.C. Scrolls of parchment and papyrus were stored on the walled shelves of a huge concrete building (Figure 6.5). Gutenberg’s invention of the printing press in 1439 enabled mass publication of written material throughout Western Europe (Figure 6.6). Today, e-books are available on electronic readers that can be held in the palm of your hand (Figure 6.7)! It should not be surprising that college student differences in knowledge correlate with their amount of exposure to print (Stanovich and Cunningham, 1993).


Figure 6.5 The library at Alexandria.

Figure 6.6 Gutenberg’s printing press.


Figure 6.7 The library now.

Memory

An enormous amount of information is processed by our senses every day. As described, when discussing observational learning, some of it is attended to and some of it is ignored. Some of it is remembered and some is forgotten. Memory and forgetting have been important topics in psychology since the start of the discipline. Hermann Ebbinghaus (1885) invented the 3-letter nonsense syllable (e.g., GUX, VEC, etc.) in order to eliminate the effects of prior familiarity. He generated the first learning and forgetting curves for over 1,000 lists of nonsense syllables using himself as the subject. Whereas Ebbinghaus’ immediate recall was almost perfect, within nine hours retention dropped to less than 40 per cent. His performance continued to decline to about 25 per cent retention after six days and approximately 21 per cent after a month (see Figure 6.8).


Figure 6.8 Ebbinghaus’ forgetting curve.
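The retention figures cited above can be used to check a simple decay model. The short sketch below is my own illustration, not Ebbinghaus' analysis: it asks what decay constant a single exponential, R(t) = exp(-t / s), would imply at each retention interval. If forgetting were a single exponential, the implied constant would be the same at every delay.

```python
import math

# My own illustrative check (not Ebbinghaus' analysis): if forgetting
# followed a single exponential decay R(t) = exp(-t / s), the implied
# constant s = -t / ln(R) would be the same at every retention interval.

data_points = {          # hours since learning -> fraction retained
    9: 0.40,             # less than 40 per cent within nine hours
    6 * 24: 0.25,        # about 25 per cent after six days
    30 * 24: 0.21,       # roughly 21 per cent after a month
}

for hours, retained in data_points.items():
    implied_s = -hours / math.log(retained)
    print(f"t = {hours:4d} h, R = {retained:.2f}, implied s = {implied_s:6.1f} h")

# The implied constant grows from about 10 hours to over 450 hours:
# forgetting slows dramatically with time, producing the flattening
# shape of the curve.
```

That no single constant fits all three points is exactly the flattening visible in the forgetting curve: rapid initial loss followed by much slower decline.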

Psychologists have tried to understand the mechanisms involved in memory and forgetting. An important question is whether forgetting is simply a function of the passage of time or the result of interference from other memories or activities. Since the time of Ebbinghaus, two significant sources of interference have been identified. Retroactive interference (i.e., working backwards) occurs when learning new materials reduces the ability to recall previously learned material. Proactive interference (i.e., working forwards) refers to the detrimental effect of previously learned material on the memory of new material (Slamecka & Ceraso, 1960). For example, if a student learned Spanish in high school and French in college, sometimes the new French vocabulary might interfere with remembering the former Spanish vocabulary. This is retroactive interference. If the prior learning of Spanish interfered with recalling the more recently learned French, this would be considered proactive interference. In the case of Ebbinghaus, as he learned more and more lists, he was increasing the buildup of both retroactive and proactive interference. That is, if he learned new lists before being asked to recall a previously learned list, this would result in retroactive interference. Proactive interference would occur when prior learning impeded learning a new list.

Starting in the late 1950s, researchers started distinguishing between different types (or stages) of memory. Sensory memory (sometimes referred to as very short-term memory) is basically a very brief continuation of sensation. Sensory memory exists immediately after presentation of a stimulus, is unconscious, and highly detailed (Sperling, 1960). Depending upon the variables previously considered under observational learning, some of the details will be attended to and others ignored. The attended-to details may enter consciousness for further processing in the form of different rehearsal strategies. This longer-lasting, but still temporary, stage is usually referred to as working or short-term memory (Brown, 1958; Peterson and Peterson, 1959). Sometimes adapting to one’s environment only requires the use of currently available knowledge (e.g., after looking up a phone number). At other times, one may adapt by taking advantage of prior direct and indirect learning (e.g., when calling a family member or friend). Long-term memory refers to this much longer-lasting (perhaps permanent) stage.

Computers inspired information-processing models of human memory, with sensory, short-term, and long-term memory linked in sequential stages. Atkinson and Shiffrin (1968, 1971) proposed the three-stage model of memory portrayed in Figure 6.9. Input (i.e., environmental sensory information) available in sensory memory had to be attended to in order to be available for rehearsal in short-term memory. There, the information had to be continually rehearsed in order to remain available. Then it needed to be elaborated upon and encoded in a manner which could be interpreted, stored, and retrieved at a later time from the more permanent long-term memory. As shown in Figure 6.9, information from long-term memory can be retrieved into short-term memory to address immediate adaptive needs. We will now elaborate on the methods and findings of the classic experiments which led to the Atkinson-Shiffrin information-processing model of human memory.

File:Modal model of memory.tif

Figure 6.9 The Atkinson and Shiffrin Model of Memory.

Sensory Memory

George Sperling (1960) developed an ingenious procedure demonstrating that a great deal of information may be retrieved from a stimulus for a brief period of time after it is removed (1/20th of a second in Figure 6.10). If you show a person a matrix of twelve letters and ask them to recite as many of the letters as they can soon afterward (up to a second in the figure), they are usually able to retrieve four or five of the items.

https://upload.wikimedia.org/wikipedia/commons/e/e7/Sperling%27s_Partial_Report_Paradigm.jpg

Figure 6.10 Sperling’s partial-report procedure.

Sperling believed that the act of reporting what they remembered interfered with maintenance of the information, resulting in an inaccurate estimate of the actual amount retained. He developed a partial-report procedure in which a high-, middle-, or low-frequency tone indicated which row of the array should be reported on an individual trial. It was demonstrated that up to about a quarter of a second, individuals were able to report twice as many of the items as with the full-report method. In order for this to occur, the full array had to be available for processing. Visual sensory memory is often referred to as iconic memory. It has been determined that auditory sensory memory, often referred to as echoic memory, is more durable than visual information and can last several seconds (Cowan, Lichty, & Grove, 1990). This makes it possible to understand spoken sentences. Still, the limitations of auditory sensory memory limit the ability to retain lecture material. A lecture requires processing the information contained in several sentences extending over lengthier time intervals.

Short-Term Memory

The information remaining in sensory memory is available for further processing through the conscious act of rehearsal. We have all had the experience of having to look up a phone number again if we are distracted before dialing. An important question raised by Brown (1958) and by Peterson and Peterson (1959) was how long information would remain consciously available in the absence of rehearsal. In order to determine this, it was necessary to ask people to recall information after different time intervals without rehearsing the material. This was accomplished by having them count backwards by threes. You can try this yourself. Look up a phone number and start counting backward from 1,000 by threes. See how long you can go before having to go back and look it up again. It was found that when rehearsal was prevented, retention of trigrams (i.e., three consonants) declined from 80 per cent after three seconds to 10 per cent after 18 seconds. Thus, it is not surprising that if you are distracted soon after looking up a phone number, you will need to look it up again.

Keppel and Underwood (1962) reviewed Peterson and Peterson’s results and noted that memory was very good at all the intervals on the first few trials. Performance deteriorated, however, with additional trials. This raised the question of whether the decline in short-term memory as a function of interval length resulted from delay or proactive interference from prior trials. Waugh and Norman (1965) tested this by giving subjects lists of 16 numbers. After the last item, the subject was asked to report the number which appeared immediately after one of the numbers in the list. Digits were presented every second or every four seconds. By varying both the time interval and whether the target number appeared early or late in the list, it was possible to determine which variable was more important. If the time interval were more important, the position of the item in the list should not matter. If the number of items prior to the target were more important, the time interval should not matter. It turned out that the number of prior items had a much greater effect than the delay. Short-term memory loss is primarily the result of interference when someone is not actively rehearsing the material.

In addition to knowing the duration of short-term memory, it is important to know its capacity. That is, how much information can you maintain in consciousness if you are allowed to rehearse? Memory span tests are one way of addressing this question (Humpstone, 1917). You can be asked to repeat letters, words, or numerical digits in order. Items are added until you are correct on less than 50 per cent of the test trials. One of the most cited articles in the psychology literature is “The magical number seven, plus or minus two: Some limits on our capacity for processing information” by George Miller (1956). Miller examined the data from different types of short-term memory tasks, including memory span. He came to the conclusion that there is a relatively small amount of information we can retain in consciousness. He suggested that we are limited to between five and nine chunks of information. A chunk is similar to what we previously referred to as a gestalt: a meaningful unit. For example, study the following list of letters for about fifteen seconds and then look away and see how many you can correctly repeat:

mbicbnifbacbiac

Probably you were only able to correctly repeat about seven letters.

Now do the same with the same letters organized as follows:

ibmnbcfbiabccia

Now you may be able to repeat the entire list since the letters can be grouped into five chunks of three. Telephone companies try to help us out with our short-term memory limitations by grouping the numbers. Probably you have used mnemonics (i.e., memory-enhancing techniques) to memorize different types of information. For example, the word HOMES might help you remember the five Great Lakes (Huron, Ontario, Michigan, Erie, Superior). Roy G Biv might help you recall the sequence of colors comprising the visual spectrum (red, orange, yellow, green, blue, indigo, violet). In the following chapter, we will review cognitive processes, including concept formation.
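The arithmetic of chunking can be made concrete with a short Python sketch (offered purely as an illustration; the letter string is the one from the text):

```python
# Miller's chunking: the same 15 letters are far easier to hold in
# short-term memory once grouped into 5 familiar acronyms (chunks)
# instead of 15 arbitrary items.
letters = "ibmnbcfbiabccia"

# Ungrouped: 15 items, well beyond Miller's 7 +/- 2 limit.
ungrouped = list(letters)

# Grouped into three-letter chunks: 5 items, comfortably within the limit.
chunks = [letters[i:i + 3] for i in range(0, len(letters), 3)]

print(len(ungrouped), "items ->", len(chunks), "chunks:", chunks)
```

The grouped list (ibm, nbc, fbi, abc, cia) contains exactly the same letters, but recoding them as five meaningful units brings the load within the five-to-nine-chunk capacity Miller described.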

Long-Term Memory

Think of all the concepts you learned as a child: the ABCs, numbers, names of relatives, types of food, names of feelings, etc. Think of all the skills you acquired: walking, talking, getting dressed and tying your shoes, riding a bike, etc. Think of different events in your life: birthday parties, fun with your siblings and friends, teachers in different grades, ball games and dances, graduations, etc. Think of how you feel when reading about a calm summer day, watching a close sporting contest, or hearing a buzzing bee. These are all examples of different types of long-term memory included in the overview provided in Figure 6.11 (Squire, 1986, 1993).

https://upload.wikimedia.org/wikipedia/commons/9/91/Diagram_based_on_Squire_and_Zola_%281996%29_about_decalarative_and_non-declarative_memory.png

Figure 6.11 Types of long-term memory (adapted from Squire, 1986, 1993).

The first distinction in types of long-term memory relates to whether conscious effort is required for recall to occur. Explicit (declarative) memory requires conscious effort whereas implicit (non-declarative) memory does not. For example, recalling the name of a type of food you haven’t eaten for several years or when you met your best friend requires conscious effort. No effort is required to recall how to ride a bike or to feel relaxed when reading about a calm summer day.

Explicit memory can be sub-divided into semantic memory and episodic memory (Tulving, 1972). Semantic memory consists of your entire knowledge base including your vocabulary, concepts, and ideas. Episodic memory consists of your chronological listing of life events. A food type is an example of semantic memory. When you met your best friend is an example of episodic memory.

Implicit memory can be sub-divided into procedural memory and emotional memory. Procedural memory refers to all the motor skills you are able to execute. Emotional memory refers to the feelings experienced based on prior experience. Bike riding is an example of procedural memory whereas a fear of buzzing bees is an example of emotional memory.

We will now consider the factors influencing how these different types of information become stored in long-term memory. This has practical applications as you attempt to achieve your potential. Improving your long-term memory will help you adapt to many of the demands of your physical and social environment. It will help you learn your course material, not only to perform well on exams, but also to best apply the knowledge, skills, and attitudes you acquire to your future educational and career objectives.

Figure 6.9 indicates that maintenance rehearsal strategies, consisting of repeating information over and over again, are sufficient to retain information in short-term memory. More active elaborative rehearsal strategies related to the meaning of material, however, are far more effective for transferring information to long-term memory. Elaboration can be in the form of relating new information to previously acquired knowledge or to one’s personal experience. When studying for exams, rather than simply trying to memorize information through repetition, it is more effective to try to describe the information using your own words. You can try to apply the information by making up your own examples or describe how the information relates to other things you know. As described in Chapter 1, an extremely effective study strategy is to make up questions and then test yourself. The finding that this strategy improves recall and test results in college students (Roediger & Karpicke, 2006; Einstein, Mullet, & Harrison, 2012) and the elderly (Meyer & Logan, 2013) has been labeled the testing effect. Another effective way of determining if you understand information is to try to teach it to someone else. This is only possible when you have a thorough understanding of the material yourself.

One of the most important variables influencing your ability to learn, remember, and apply information is how it is organized. Effective lecturers and textbook writers attempt to use schemas (Rumelhart, 1980) and scripts (Mandler, 1984) in order to achieve their instructional objectives. Schemas organize information in a coherent way while scripts create a meaningful sequence. In the previous chapter, I described two schemas developed by Skinner to organize behavioral contingencies and intermittent schedules of reinforcement. Psychology studies how nature and nurture interact to influence the potential for human thought, feeling, and behavior. This description determined the schema for organizing the book. I hope this helps you see where each of the content areas (i.e., chapters) fit within the context of the entire discipline of psychology (see Figure 6.12).

Psychology: The Science of Human Potential

Mostly Nature: Biological Psychology; Sensation and Perception; Motivation and Emotion

Mostly Nurture: Direct Learning; Indirect Learning; Cognition

Nature/Nurture: Developmental; Personality; Social Psychology; Problem Behavior

Figure 6.12 Nature/nurture schema for organizing the textbook chapters.

I once read a review suggesting that starting a biological psychology textbook with a description of a neuron was like starting a book about airplanes with a description of a screw. When possible, it is helpful to create an overarching organizational schema (i.e., an “airplane”, “big picture” or “forest”) portraying the relationships between the different components (i.e., parts, little pictures, or trees). The nature/nurture schema for human potential was an attempt to create such an overview of the different content areas of psychology. Your potential is initially determined by your unique (assuming you are not an identical twin) genetically determined physical and biological characteristics, sensory capabilities, needs, and drives (i.e., “nature”). Ultimately, the direct and indirect learning experiences to which you are exposed (i.e., “nurture”) will impact upon your personality development and the extent to which you achieve your individual potential (“nature/nurture”).

Maslow’s human needs pyramid is a hierarchically organized script prioritizing categories of human needs. According to Maslow, one first needs to satisfy basic survival needs before being able to concentrate on interpersonal relationships, and so on. Atkinson and Shiffrin’s three-stage memory model is another example of a script. Information sequentially flows from sensory, to short-term, to long-term memory. Another script, mentioned below as well as in Chapter 7, is the sequential development of technologies resulting from application of the scientific method. Such technological growth is rapidly transforming the human condition. Kurzweil (2001) attributed the accelerating pace of technological achievements to the recording of prior successes. Without this recording, individuals and cultures would not be able to profit from prior advances.

Preparing for School and the 3 “R”s

It is through indirect learning that the benefits of direct learning, including tool-making and technological change, are recorded and disseminated among humans. This is as true for the Nukak as it is for us. The Nukak use observational learning and language to socialize children and teach survival skills. Whereas the Nukak’s and our basic survival needs are the same, technologies have changed our physical conditions, population densities, and adaptive needs. The Nukak spend their time each day meeting their basic survival needs as individuals and a species.

In the last chapter, we saw how verbal behavior can be understood through the application of basic learning principles. Once children speak, it is possible to use language to expand their knowledge and skills. Rather than acquiring hunting, gathering, and other survival skills, you have probably been acquiring school-related knowledge and skills since you were old enough to speak, in preparation for attending school.

Children enjoy listening to rhymes. Children enjoy singing. Children REALLY enjoy singing rhymes! It is not unusual for adults to start singing the alphabet song to children as young as 2 years of age. The alphabet is an example of a serial list in which items always appear in a particular order. Serial learning was one of the first types of memory studied by Ebbinghaus. The serial-position effect (Hovland, 1938) refers to the finding that one learns the items at the beginnings and ends of lists before learning the items in the middle. The alphabet song divides the 26 letters into 4 manageable chunks based on rhyming sounds. This makes learning the entire sequence fun and relatively easy, even for a very young child. Once this is accomplished, it is possible to match the sounds to their written forms, an important precursor to learning to read.

Counting represents another fundamental serial learning task for children. It is different from the alphabet in that the sequence of items is not arbitrary. That is, there is no reason “a” has to be the first letter of the alphabet and precede “b”, etc. However, “1” has to be first, and “2”, second, etc. Counting, therefore, requires additional learning in which the numbers are spoken in the presence of the appropriate quantities of different objects. Eventually, the child “abstracts out” the dimension of quantity and the different values. Similar to letters, the sounds are eventually associated with their written forms, an important precursor to learning arithmetic.

Implementation of compulsory education around the turn of the 20th century was an enabling factor for subsequent scientific and technological advances. In order for individuals and for a society to receive the full benefits of compulsory education, it is necessary that children be prepared for the first years of schooling. The richness of their experiences and extensive vocabularies provide many children with the basic knowledge and skills required to excel in pre-school and beyond. Unfortunately, as revealed in Hart and Risley’s (1995) findings, not all children currently receive the level of preparation necessary to immediately acquire the ability to read, write, and perform quantitative operations. Hopefully, parent education and pre-school programs such as Head Start will reduce the continuing achievement gap between different segments of our population.

The phonetic alphabet, the basis for reading, has served as the major means of recording human knowledge since the time of the Phoenicians. There is certainly much truth to the statement that “reading is fundamental.” Learning to read is an excellent example of the importance of the Gestalt perspective. Reading may be broken down into a sequence of steps establishing larger and larger meaningful units (i.e., gestalts). Eye-movement recordings reveal that individual letters are not initially perceived as units. With increased experience, we are able to integrate the components into a relatively small number of distinct letters, followed by integration of letters into words. As mentioned previously, we perceive the letters of words simultaneously (i.e., as a “gestalt”), not sequentially (Adelman, Marquis, and Sabatos-DeVito, 2010). Eventually, we are able to read aloud fluently by scanning phrases and sentences (Rayner, 1998).

Learning to write requires establishing larger and larger behavioral units through shaping and chaining. As soon as a child is able to grasp a pencil or pen, parents often encourage her/him to “draw.” Once a certain level of proficiency is achieved, it is possible to teach the writing of letters and numbers. This may begin by having the child trace the appropriate signs and then fading them out so that eventually the appropriate symbol can be formed without visual assistance. Eventually fluency of writing letters, followed by words, followed by phrases and complete sentences is achieved. As children advance through the grades, they are assigned tasks requiring more extensive reading and writing.

Learning basic arithmetic is an extension of counting. It is possible to visually differentiate between small numbers of items (e.g., to tell the difference between 3 and 4). This is not possible once a threshold is passed (e.g., trying to see the difference between 10 and 11 items, or 20 and 21, or 120 and 121). It is necessary to accurately apply verbal counting to the actual number of objects in order to perform such tasks. Once a child is able to count objects, it is possible to begin teaching basic mathematics including addition and subtraction, the base-10 system, multiplication and division, fractions, etc. For an early comprehensive treatment of the application of predictive and control learning principles to reading, writing, and arithmetic, I recommend Staats’s excellent book, Learning, Language, and Cognition (1968). One cannot overstate the fundamental importance of compulsory education to societal development and economic progress. Rindermann and Thompson (2011) conducted sophisticated statistical analyses demonstrating the powerful relationship of cognitive ability, particularly in the STEM fields (science, technology, engineering, and math), to wealth in 90 countries. The top 5% in cognitive ability contribute significantly, often in the form of scientific and technological advances.
