exam 3 book


sex and gender

When filling out a document such as a job application or school registration form, you are often asked to provide your name, address, phone number, birth date, and sex or gender. But have you ever been asked to provide your sex and your gender? Like most people, you may not have realized that sex and gender are not the same. However, sociologists and most other social scientists view them as conceptually distinct. Sex refers to physical or physiological differences between males and females, including both primary sex characteristics (the reproductive system) and secondary characteristics such as height and muscularity. Gender refers to behaviors, personal traits, and social positions that society attributes to being female or male. A person's sex, as determined by his or her biology, does not always correspond with the person's gender. Therefore, the terms sex and gender are not interchangeable.

A baby boy who is born with male genitalia will be identified as male. As he grows, however, he may identify with the feminine aspects of his culture. Since the term sex refers to biological or physical distinctions, characteristics of sex will not vary significantly between different human societies. Generally, persons of the female sex, regardless of culture, will eventually menstruate and develop breasts that can lactate. Characteristics of gender, on the other hand, may vary greatly between different societies. For example, in U.S. culture, it is considered feminine (or a trait of the female gender) to wear a dress or skirt. However, in many Middle Eastern, Asian, and African cultures, dresses or skirts (often referred to as sarongs, robes, or gowns) are considered masculine. The kilt worn by a Scottish male does not make him appear feminine in his culture.

The dichotomous view of gender (the notion that someone is either male or female) is specific to certain cultures and is not universal. In some cultures gender is viewed as fluid.
In the past, some anthropologists used the term berdache to refer to individuals who occasionally or permanently dressed and lived as a different gender. The practice has been noted among certain Native American tribes (Jacobs, Thomas, and Lang 1997). Samoan culture accepts what Samoans refer to as a "third gender." Fa'afafine, which translates as "the way of the woman," is a term used to describe individuals who are born biologically male but embody both masculine and feminine traits. Fa'afafines are considered an important part of Samoan culture. Individuals from other cultures may mislabel them as homosexuals because fa'afafines have a varied sexual life that may include men and women (Poasa 1992).

SOCIAL POLICY AND DEBATE
The Legalese of Sex and Gender

The terms sex and gender have not always been differentiated in the English language. It was not until the 1950s that U.S. and British psychologists and other professionals working with intersex and transsexual patients formally began distinguishing between sex and gender. Since then, psychological and physiological professionals have increasingly used the term gender (Moi 2005). By the end of the twentieth century, expanding the proper usage of the term gender to everyday language became more challenging, particularly where legal language is concerned. In an effort to clarify usage of the terms sex and gender, U.S. Supreme Court Justice Antonin Scalia wrote in a 1994 briefing, "The word gender has acquired the new and useful connotation of cultural or attitudinal characteristics (as opposed to physical characteristics) distinctive to the sexes. That is to say, gender is to sex as feminine is to female and masculine is to male" (J.E.B. v. Alabama, 144 S. Ct. 1436 [1994]). Supreme Court Justice Ruth Bader Ginsburg had a different take, however. Viewing the words as synonymous, she freely swapped them in her briefings so as to avoid having the word "sex" pop up too often.
It is thought that her secretary supported this practice by suggesting to Ginsburg that "those nine men" (the other Supreme Court justices) "hear that word and their first association is not the way you want them to be thinking" (Case 1995). This anecdote reveals that both sex and gender are actually socially defined variables whose definitions change over time.

Sexual Orientation

A person's sexual orientation is their physical, mental, emotional, and sexual attraction to a particular sex (male and/or female). Sexual orientation is typically divided into several categories: heterosexuality, the attraction to individuals of the other sex; homosexuality, the attraction to individuals of the same sex; bisexuality, the attraction to individuals of either sex; asexuality, a lack of sexual attraction or desire for sexual contact; pansexuality, an attraction to people regardless of sex, gender, gender identity, or gender expression; and queer, an umbrella term used to describe sexual orientation, gender identity, or gender expression. Heterosexuals and homosexuals are referred to as "straight" and "gay," respectively, but more inclusive terminology is needed. Proper terminology includes the acronyms LGBT and LGBTQ, which stand for "Lesbian, Gay, Bisexual, Transgender" (and "Queer" or "Questioning" when the Q is added). The United States is a heteronormative society, meaning many people assume heterosexual orientation is biologically determined and unambiguous. Consider that LGBTQ people are often asked, "When did you know you were gay?" but heterosexuals are rarely asked, "When did you know that you were straight?" (Ryle 2011). According to current scientific understanding, individuals are usually aware of their sexual orientation between middle childhood and early adolescence (American Psychological Association 2008).
They do not have to participate in sexual activity to be aware of these emotional, romantic, and physical attractions; people can be celibate and still recognize their sexual orientation, and may have very different experiences of discovering and accepting it. At the point of puberty, some may be able to announce their sexual orientations, while others may be unready or unwilling to make their homosexuality or bisexuality known since it goes against U.S. society's historical norms (APA 2008).

Alfred Kinsey was among the first to conceptualize sexuality as a continuum rather than a strict dichotomy of gay or straight. He created a six-point rating scale that ranges from exclusively heterosexual to exclusively homosexual (see the figure below). In his 1948 work Sexual Behavior in the Human Male, Kinsey writes, "Males do not represent two discrete populations, heterosexual and homosexual. The world is not to be divided into sheep and goats ... The living world is a continuum in each and every one of its aspects" (Kinsey 1948).

Figure 12.3 The Kinsey scale indicates that sexuality can be measured by more than just heterosexuality and homosexuality.

Later scholarship by Eve Kosofsky Sedgwick expanded on Kinsey's notions. She coined the term "homosocial" to oppose "homosexual," describing nonsexual same-sex relations. Sedgwick recognized that in U.S. culture, males are subject to a clear divide between the two sides of this continuum, whereas females enjoy more fluidity. This can be illustrated by the way women in the United States can express homosocial feelings (nonsexual regard for people of the same sex) through hugging, handholding, and physical closeness. In contrast, U.S. males refrain from these expressions since they violate the heteronormative expectation that male sexual attraction should be exclusively for females.
Research suggests that it is easier for women to violate these norms than for men, because men are subject to more social disapproval for being physically close to other men (Sedgwick 1985).

There is no scientific consensus regarding the exact reasons why an individual holds a heterosexual, homosexual, or bisexual orientation. Research has been conducted to study the possible genetic, hormonal, developmental, social, and cultural influences on sexual orientation, but there has been no evidence that links sexual orientation to one factor (APA 2008). Research, however, does present evidence showing that LGBTQ people are treated differently than heterosexuals in schools, the workplace, and the military. In 2011, for example, Sears and Mallory used General Social Survey data from 2008 to show that 27 percent of lesbian, gay, and bisexual (LGB) respondents reported experiencing sexual orientation-based discrimination during the five years prior to the survey. Further, 38 percent of openly LGB people experienced discrimination during the same time. (Note that this study did not specifically address other sexual orientations.)

Much of this discrimination is based on stereotypes and misinformation. Some is based on heterosexism, which Herek (1990) suggests is both an ideology and a set of institutional practices that privilege heterosexuals and heterosexuality over other sexual orientations. Much like racism and sexism, heterosexism is a systematic disadvantage embedded in our social institutions, offering power to those who conform to heterosexual orientation while simultaneously disadvantaging those who do not. Homophobia, an extreme or irrational aversion to homosexuals, accounts for further stereotyping and discrimination. Transphobia is a fear, hatred, or dislike of transgender people, and/or prejudice and discrimination against them by individuals or institutions.
Major policies to prevent discrimination based on sexual orientation did not come into effect until the last few years. In 2011, President Obama overturned "don't ask, don't tell," a controversial policy that required gay and lesbian people in the U.S. military to keep their sexuality undisclosed. The landmark 2020 Supreme Court decision added sexual orientation and gender identity as categories protected from employment discrimination by the Civil Rights Act. Organizations such as GLAAD (Gay & Lesbian Alliance Against Defamation) advocate for LGBTQ rights and encourage governments and citizens to recognize the presence of sexual discrimination and work to prevent it.

Sociologically, it is clear that gay and lesbian couples are negatively affected in states where they are denied the legal right to marriage. In 1996, the Defense of Marriage Act (DOMA) was passed, explicitly limiting the definition of "marriage" to a union between one man and one woman. It also allowed individual states to choose whether or not they recognized same-sex marriages performed in other states. In another blow to same-sex marriage advocates, in November 2008 California passed Proposition 8, a state law that limited marriage to unions of opposite-sex partners. Over time, advocates for same-sex marriage won several court cases, laying the groundwork for legalized same-sex marriage across the United States, including the June 2013 decision in United States v. Windsor overturning part of DOMA, and the Supreme Court's dismissal of Hollingsworth v. Perry, affirming the August 2010 ruling that found California's Proposition 8 unconstitutional. In October 2014, the U.S. Supreme Court declined to hear appeals to rulings against same-sex marriage bans, which effectively legalized same-sex marriage in Indiana, Oklahoma, Utah, Virginia, Wisconsin, Colorado, North Carolina, West Virginia, and Wyoming (Freedom to Marry, Inc. 2014).
Then, in 2015, the Supreme Court ruled in the case of Obergefell v. Hodges that the right to civil marriage was guaranteed to same-sex couples.

Gender Roles

As we grow, we learn how to behave from those around us. In this socialization process, children are introduced to certain roles that are typically linked to their biological sex. The term gender role refers to society's concept of how men and women are expected to look and how they should behave. These roles are based on norms, or standards, created by society. In U.S. culture, masculine roles are usually associated with strength, aggression, and dominance, while feminine roles are usually associated with passivity, nurturing, and subordination. Role learning starts with socialization at birth. Even today, our society is quick to outfit male infants in blue and female infants in pink, even applying these color-coded gender labels while a baby is in the womb.

One way children learn gender roles is through play. Parents typically supply boys with trucks, toy guns, and superhero paraphernalia, which are active toys that promote motor skills, aggression, and solitary play. Daughters are often given dolls and dress-up apparel that foster nurturing, social proximity, and role play. Studies have shown that children will most likely choose to play with "gender appropriate" toys (or same-gender toys) even when cross-gender toys are available, because parents give children positive feedback (in the form of praise, involvement, and physical closeness) for gender-normative behavior (Caldera, Huston, and O'Brien 1998).

Figure 12.4 Fathers tend to be more involved when their sons engage in gender-appropriate activities such as sports. (Photo courtesy of Shawn Lea/flickr)

The drive to adhere to masculine and feminine gender roles continues later in life. Men tend to outnumber women in professions such as law enforcement, the military, and politics.
Women tend to outnumber men in care-related occupations such as childcare, healthcare (even though the term "doctor" still conjures the image of a man), and social work. These occupational roles are examples of typical U.S. male and female behavior, derived from our culture's traditions. Adherence to them demonstrates fulfillment of social expectations but not necessarily personal preference (Diamond 2002).

Gender Identity

U.S. society allows for some level of flexibility when it comes to acting out gender roles. To a certain extent, men can assume some feminine roles and women can assume some masculine roles without interfering with their gender identity. Gender identity is a person's deeply held internal perception of one's gender. Individuals who identify with a gender that is different from their biological sex are called transgender. Transgender is not the same as homosexual, and many homosexual males view both their sex and gender as male. A transgender woman is a person who was assigned male at birth but who identifies and/or lives as a woman; a transgender man was assigned female at birth but lives as a man. While determining the size of the transgender population is difficult, it is estimated that two to five percent of the U.S. population is transgender (Transgender Law and Policy Institute 2007).

Some transgender individuals may undertake a process to change their outward, physical, or sexual characteristics in order for their physical being to better align with their gender identity. They may also be known as male-to-female (MTF) or female-to-male (FTM). Not all transgender individuals choose to alter their bodies: many will maintain their original anatomy but may present themselves to society as another gender. This is typically done by adopting the dress, hairstyle, mannerisms, or other characteristics typically assigned to another gender.
It is important to note that people who cross-dress, or wear clothing that is traditionally assigned to a gender different from their biological sex, are not necessarily transgender. Cross-dressing is typically a form of self-expression, entertainment, or personal style, and it is not necessarily an expression against one's assigned gender (APA 2008).

There is no single, conclusive explanation for why people are transgender. Transgender expressions and experiences are so diverse that it is difficult to identify their origin. Some hypotheses suggest biological factors such as genetics or prenatal hormone levels as well as social and cultural factors such as childhood and adulthood experiences. Most experts believe that all of these factors contribute to a person's gender identity (APA 2008).

After years of controversy over the treatment of sex and gender in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (Drescher 2010), the most recent edition, DSM-5, responds to allegations that the term "Gender Identity Disorder" is stigmatizing by replacing it with "Gender Dysphoria." Gender Identity Disorder as a diagnostic category stigmatized the patient by implying there was something "disordered" about them. Gender Dysphoria, on the other hand, removes some of that stigma by taking the word "disorder" out while maintaining a category that will protect patient access to care, including hormone therapy and gender reassignment surgery. In the DSM-5, Gender Dysphoria is a condition of people whose gender at birth is contrary to the one they identify with. For a person to be diagnosed with Gender Dysphoria, there must be a marked difference between the individual's expressed/experienced gender and the gender others would assign him or her, and it must continue for at least six months. In children, the desire to be of the other gender must be present and verbalized.
This diagnosis is now a separate category from sexual dysfunction and paraphilia, another important part of removing stigma from the diagnosis (APA 2013). In 2019, the World Health Organization reclassified "gender identity disorder" as "gender incongruence" and categorized it under sexual health rather than as a mental disorder. Changing the clinical description may contribute to a larger acceptance of transgender people in society.

Studies show that people who identify as transgender are twice as likely to experience assault or discrimination as nontransgender individuals; they are also one and a half times more likely to experience intimidation (National Coalition of Anti-Violence Programs 2010; Giovanniello 2013). Organizations such as the National Coalition of Anti-Violence Programs and Global Action for Trans Equality work to prevent, respond to, and end all types of violence against LGBTQ people. These organizations hope that by educating the public about gender identity and empowering transgender individuals, this violence will end.

SOCIOLOGY IN THE REAL WORLD

What if you had to live as a sex you were not biologically born to? If you are a man, imagine that you were forced to wear frilly dresses, dainty shoes, and makeup to special occasions, and you were expected to enjoy romantic comedies and daytime talk shows. If you are a woman, imagine that you were forced to wear shapeless clothing, put only minimal effort into your personal appearance, not show emotion, and watch countless hours of sporting events and sports-related commentary. It would be pretty uncomfortable, right? Well, maybe not. Many people enjoy participating in activities, whether they are associated with their biological sex or not, and would not mind if some of the cultural expectations for men and women were loosened.

Now, imagine that when you look at your body in the mirror, you feel disconnected.
You feel your genitals are shameful and dirty, and you feel as though you are trapped in someone else's body with no chance of escape. As you get older, you hate the way your body is changing, and, therefore, you hate yourself. These elements of disconnect and shame are important to understand when discussing transgender individuals. Fortunately, sociological studies pave the way for a deeper and more empirically grounded understanding of the transgender experience.

Figure 12.5 Chaz Bono is the transgender son of Cher and Sonny Bono. While he was born female, he considers himself male. Being transgender is not about clothing or hairstyles; it is about self-perception. (Photo courtesy of Greg Hernandez/flickr)

Pronoun usage also plays a role in a person's gender identity. Pronouns are used in place of proper nouns, such as a name, during daily conversation. In many languages, including English, pronouns are gendered. That is, pronouns are intended to identify the gender of the individual being referenced. English has traditionally been binary, providing only "he/him/his" for male subjects and "she/her/hers" for female subjects. This binary system excludes those who identify as neither male nor female. The word "they," which was used for hundreds of years as a singular pronoun, is more inclusive. In fact, Merriam-Webster selected this use of "they" as Word of the Year for 2019. "They" and other pronouns are now used to reference those who do not identify as male or female on the spectrum of gender identities. Gender-inclusive language is an important step in recognizing and accepting those whose gender is neither man nor woman (e.g., gender nonconforming, gender neutral, gender fluid, genderqueer, or non-binary).

who are the elderly; aging in society

Think of U.S. movies and television shows you have watched recently. Did any of them feature older actors and actresses? What roles did they play? How were these older actors portrayed? Were they cast as main characters in a love story? Or were they cast as grouchy old people? Many media portrayals of the elderly reflect negative cultural attitudes toward aging. In the United States, society tends to glorify youth and associate it with beauty and sexuality. In comedies, the elderly are often associated with grumpiness or hostility. Rarely do the roles of older people convey the fullness of life experienced by seniors: as employees, lovers, or the myriad other roles they have in real life. What values does this reflect?

One hindrance to society's fuller understanding of aging is that people rarely understand the process of aging until they reach old age themselves. (As opposed to childhood, for instance, which we can all look back on.) Therefore, myths and assumptions about the elderly and aging are common. Many stereotypes exist surrounding the realities of being an older adult. While individuals often encounter stereotypes associated with race and gender and are thus more likely to think critically about them, many people accept age stereotypes without question (Levy 2002).

Each culture has a certain set of expectations and assumptions about aging, all of which are part of our socialization. While the landmarks of maturing into adulthood are a source of pride, signs of natural aging can be cause for shame or embarrassment. Some people try to fight off the appearance of aging with cosmetic surgery. Although many seniors report that their lives are more satisfying than ever, and their self-esteem is stronger than when they were young, they are still subject to cultural attitudes that make them feel invisible and devalued.

Gerontology is a field of science that seeks to understand the process of aging and the challenges encountered as seniors grow older.
Gerontologists investigate age, aging, and the aged. Gerontologists study what it is like to be an older adult in a society and the ways that aging affects members of a society. As a multidisciplinary field, gerontology includes the work of medical and biological scientists, social scientists, and even financial and economic scholars. Social gerontology refers to a specialized field of gerontology that examines the social (and sociological) aspects of aging. Researchers focus on developing a broad understanding of the experiences of people at specific ages, such as mental and physical wellbeing, plus age-specific concerns such as the process of dying. Social gerontologists work as social researchers, counselors, community organizers, and service providers for older adults. Because of their specialization, social gerontologists are in a strong position to advocate for older adults.

Scholars in these disciplines have learned that "aging" reflects not only the physiological process of growing older but also our attitudes and beliefs about the aging process. You've likely seen online calculators that promise to determine your "real age" as opposed to your chronological age. These ads target the notion that people may "feel" a different age than their actual years. Some sixty-year-olds feel frail and elderly, while some eighty-year-olds feel sprightly. Equally revealing is that as people grow older they define "old age" in terms of greater years than their current age (Logan 1992). Many people want to postpone old age and regard it as a phase that will never arrive. Some older adults even succumb to stereotyping their own age group (Rothbaum 1983).

In the United States, the experience of being elderly has changed greatly over the past century. In the late 1800s and early 1900s, many U.S. households were home to multigenerational families, and the experiences and wisdom of elders were respected.
They offered wisdom and support to their children and often helped raise their grandchildren (Sweetser 1984). Multigenerational U.S. families began to decline after World War II, and their numbers reached a low point around 1980, but they are on the rise again. In fact, a 2010 Pew Research Center analysis of census data found that multigenerational families in the United States have now reached a record high. The 2008 census data indicated that 49 million people in the United States, or 16.1 percent of the country's total population, live in a family household with at least two adult generations, or a grandparent and at least one other generation.

Attitudes toward the elderly have also been affected by large societal changes that have happened over the past 100 years. Researchers believe industrialization and modernization have contributed greatly to lowering the power, influence, and prestige the elderly once held. The elderly have both benefitted and suffered from these rapid social changes. In modern societies, a strong economy created new levels of prosperity for many people. Healthcare has become more widely accessible, and medicine has advanced, which allows the elderly to live longer. However, older people are not as essential to the economic survival of their families and communities as they were in the past.

Studying Aging Populations

Figure 13.2 How old is this woman? In modern U.S. society, appearance is not a reliable indicator of age. In addition to genetic differences, health habits, hair dyes, Botox, and the like make traditional signs of aging increasingly unreliable. (Photo courtesy of the Sean and Lauren Spectacular/flickr)

Since its creation in 1790, the U.S. Census Bureau has been tracking age in the population. Age is an important factor to analyze with accompanying demographic figures, such as income and health. The population pyramid below shows projected age distribution patterns for the next several decades.
Figure 13.3 This population pyramid shows the age distribution pattern for 2010 and projected patterns for 2030 and 2050. (Graph courtesy of the U.S. Census Bureau)

Statisticians use data to calculate the median age of a population, that is, the age that marks the halfway point in a population: half the people are younger, and half are older. In the United States, the median age is about forty (U.S. Census Bureau 2010). That means that about half of the people in the United States are under forty and about half are over forty. This median age has been increasing, which indicates the population as a whole is growing older.

A cohort is a group of people who share a statistical or demographic trait. People belonging to the same age cohort were born in the same time frame. Understanding a population's age composition can point to certain social and cultural factors and help governments and societies plan for future social and economic challenges. Sociological studies on aging might help explain the difference between Native American age cohorts and the general population. While Native American societies have a strong tradition of revering their elders, they also have a lower life expectancy because of lack of access to healthcare and high levels of mercury in fish, which is a traditional part of their diet.

Phases of Aging: The Young-Old, Middle-Old, and Old-Old

In the United States, all people over eighteen years old are considered adults, but there is a large difference between a person who is twenty-one years old and a person who is forty-five years old. More specific breakdowns, such as "young adult" and "middle-aged adult," are helpful. In the same way, groupings are helpful in understanding the elderly. The elderly are often lumped together to include everyone over the age of sixty-five. But a sixty-five-year-old's experience of life is much different from a ninety-year-old's.
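The median-age and cohort ideas described above can be made concrete with a few lines of code. This is an illustrative sketch using made-up ages, not real Census Bureau data:

```python
# Illustrative sketch: median age and age grouping (hypothetical data).
from statistics import median

# A made-up sample of ages from a small survey, not real census figures.
ages = [3, 12, 19, 24, 31, 38, 40, 42, 47, 55, 63, 70, 78, 86, 91]

# The median age marks the halfway point: half the sample is younger,
# half is older.
median_age = median(ages)
print(median_age)  # -> 42

# Bucket people into ten-year groups as a rough stand-in for cohorts
# (a true cohort is defined by shared birth years, not current age).
groups = {}
for age in ages:
    decade = (age // 10) * 10  # e.g., 47 -> 40
    groups.setdefault(decade, []).append(age)
print(groups[40])  # -> [40, 42, 47]
```

Tracking how such a median shifts between censuses is exactly how demographers conclude that the population as a whole is growing older.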
The United States' older adult population can be divided into three life-stage subgroups: the young-old (approximately sixty-five to seventy-four years old), the middle-old (seventy-five to eighty-four years old), and the old-old (over age eighty-five). Today's young-old age group is generally happier, healthier, and financially better off than the young-old of previous generations. In the United States, people are better able to prepare for aging because resources are more widely available. Also, many people are making proactive quality-of-life decisions about their old age while they are still young. In the past, family members made care decisions when an elderly person reached a health crisis, often leaving the elderly person with little choice about what would happen. The elderly are now able to choose housing, for example, that allows them some independence while still providing care when it is needed. Living wills, retirement planning, and medical power of attorney are other concerns that are increasingly handled in advance.

The Graying of the United States

Figure 13.4 As senior citizens begin to make up a larger percentage of the United States, the organizations supporting them grow stronger. (Photo courtesy of Congressman George Miller/flickr)

What does it mean to be elderly? Some define it as an issue of physical health, while others simply define it by chronological age. The U.S. government, for example, typically classifies people aged sixty-five years old as elderly, at which point citizens are eligible for federal benefits such as Social Security and Medicare. The World Health Organization has no standard, other than noting that sixty-five years old is the commonly accepted definition in most core nations, but it suggests a cut-off somewhere between fifty and fifty-five years old for semi-peripheral nations, such as those in Africa (World Health Organization 2012).
AARP (formerly the American Association of Retired Persons) cites fifty as the eligible age of membership. It is interesting to note AARP's name change; by taking the word "retired" out of its name, the organization can broaden its base to any older people in the United States, not just retirees. This is especially important now that many people are working to age seventy and beyond. There is an element of social construction, both local and global, in the way individuals and nations define who is elderly; that is, the shared meaning of the concept of elderly is created through interactions among people in society. This is exemplified by the truism that you are only as old as you feel.

Demographically, the U.S. population over sixty-five years old increased from 3 million in 1900 to 33 million in 1994 (Hobbs 1994) and to 36.8 million in 2010 (U.S. Census Bureau 2011c). This is a greater than tenfold increase in the elderly population, compared to a mere tripling of both the total population and of the population under sixty-five years old (Hobbs 1994). This increase has been called "the graying of America," a term that describes the phenomenon of a larger and larger percentage of the population getting older and older. There are several reasons why the United States is graying so rapidly. One of these is life expectancy: the average number of years a person born today may expect to live.

When we review Census Bureau statistics grouping the elderly by age, it is clear that in the United States, at least, we are living longer. In 2010, there were about 80,000 centenarians in the United States alone. They make up one of the fastest-growing segments of the population (Boston University School of Medicine 2014). People over ninety years of age now account for 4.7 percent of the older population, defined as age sixty-five or above; this percentage is expected to reach 10 percent by the year 2050 (U.S. Census Bureau 2011). As of 2013, the U.S.
Census Bureau reports that 14.1 percent of the total U.S. population is sixty-five years old or older. It is interesting to note that not all people in the United States age equally. Most glaring is the difference between men and women; as Figure 13.5 shows, women have longer life expectancies than men. In 2010, there were ninety sixty-five-year-old men per one hundred sixty-five-year-old women. However, there were only eighty seventy-five-year-old men per one hundred seventy-five-year-old women, and only sixty eighty-five-year-old men per one hundred eighty-five-year-old women. Nevertheless, as the graph shows, the sex ratio actually increased over time, indicating that men are closing the gap between their life spans and those of women (U.S. Census Bureau 2010). Figure 13.5 This U.S. Census graph shows that women live significantly longer than men. However, over the past two decades, men have narrowed the gap by which women outlive them. (Graph courtesy of the U.S. Census Bureau) Baby Boomers Of particular interest to gerontologists today is the population of baby boomers, the cohort born between 1946 and 1964 and now reaching their sixties. Coming of age in the 1960s and early 1970s, the baby boom generation was the first group of children and teenagers with their own spending power and therefore their own marketing power (Macunovich 2000). As this group has aged, it has redefined what it means to be young, middle-aged, and now old. People in the boomer generation do not want to grow old the way their grandparents did; the result is a wide range of products designed to ward off the effects—or the signs—of aging. Previous generations of people over sixty-five were "old." Baby boomers are in "later life" or "the third age" (Gilleard and Higgs 2007). The baby boom generation is the cohort driving much of the dramatic increase in the over-sixty-five population. Figure 13.6 shows a comparison of the U.S. population by age and gender between 2000 and 2010. 
The biggest bulge in the pyramid (representing the largest population group) moves up the pyramid over the course of the decade; in 2000, the largest population group was age thirty-five to fifty-five. In 2010, that group was age forty-five to sixty-five, meaning the oldest baby boomers were just reaching the age at which the U.S. Census considers them elderly. We can predict that by 2020 the baby boom bulge will have continued to rise up the pyramid, making the largest U.S. population group those between sixty-five and eighty-five years old. Figure 13.6 In this U.S. Census pyramid chart, the baby boom bulge was aged thirty-five to fifty-five in 2000. In 2010, they were aged forty-five to sixty-five. (Graph courtesy of the U.S. Census Bureau) This aging of the baby boom cohort has serious implications for our society. Healthcare is one of the areas most impacted by this trend. For years, hand-wringing has abounded about the additional burden the boomer cohort will place on Medicare, a government-funded program that provides healthcare services to people over sixty-five years old. And indeed, the Congressional Budget Office's 2008 long-term outlook report shows that Medicare spending is expected to increase from 3 percent of gross domestic product (GDP) in 2009 to 8 percent of GDP in 2030, and to 15 percent in 2080 (Congressional Budget Office 2008). Certainly, as boomers age, they will put increasing burdens on the entire U.S. healthcare system. A study from 2008 indicates that medical schools are not producing enough medical professionals who specialize in treating geriatric patients (Gerontological Society of America 2008). However, other studies indicate that aging boomers will bring economic growth to the healthcare industries, particularly in areas like pharmaceutical manufacturing and home healthcare services (Bierman 2011). Further, some argue that many of our medical advances of the past few decades are a result of boomers' health requirements. 
Unlike the elderly of previous generations, boomers do not expect that turning sixty-five means their active lives are over. They are not willing to abandon work or leisure activities, but they may need more medical support to keep living vigorous lives. This large group's desire to continue a high activity level past age sixty-five is driving innovation in the medical industry (Shaw). The economic impact of aging boomers is also an area of concern for many observers. Although the baby boom generation earned more than previous generations and enjoyed a higher standard of living, they also spent their money lavishly and did not adequately prepare for retirement. According to a 2008 report from the McKinsey Global Institute, approximately two-thirds of early boomer households have not accumulated enough savings to maintain their lifestyles. This will have a ripple effect on the economy as boomers work and spend less (Farrel et al. 2008). Just as some observers are concerned about the possibility of Medicare being overburdened, Social Security is considered to be at risk. Social Security is a government-run retirement program funded primarily through payroll taxes. With enough people paying into the program, there should be enough money for retirees to take out. But with the aging boomer cohort starting to receive Social Security benefits and fewer workers paying into the Social Security trust fund, economists warn that the system will collapse by the year 2037. A similar warning came in the 1980s; in response to recommendations from the Greenspan Commission, the retirement age (the age at which people could start receiving full Social Security benefits) was raised from sixty-five to sixty-seven and the payroll tax was increased. A similar hike in retirement age, perhaps to seventy, is a possible solution to the current threat to Social Security (Reuteman 2010). 
Aging around the World Figure 13.7 Cultural values and attitudes can shape people's experience of aging. (Photo courtesy of Tom Coppen/flickr) From 1950 to approximately 2010, the share of the global population age sixty-five and older increased from about 5 percent to 7 percent (Lee 2009). This percentage is expected to increase and will have a huge impact on the dependency ratio: the ratio of nonproductive citizens (young, disabled, or elderly) to productive working citizens (Bartram and Roe 2005). One country that will soon face a serious aging crisis is China, which is on the cusp of an "aging boom"— a period when its elderly population will dramatically increase. The number of people above age sixty in China today is about 178 million, which amounts to 13.3 percent of its total population (Xuequan 2011). By 2050, nearly a third of the Chinese population will be age sixty or older, which will put a significant burden on the labor force and impact China's economic growth (Bannister, Bloom, and Rosenberg 2010). As healthcare improves and life expectancy increases across the world, elder care will be an emerging issue. Wienclaw (2009) suggests that with fewer working-age citizens available to provide home care and long-term assisted care to the elderly, the costs of elder care will increase. Worldwide, the expectation governing the amount and type of elder care varies from culture to culture. For example, in Asia the responsibility for elder care lies firmly on the family (Yap, Thang, and Traphagan 2005). This is different from the approach in most Western countries, where the elderly are considered independent and are expected to tend to their own care. It is not uncommon for family members to intervene only if the elderly relative requires assistance, often due to poor health. Even then, caring for the elderly is considered voluntary. 
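The dependency ratio is simple arithmetic, and the figures for China quoted above can be used to check it. The following is a minimal sketch (Python is an editorial choice, not part of the text): the 178 million and 13.3 percent figures come from the passage, while the 70/30 split between working-age and dependent citizens is purely hypothetical, for illustration.

```python
def dependency_ratio(dependents: float, working: float) -> float:
    """Number of nonproductive (young, disabled, or elderly) citizens
    per 100 productive working citizens."""
    return dependents / working * 100

# China's total population implied by the text's own numbers:
elderly = 178_000_000   # people above age sixty (Xuequan 2011)
share = 0.133           # 13.3 percent of the total population
total = elderly / share
print(round(total / 1e9, 2))  # prints 1.34 (about 1.34 billion people)

# Hypothetical split: suppose 70 percent of the population is working-age.
working = 0.70 * total
dependents = total - working
print(round(dependency_ratio(dependents, working), 1))  # prints 42.9
```

Under those assumed numbers, every 100 workers would support roughly 43 dependents; as the elderly share grows, the ratio climbs.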
In the United States, decisions to care for an elderly relative are often conditional, based on the promise of future returns, such as inheritance or, in some cases, the amount of support the elderly person provided to the caregiver in the past (Hashimoto 1996). These differences are based on cultural attitudes toward aging. In China, several studies have noted the attitude of filial piety (deference and respect to one's parents and ancestors in all things) as defining all other virtues (Hsu 1971; Hamilton 1990). Cultural attitudes in Japan prior to approximately 1986 supported the idea that the elderly deserve assistance (Ogawa and Retherford 1993). However, seismic shifts in major social institutions (like family and economy) have created an increased demand for community and government care. For example, the increase in women working outside the home has made it more difficult to provide in-home care to aging parents, which leads to an increase in the need for government-supported institutions (Raikhola and Kuroki 2009). In the United States, by contrast, many people view caring for the elderly as a burden. Even when there is a family member able and willing to provide for an elderly family member, 60 percent of family caregivers are employed outside the home and are unable to provide the needed support. At the same time, however, many middle-class families are unable to bear the financial burden of "outsourcing" professional healthcare, resulting in gaps in care (Bookman and Kimbrel 2011). It is important to note that even within the United States not all demographic groups treat aging the same way. While most people in the United States are reluctant to place their elderly members into out-of-home assisted care, demographically speaking, the groups least likely to do so are Latinos, African Americans, and Asians (Bookman and Kimbrel 2011). 
Globally, the United States and other core nations are fairly well equipped to handle the demands of an exponentially increasing elderly population. However, peripheral and semi-peripheral nations face similar increases without comparable resources. Poverty among elders is a concern, especially among elderly women. The feminization of the aging poor, evident in peripheral nations, is directly due to the number of elderly women in those countries who are single, illiterate, and not a part of the labor force (Mujahid 2006). In 2002, the Second World Assembly on Aging was held in Madrid, Spain, resulting in the Madrid Plan, an internationally coordinated effort to create comprehensive social policies to address the needs of the worldwide aging population. The plan identifies three themes to guide international policy on aging: 1) publicly acknowledging the global challenges caused by, and the global opportunities created by, a rising global population of older people; 2) empowering the elderly; and 3) linking international policies on aging to international policies on development (Zelenev 2008). The Madrid Plan has not yet been successful in achieving all its aims. However, it has increased awareness of the various issues associated with a global aging population and has raised international consciousness of the way that the factors influencing the vulnerability of the elderly (social exclusion, prejudice and discrimination, and a lack of socio-legal protection) overlap with other developmental issues (basic human rights, empowerment, and participation), leading to an increase in legal protections (Zelenev 2008).

sex and sexuality

Sexual Attitudes and Practices In the area of sexuality, sociologists focus their attention on sexual attitudes and practices, not on physiology or anatomy. Sexuality is viewed as a person's capacity for sexual feelings. Studying sexual attitudes and practices is a particularly interesting field of sociology because sexual behavior is a cultural universal. Throughout time and place, the vast majority of human beings have participated in sexual relationships (Broude 2003). Each society, however, interprets sexuality and sexual activity in different ways. Many societies around the world have different attitudes about premarital sex, the age of sexual consent, homosexuality, masturbation, and other sexual behaviors (Widmer, Treas, and Newcomb 1998). At the same time, sociologists have learned that certain norms are shared among most societies. The incest taboo is present in every society, though which relative is deemed unacceptable for sex varies widely from culture to culture. For example, sometimes the relatives of the father are considered acceptable sexual partners for a woman while the relatives of the mother are not. Likewise, societies generally have norms that reinforce their accepted social system of sexuality. What is considered "normal" in terms of sexual behavior is based on the mores and values of the society. Societies that value monogamy, for example, would likely oppose extramarital sex. Individuals are socialized to sexual attitudes by their family, education system, peers, media, and religion. Historically, religion has been the greatest influence on sexual behavior in most societies, but in more recent years, peers and the media have emerged as two of the strongest influences, particularly among U.S. teens (Potard, Courtois, and Rusch 2008). Let us take a closer look at sexual attitudes in the United States and around the world. 
Sexuality around the World Cross-national research on sexual attitudes in industrialized nations reveals that normative standards differ across the world. For example, several studies have shown that Scandinavian students are more tolerant of premarital sex than are U.S. students (Grose 2007). A study of 37 countries reported that non-Western societies—like China, Iran, and India—valued chastity highly in a potential mate, while Western European countries—such as France, the Netherlands, and Sweden—placed little value on prior sexual experiences (Buss 1989).

Table 12.1 Chastity in Terms of Potential Mates (Source: Buss 1989)

Country                 Males (Mean)    Females (Mean)
China                   2.54            2.61
India                   2.44            2.17
Indonesia               2.06            1.98
Iran                    2.67            2.23
Israel (Palestinian)    2.24            0.96
Sweden                  0.25            0.28
Norway                  0.31            0.30
Finland                 0.27            0.29
The Netherlands         0.29            0.29

Even among Western cultures, attitudes can differ. For example, according to a 33,590-person survey across 24 countries, 89 percent of Swedes responded that there is nothing wrong with premarital sex, while only 42 percent of Irish responded this way. From the same study, 93 percent of Filipinos responded that sex before age 16 is always wrong or almost always wrong, while only 75 percent of Russians responded this way (Widmer, Treas, and Newcomb 1998). Sexual attitudes can also vary within a country. For instance, 45 percent of Spaniards responded that homosexuality is always wrong, while 42 percent responded that it is never wrong; only 13 percent responded somewhere in the middle (Widmer, Treas, and Newcomb 1998). Of industrialized nations, Sweden is thought to be the most liberal when it comes to attitudes about sex, including sexual practices and sexual openness. The country has very few regulations on sexual images in the media, and sex education, which starts around age six, is a compulsory part of Swedish school curricula. Sweden's permissive approach to sex has helped the country avoid some of the major social problems associated with sex. 
For example, rates of teen pregnancy and sexually transmitted disease are among the world's lowest (Grose 2007). It would appear that Sweden is a model for the benefits of sexual freedom and frankness. However, implementing Swedish ideals and policies regarding sexuality in other, more politically conservative, nations would likely be met with resistance. Sexuality in the United States The United States prides itself on being the land of the "free," but it is rather restrictive in its citizens' general attitudes about sex compared to other industrialized nations. In an international survey, 29 percent of U.S. respondents stated that premarital sex is always wrong, while the average among the 24 countries surveyed was 17 percent. Similar discrepancies were found in questions about the condemnation of sex before the age of 16, extramarital sex, and homosexuality, with total disapproval of these acts being 12, 13, and 11 percent higher, respectively, in the United States than the study's average (Widmer, Treas, and Newcomb 1998). U.S. culture is particularly restrictive in its attitudes about sex when it comes to women and sexuality. It is widely believed that men are more sexual than women. In fact, there is a popular notion that men think about sex every seven seconds. Research, however, suggests that men think about sex an average of 19 times per day, compared to 10 times per day for women (Fisher, Moore, and Pittenger 2011). The belief that men have—or have the right to—more sexual urges than women creates a double standard. Ira Reiss, a pioneer researcher in the field of sexual studies, defined the double standard as prohibiting premarital sexual intercourse for women but allowing it for men (Reiss 1960). 
This standard has evolved into allowing women to engage in premarital sex only within committed love relationships, but allowing men to engage in sexual relationships with as many partners as they wish without condition (Milhausen and Herold 1999). Due to this double standard, a woman is likely to have fewer sexual partners in her lifetime than a man. According to a Centers for Disease Control and Prevention (CDC) survey, the average thirty-five-year-old woman has had three opposite-sex sexual partners while the average thirty-five-year-old man has had twice as many (Centers for Disease Control 2011). The future of a society's sexual attitudes may be somewhat predicted by the values and beliefs that a country's youth expresses about sex and sexuality. Data from the most recent National Survey of Family Growth reveal that 70 percent of boys and 78 percent of girls ages fifteen to nineteen said they "agree" or "strongly agree" that "it's okay for an unmarried female to have a child" (National Survey of Family Growth 2013). In a separate survey, 65 percent of teens stated that they "strongly agreed" or "somewhat agreed" that although waiting until marriage for sex is a nice idea, it's not realistic (NBC News 2005). This does not mean that today's youth have given up traditional sexual values such as monogamy. Nearly all college men (98.9 percent) and women (99.2 percent) who participated in a 2002 study on sexual attitudes stated they wished to settle down with one mutually exclusive sexual partner at some point in their lives, ideally within the next five years (Pedersen et al. 2002). Sex Education One of the biggest controversies regarding sexual attitudes is sexual education in U.S. classrooms. Unlike in Sweden, sex education is not required in all public school curricula in the United States. The heart of the controversy is not about whether sex education should be taught in school (studies have shown that only seven percent of U.S. 
adults oppose sex education in schools); it is about the type of sex education that should be taught. Much of the debate is over the issue of abstinence. In a 2005 survey, 15 percent of U.S. respondents believed that schools should teach abstinence exclusively and should not provide contraceptives or information on how to obtain them. Forty-six percent believed schools should institute an abstinence-plus approach, which teaches children that abstinence is best but still gives information about protected sex. Thirty-six percent believed teaching about abstinence is not important and that sex education should focus on sexual safety and responsibility (NPR 2010). Research suggests that while government officials may still be debating the content of sexual education in public schools, the majority of U.S. adults are not. Those who advocate abstinence-only programs may be the proverbial squeaky wheel when it comes to this controversy, since they represent only 15 percent of parents. Fifty-five percent of respondents feel giving teens information about sex and how to obtain and use protection will not encourage them to have sexual relations earlier than they would under an abstinence program. About 77 percent think such a curriculum would make teens more likely to practice safe sex now and in the future (NPR 2004). Sweden, whose comprehensive sex education program in its public schools educates participants about safe sex, can serve as a model for this approach. The teenage birthrate in Sweden is 7 per 1,000 females aged fifteen to nineteen, compared with 49 per 1,000 in the United States. Among fifteen- to nineteen-year-olds, reported cases of gonorrhea in Sweden are roughly one six-hundredth of the rate in the United States (Grose 2007). Sociological Perspectives on Sex and Sexuality Sociologists representing all three major theoretical perspectives study the role sexuality plays in social life today. 
Scholars recognize that sexuality continues to be an important and defining social location and that the manner in which sexuality is constructed has a significant effect on perceptions, interactions, and outcomes. Structural Functionalism When it comes to sexuality, functionalists stress the importance of regulating sexual behavior to ensure marital cohesion and family stability. Since functionalists identify the family unit as the most integral component in society, they maintain a strict focus on it and argue in favor of social arrangements that promote and ensure family preservation. Functionalists such as Talcott Parsons (1955) have long argued that the regulation of sexual activity is an important function of the family. Social norms surrounding family life have, traditionally, encouraged sexual activity within the family unit (marriage) and have discouraged activity outside of it (premarital and extramarital sex). From a functionalist point of view, the purpose of encouraging sexual activity in the confines of marriage is to intensify the bond between spouses and to ensure that procreation occurs within a stable, legally recognized relationship. This structure gives offspring the best possible chance for appropriate socialization and the provision of basic resources. From a functionalist standpoint, homosexuality cannot be promoted on a large scale as an acceptable substitute for heterosexuality. If this occurred, procreation would eventually cease. Thus, homosexuality, if occurring predominantly within the population, is dysfunctional to society. This criticism does not take into account the increasing legal acceptance of same-sex marriage, or the rise in gay and lesbian couples who choose to bear and raise children through a variety of available resources. 
Conflict Theory From a conflict theory perspective, sexuality is another area in which power differentials are present and where dominant groups actively work to promote their worldview as well as their economic interests. Recently, we have seen the debate over the legalization of gay marriage intensify nationwide. For conflict theorists, there are two key dimensions to the debate over same-sex marriage—one ideological and the other economic. Dominant groups (in this instance, heterosexuals) wish for their worldview—which embraces traditional marriage and the nuclear family—to win out over what they see as the intrusion of a secular, individually driven worldview. On the other hand, many gay and lesbian activists argue that legal marriage is a fundamental right that cannot be denied based on sexual orientation and that, historically, there already exists a precedent for changes to marriage laws: the 1960s legalization of formerly forbidden interracial marriages is one example. From an economic perspective, activists in favor of same-sex marriage point out that legal marriage brings with it certain entitlements, many of which are financial in nature, like Social Security benefits and medical insurance (Solmonese 2008). Denial of these benefits to gay couples is wrong, they argue. Conflict theory suggests that as long as heterosexuals and homosexuals struggle over these social and financial resources, there will be some degree of conflict. Symbolic Interactionism Interactionists focus on the meanings associated with sexuality and with sexual orientation. Since femininity is devalued in U.S. society, those who adopt such traits are subject to ridicule; this is especially true for boys or men. Just as masculinity is the symbolic norm, so too has heterosexuality come to signify normalcy. Prior to 1973, the American Psychiatric Association (APA) defined homosexuality as an abnormal or deviant disorder. Interactionist labeling theory recognizes the impact this has made. 
Before 1973, the APA was powerful in shaping social attitudes toward homosexuality by defining it as pathological. Today, the APA cites no association between sexual orientation and psychopathology and sees homosexuality as a normal aspect of human sexuality (APA 2008). Interactionists are also interested in how discussions of homosexuals often focus almost exclusively on the sex lives of gays and lesbians; homosexuals, especially men, may be assumed to be hypersexual and, in some cases, deviant. Interactionism might also focus on the slurs used to describe homosexuals. Labels such as "queen" and "fag" are often used to demean homosexual men by feminizing them. This subsequently affects how homosexuals perceive themselves. Recall Cooley's "looking-glass self," which suggests that self develops as a result of our interpretation and evaluation of the responses of others (Cooley 1902). Constant exposure to derogatory labels, jokes, and pervasive homophobia can lead to a negative self-image, or worse, self-hate. The CDC reports that homosexual youths who experience high levels of social rejection are six times more likely to have high levels of depression and eight times more likely to have attempted suicide (CDC 2011). Queer Theory Queer Theory is an interdisciplinary approach to sexuality studies that identifies Western society's rigid splitting of gender into male and female roles and questions the manner in which we have been taught to think about sexual orientation. According to Jagose (1996), Queer [Theory] focuses on mismatches between anatomical sex, gender identity, and sexual orientation, not just division into male/female or homosexual/heterosexual. By calling their discipline "queer," scholars reject the effects of labeling; instead, they embrace the word "queer" and reclaim it for their own purposes. The perspective highlights the need for a more flexible and fluid conceptualization of sexuality—one that allows for change, negotiation, and freedom. 
The current schema used to classify individuals as either "heterosexual" or "homosexual" pits one orientation against the other. This mirrors other oppressive schemas in our culture, especially those surrounding gender and race (black versus white, male versus female). Queer theorist Eve Kosofsky Sedgwick argued against U.S. society's monolithic definition of sexuality and its reduction to a single factor: the sex of someone's desired partner. Sedgwick identified dozens of other ways in which people's sexualities differ, such as: Even identical genital acts mean very different things to different people. Sexuality makes up a large share of the self-perceived identity of some people, a small share of others'. Some people spend a lot of time thinking about sex, others little. Some people like to have a lot of sex, others little or none. Many people have their richest mental/emotional involvement with sexual acts that they don't do, or don't even want to do. Some people like spontaneous sexual scenes, others like highly scripted ones, others like spontaneous-sounding ones that are nonetheless totally predictable. Some people, homo-, hetero-, and bisexual, experience their sexuality as deeply embedded in a matrix of gender meanings and gender differentials. Others of each sexuality do not (Sedgwick 1990). Thus, theorists utilizing queer theory strive to question the ways society perceives and experiences sex, gender, and sexuality, opening the door to new scholarly understanding. Throughout this chapter we have examined the complexities of gender, sex, and sexuality. Differentiating between sex, gender, and sexual orientation is an important first step to a deeper understanding and critical analysis of these issues. Understanding the sociology of sex, gender, and sexuality will help to build awareness of the inequalities experienced by subordinate categories such as women, homosexuals, and transgender individuals.

variations in family life

The combination of husband, wife, and children that 99.8 percent of people in the United States believe constitutes a family is not representative of 99.8 percent of U.S. families. According to 2010 census data, only 66 percent of children under seventeen years old live in a household with two married parents. This is a decrease from 77 percent in 1980 (U.S. Census 2011). This two-parent family structure is known as a nuclear family, referring to married parents and children as the nucleus, or core, of the group. Recent years have seen a rise in variations of the nuclear family in which the parents are not married. Three percent of children live with two cohabiting parents (U.S. Census 2011). Figure 14.4 More than one quarter of U.S. children live in a single-parent household. (Photo courtesy of Ross Griff/flickr) Single Parents Single-parent households are on the rise. In 2010, 27 percent of children lived with a single parent only, up from 25 percent in 2008. Of that 27 percent, 23 percent live with their mother and three percent live with their father. Ten percent of children living with their single mother and 20 percent of children living with their single father also live with the cohabiting partner of their parent (for example, boyfriends or girlfriends). Stepparents are an additional family element in two-parent homes. Among children living in two-parent households, 9 percent live with a biological or adoptive parent and a stepparent. The majority (70 percent) of those children live with their biological mother and a stepfather. Family structure has been shown to vary with the age of the child. Older children (fifteen to seventeen years old) are less likely to live with two parents than adolescent children (six to fourteen years old) or young children (zero to five years old). Older children who do live with two parents are also more likely to live with stepparents (U.S. Census 2011). In some family structures a parent is not present at all. 
In 2010, three million children (4 percent of all children) lived with a guardian who was neither their biological nor adoptive parent. Of these children, 54 percent live with grandparents, 21 percent live with other relatives, and 24 percent live with nonrelatives. This family structure is referred to as the extended family, and may include aunts, uncles, and cousins living in the same home. Foster parents account for about a quarter of nonrelatives. The practice of grandparents acting as parents, whether alone or in combination with the child's parent, is becoming widespread among today's families (De Toledo and Brown 1995). Nine percent of all children live with a grandparent, and in nearly half those cases, the grandparent maintains primary responsibility for the child (U.S. Census 2011). A grandparent functioning as the primary care provider often results from parental drug abuse, incarceration, or abandonment. Events like these can render the parent incapable of caring for his or her child. Changes in the traditional family structure raise questions about how such societal shifts affect children. U.S. Census statistics have long shown that children living in homes with both parents grow up with more financial and educational advantages than children who are raised in single-parent homes (U.S. Census 1997). Parental marital status seems to be a significant indicator of advancement in a child's life. Children living with a divorced parent typically have more advantages than children living with a parent who never married; this is particularly true of children who live with divorced fathers. This correlates with the statistic that never-married parents are typically younger, have fewer years of schooling, and have lower incomes (U.S. Census 1997). Six in ten children living with only their mother live near or below the poverty level. Of those being raised by never-married mothers, 69 percent live in or near poverty, compared to 45 percent for divorced mothers (U.S. 
Census 1997). Though other factors such as age and education play a role in these differences, it can be inferred that marriage between parents is generally beneficial for children. Cohabitation Living together before or in lieu of marriage is a growing option for many couples. Cohabitation, when a man and woman live together in a sexual relationship without being married, was practiced by an estimated 7.5 million people (11.5 percent of the population) in 2011, which shows an increase of 13 percent since 2009 (U.S. Census 2010). This surge in cohabitation is likely due to the decrease in social stigma pertaining to the practice. In a 2010 National Center for Health Statistics survey, only 38 percent of the 13,000-person sample thought that cohabitation negatively impacted society (Jayson 2010). Of those who cohabitate, the majority are non-Hispanic with no high school diploma or GED and grew up in a single-parent household (U.S. Census 2010). Cohabitating couples may choose to live together in an effort to spend more time together or to save money on living costs. Many couples view cohabitation as a "trial run" for marriage. Today, approximately 28 percent of men and women cohabitated before their first marriage. By comparison, 18 percent of men and 23 percent of women married without ever cohabitating (U.S. Census Bureau 2010). The vast majority of cohabitating relationships eventually result in marriage; only 15 percent of men and women cohabitate only and do not marry. About one half of cohabitators transition into marriage within three years (U.S. Census 2010). While couples may use this time to "work out the kinks" of a relationship before they wed, the most recent research has found that cohabitation has little effect on the success of a marriage. In fact, those who do not cohabitate before marriage have slightly better rates of remaining married for more than ten years (Jayson 2010). 
Cohabitation may contribute to the increase in the number of men and women who delay marriage. The median age for marriage is the highest it has ever been since the U.S. Census began keeping records: age twenty-six for women and age twenty-eight for men (U.S. Census 2010).

Figure 14.5 As shown by this graph of marital status percentages among young adults, more young people are choosing to delay or opt out of marriage. (U.S. Census Bureau, 2000 Census and American Community Survey)

Same-Sex Couples

The number of same-sex couples has grown significantly in the past decade. The U.S. Census Bureau reported 594,000 same-sex couple households in the United States, a 50 percent increase from 2000. This increase is a result of more coupling, the growing social acceptance of homosexuality, and a subsequent increase in willingness to report it. Nationally, same-sex couple households make up 1 percent of the population, ranging from as little as 0.29 percent in Wyoming to 4.01 percent in the District of Columbia (U.S. Census 2011). Legal recognition of same-sex couples as spouses differs from state to state, as only six states and the District of Columbia have legalized same-sex marriage. The 2010 U.S. Census, however, allowed same-sex couples to report as spouses regardless of whether their state legally recognizes their relationship. Nationally, 25 percent of all same-sex households reported that they were spouses. In states where same-sex marriages are performed, a larger share (42.4 percent) of same-sex couple households were reported as spouses. In terms of demographics, same-sex couples are not very different from opposite-sex couples. Same-sex couple households have an average age of 52 and an average household income of $91,558; opposite-sex couple households have an average age of 59 and an average household income of $95,075. Additionally, 31 percent of same-sex couples are raising children, not far from the 43 percent of opposite-sex couples (U.S. Census 2009).
Of the children in same-sex couple households, 73 percent are biological children (of only one of the parents), 21 percent are adopted only, and 6 percent are a combination of biological and adopted (U.S. Census 2009). While there is some concern from socially conservative groups regarding the well-being of children who grow up in same-sex households, research reports that same-sex parents are as effective as opposite-sex parents. In an analysis of 81 parenting studies, sociologists found no quantifiable data to support the notion that opposite-sex parenting is any better than same-sex parenting. Children of lesbian couples, however, were shown to have slightly lower rates of behavioral problems and higher rates of self-esteem (Biblarz and Stacey 2010).

Staying Single

Gay or straight, a new option for many people in the United States is simply to stay single. In 2010, there were 99.6 million unmarried individuals over age eighteen in the United States, accounting for 44 percent of the total adult population (U.S. Census 2011). In 2010, never-married individuals in the twenty-five to twenty-nine age bracket accounted for 62 percent of women and 48 percent of men, up from 11 percent and 19 percent, respectively, in 1970 (U.S. Census 2011). Single, or never-married, individuals are found in higher concentrations in large cities or metropolitan areas, with New York City being one of the highest. Although both single men and single women report social pressure to get married, women are subject to greater scrutiny. Single women are often portrayed as unhappy "spinsters" or "old maids" who cannot find a man to marry them. Single men, on the other hand, are typically portrayed as lifetime bachelors who cannot settle down or simply "have not found the right girl." Single women report feeling insecure and displaced in their families when their single status is disparaged (Roberts 2007).
However, single women older than thirty-five report feeling secure and happy with their unmarried status, as many women in this category have found success in their education and careers. In general, women feel more independent and more prepared to live a large portion of their adult lives without a spouse or domestic partner than they did in the 1960s (Roberts 2007). The decision to marry or not to marry can be based on a variety of factors, including religion and cultural expectations. Asian individuals are the most likely to marry, while African Americans are the least likely to marry (Venugopal 2011). Additionally, individuals who place no value on religion are more likely to be unmarried than those who place a high value on religion. For black women, however, the importance of religion made no difference in marital status (Bakalar 2010). In general, being single is not a rejection of marriage; rather, it is a lifestyle that does not necessarily include marriage. By age forty, according to census figures, 20 percent of women and 14 percent of men will have never married (U.S. Census Bureau 2011).

Figure 14.6 More and more people in the United States are choosing lifestyles that don't include marriage. (Photo courtesy of Glenn Harper/flickr)

SOCIOLOGICAL RESEARCH

Deceptive Divorce Rates

It is often cited that half of all marriages end in divorce. This statistic has made many people cynical when it comes to marriage, but it is misleading. Let's take a closer look at the data. Using National Center for Health Statistics data from 2003 that show a marriage rate of 7.5 (per 1,000 people) and a divorce rate of 3.8, it would appear that exactly one half of all marriages failed (Hurley 2005).
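The arithmetic behind the "half of all marriages" claim can be sketched in a few lines of Python. Only the 7.5 and 3.8 rates come from the NCHS data cited above; the cohort numbers below are invented purely to illustrate how a cohort-style calculation differs from dividing two same-year crude rates.

```python
# Sketch: why crude same-year rates suggest "half of marriages end in divorce".
# Rates are per 1,000 people (NCHS 2003, as quoted in the text).

def crude_ratio(divorce_rate: float, marriage_rate: float) -> float:
    """Naive 'divorces per marriage' ratio computed from same-year crude rates."""
    return divorce_rate / marriage_rate

def cohort_divorce_share(still_married: int, divorced: int) -> float:
    """Share of a tracked cohort of marriages that ended in divorce."""
    return divorced / (still_married + divorced)

naive = crude_ratio(3.8, 7.5)
print(f"naive ratio from crude rates: {naive:.0%}")   # ~51%: the 'half' claim

# Tracking a hypothetical cohort of 1,000 marriages instead
# (590 intact, 410 divorced -- invented figures for illustration):
cohort = cohort_divorce_share(still_married=590, divorced=410)
print(f"share of cohort divorced: {cohort:.0%}")      # 41%
```

The point of the contrast is that the naive ratio divides two unrelated yearly counts, while the cohort calculation follows the same marriages over time, which is the kind of analysis that produced the roughly 41 percent figure reported by Hurley (2005).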
This reasoning is deceptive, however, because instead of tracing actual marriages to see their longevity (or lack thereof), it compares unrelated statistics: the number of marriages in a given year does not have a direct correlation to the divorces occurring that same year. Research published in the New York Times took a different approach, determining how many people had ever been married, and of those, how many later divorced. The result? According to this analysis, U.S. divorce rates have only gone as high as 41 percent (Hurley 2005). Another way to calculate divorce rates would be through a cohort study. For instance, we could determine the percentage of marriages that are intact after, say, five or seven years, compared to marriages that have ended in divorce after five or seven years. Sociological researchers must remain aware of research methods and how statistical results are applied. As illustrated, different methodologies and different interpretations can lead to contradictory, and even misleading, results.

Theoretical Perspectives on Marriage and Family

Sociologists study families on both the macro and micro level to determine how families function. Sociologists may use a variety of theoretical perspectives to explain events that occur within and outside of the family.

Functionalism

When considering the role of family in society, functionalists uphold the notion that families are an important social institution and that they play a key role in stabilizing society. They also note that family members take on status roles in a marriage or family. The family—and its members—perform certain functions that facilitate the prosperity and development of society. Sociologist George Murdock conducted a survey of 250 societies and determined that there are four universal residual functions of the family: sexual, reproductive, educational, and economic (Lee 1985).
According to Murdock, the family (which for him includes the state of marriage) regulates sexual relations between individuals. He does not deny the existence or impact of premarital or extramarital sex, but states that the family offers a socially legitimate sexual outlet for adults (Lee 1985). This outlet gives way to reproduction, which is a necessary part of ensuring the survival of society. Once children are produced, the family plays a vital role in training them for adult life. As the primary agent of socialization and enculturation, the family teaches young children the ways of thinking and behaving that follow social and cultural norms, values, beliefs, and attitudes. Parents teach their children manners and civility; a well-mannered child reflects a well-mannered parent. Parents also teach children gender roles. Gender roles are an important part of the economic function of a family. In each family, there is a division of labor that consists of instrumental and expressive roles. Men tend to assume the instrumental roles in the family, which typically involve work outside of the family that provides financial support and establishes family status. Women tend to assume the expressive roles, which typically involve work inside of the family that provides emotional support and physical care for children (Crano and Aronoff 1978). According to functionalists, the differentiation of the roles on the basis of sex ensures that families are well balanced and coordinated. When family members move outside of these roles, the family is thrown out of balance and must recalibrate in order to function properly. For example, if the father assumes an expressive role such as providing daytime care for the children, the mother must take on an instrumental role such as gaining paid employment outside of the home in order for the family to maintain balance and function.

Conflict Theory

Conflict theorists are quick to point out that U.S.
families have been defined as private entities, the consequence of which has been to leave family matters to only those within the family. Many people in the United States are resistant to government intervention in the family: parents do not want the government to tell them how to raise their children or to become involved in domestic issues. Conflict theory highlights the role of power in family life and contends that the family is often not a haven but rather an arena where power struggles can occur. This exercise of power often entails the performance of family status roles. Conflict theorists may study conflicts as simple as the enforcement of rules from parent to child, or they may examine more serious issues such as domestic violence (spousal and child), sexual assault, marital rape, and incest. The first study of marital power was performed in 1960. Researchers found that the person with the most access to valued resources held the most power. As money is one of the most valuable resources, men who worked in paid labor outside of the home held more power than women who worked inside the home (Blood and Wolfe 1960). Conflict theorists find disputes over the division of household labor to be a common source of marital discord. Household labor offers no wages and, therefore, no power. Studies indicate that when men do more housework, women experience more satisfaction in their marriages, reducing the incidence of conflict (Coltrane 2000). In general, conflict theorists tend to study areas of marriage and life that involve inequalities or discrepancies in power and authority, as they are reflective of the larger social structure.

Symbolic Interactionism

Interactionists view the world in terms of symbols and the meanings assigned to them (LaRossa and Reitzes 1993). The family itself is a symbol. To some, it is a father, mother, and children; to others, it is any union that involves respect and compassion.
Interactionists stress that family is not an objective, concrete reality. Like other social phenomena, it is a social construct that is subject to the ebb and flow of social norms and ever-changing meanings. Consider the meaning of other elements of family: "parent" was once a symbol of a biological and emotional connection to a child; with more parent-child relationships developing through adoption, remarriage, or change in guardianship, the word "parent" today is less likely to be associated with a biological connection than with whoever is socially recognized as having the responsibility for a child's upbringing. Similarly, the terms "mother" and "father" are no longer rigidly associated with the meanings of caregiver and breadwinner; these meanings have become more fluid as family roles change. Interactionists also recognize how the family status roles of each member are socially constructed, playing an important part in how people perceive and interpret social behavior. Interactionists view the family as a group of role players or "actors" that come together to act out their parts in an effort to construct a family. These roles are up for interpretation. In the late nineteenth and early twentieth centuries, a "good father," for example, was one who worked hard to provide financial security for his children. Today, a "good father" is one who takes the time outside of work to promote his children's emotional well-being, social skills, and intellectual growth—in some ways, a much more daunting task.

Stereotypes, Prejudice, and Discrimination

The terms stereotype, prejudice, discrimination, and racism are often used interchangeably in everyday conversation. Let us explore the differences between these concepts. Stereotypes are oversimplified generalizations about groups of people. Stereotypes can be based on race, ethnicity, age, gender, sexual orientation—almost any characteristic. They may be positive (usually about one's own group, such as when women suggest they are less likely to complain about physical pain) but are often negative (usually toward other groups, such as when members of a dominant racial group suggest that a subordinate racial group is stupid or lazy). In either case, the stereotype is a generalization that doesn't take individual differences into account. Where do stereotypes come from? In fact, new stereotypes are rarely created; rather, they are recycled from subordinate groups that have assimilated into society and are reused to describe newly subordinate groups. For example, many stereotypes that are currently used to characterize black people were used earlier in American history to characterize Irish and Eastern European immigrants.

Prejudice and Racism

Prejudice refers to the beliefs, thoughts, feelings, and attitudes someone holds about a group. A prejudice is not based on experience; instead, it is a prejudgment, originating outside actual experience. A 1970 documentary called Eye of the Storm illustrates the way in which prejudice develops, by showing how defining one category of people as superior (children with blue eyes) results in prejudice against people who are not part of the favored category. While prejudice is not necessarily specific to race, racism is a stronger type of prejudice used to justify the belief that one racial category is somehow superior or inferior to others; it is also a set of practices used by a racial majority to disadvantage a racial minority.
The Ku Klux Klan is an example of a racist organization; its members' belief in white supremacy has encouraged over a century of hate crime and hate speech. Institutional racism refers to the way in which racism is embedded in the fabric of society. For example, the disproportionate number of black men arrested, charged, and convicted of crimes may reflect racial profiling, a form of institutional racism. Colorism is another kind of prejudice, in which someone believes one type of skin tone is superior or inferior to another within a racial group. Studies suggest that darker-skinned African Americans experience more discrimination than lighter-skinned African Americans (Herring, Keith, and Horton 2004; Klonoff and Landrine 2000). For example, if a white employer believes a black employee with a darker skin tone is less capable than a black employee with a lighter skin tone, that is colorism. At least one study suggested that colorism affected racial socialization, with darker-skinned black male adolescents receiving more warnings about the danger of interacting with members of other racial groups than did lighter-skinned black male adolescents (Landor et al. 2013).

Discrimination

While prejudice refers to biased thinking, discrimination consists of actions against a group of people. Discrimination can be based on age, religion, health, and other indicators; laws against race-based discrimination strive to address this set of social problems. Discrimination based on race or ethnicity can take many forms, from unfair housing practices to biased hiring systems. Overt discrimination has long been part of U.S. history. In the late nineteenth century, it was not uncommon for business owners to hang signs that read, "Help Wanted: No Irish Need Apply." And southern Jim Crow laws, with their "Whites Only" signs, exemplified overt discrimination that is not tolerated today. However, we cannot erase discrimination from our culture just by enacting laws to abolish it.
Even if a magic pill managed to eradicate racism from each individual's psyche, society itself would maintain it. Sociologist Émile Durkheim would call racism a social fact, meaning that it does not require the action of individuals to continue. The reasons for this are complex and relate to the educational, criminal, economic, and political systems that exist in our society. For example, when a newspaper identifies by race individuals accused of a crime, it may enhance stereotypes of a certain minority. Another example of racist practices is racial steering, in which real estate agents direct prospective homeowners toward or away from certain neighborhoods based on their race. Racist attitudes and beliefs are often more insidious and harder to pin down than specific racist practices. Prejudice and discrimination can overlap and intersect in many ways. To illustrate, here are four examples of how prejudice and discrimination can occur. Unprejudiced nondiscriminators are open-minded, tolerant, and accepting individuals. Unprejudiced discriminators might be those who unthinkingly practice sexism in their workplace by not considering women for certain positions that have traditionally been held by men. Prejudiced nondiscriminators are those who hold racist beliefs but don't act on them, such as a racist store owner who serves minority customers. Prejudiced discriminators include those who actively make disparaging remarks about others or who perpetrate hate crimes. Discrimination also manifests in different ways. The scenarios above are examples of individual discrimination, but other types exist. Institutional discrimination occurs when a societal system has developed with embedded disenfranchisement of a group, such as the U.S. military's historical nonacceptance of minority sexualities (the "don't ask, don't tell" policy reflected this norm).
Institutional discrimination can also include the promotion of a group's status, as in the case of white privilege: the benefits people receive simply by being part of the dominant group. While most white people are willing to admit that nonwhite people live with a set of disadvantages due to the color of their skin, very few are willing to acknowledge the benefits they receive.

Racial Tensions in the United States

The death of Michael Brown in Ferguson, MO on August 9, 2014 illustrates racial tensions in the United States as well as the overlap between prejudice, discrimination, and institutional racism. On that day, Brown, a young unarmed black man, was killed by a white police officer named Darren Wilson. During the incident, Wilson directed Brown and his friend to walk on the sidewalk instead of in the street. While eyewitness accounts vary, they agree that an altercation occurred between Wilson and Brown. Wilson's version has him shooting Brown in self-defense after Brown assaulted him, while Dorian Johnson, a friend of Brown also present at the time, claimed that Brown first ran away, then turned with his hands in the air to surrender, after which Wilson shot him repeatedly (Nobles and Bosman 2014). Three autopsies independently confirmed that Brown was shot six times (Lowery and Fears 2014). The shooting focused attention on a number of race-related tensions in the United States. First, members of the predominantly black community viewed Brown's death as the result of a white police officer racially profiling a black man (Nobles and Bosman 2014). In the days after, it was revealed that only three members of the town's fifty-three-member police force were black (Nobles and Bosman 2014).
The national dialogue shifted during the next few weeks, with some commentators pointing to a nationwide sedimentation of racial inequality and identifying redlining in Ferguson as a cause of the unbalanced racial composition in the community, in local political establishments, and in the police force (Bouie 2014). Redlining is the practice of routinely refusing mortgages for households and businesses located in predominately minority communities, while sedimentation of racial inequality describes the intergenerational impact of both practical and legalized racism that limits the abilities of black people to accumulate wealth. Ferguson's racial imbalance may explain in part why, even though in 2010 only about 63 percent of its population was black, in 2013 blacks were detained in 86 percent of stops, 92 percent of searches, and 93 percent of arrests (Missouri Attorney General's Office 2014). In addition, de facto segregation in Ferguson's schools, a race-based wealth gap, urban sprawl, and a black unemployment rate three times that of the white unemployment rate worsened existing racial tensions in Ferguson while also reflecting nationwide racial inequalities (Bouie 2014).

Multiple Identities

Figure 11.2 Golfer Tiger Woods has Chinese, Thai, African American, Native American, and Dutch heritage. Individuals with multiple ethnic backgrounds are becoming more common. (Photo courtesy of familymwr/flickr)

Prior to the twentieth century, racial intermarriage (referred to as miscegenation) was extremely rare, and in many places, illegal. In the later part of the twentieth century and in the twenty-first century, as Figure 11.2 shows, attitudes have changed for the better. While the sexual subordination of slaves did result in children of mixed race, these children were usually considered black, and therefore, property. There was no concept of multiple racial identities, with the possible exception of the Creole.
Creole society developed in the port city of New Orleans, where a mixed-race culture grew from French and African inhabitants. Unlike in other parts of the country, "Creoles of color" had greater social, economic, and educational opportunities than most African Americans. Increasingly during the modern era, the removal of miscegenation laws and a trend toward equal rights and legal protection against racism have steadily reduced the social stigma attached to racial exogamy (exogamy refers to marriage outside a person's core social unit). It is now common for the children of racially mixed parents to acknowledge and celebrate their various ethnic identities. Golfer Tiger Woods, for instance, has Chinese, Thai, African American, Native American, and Dutch heritage; he jokingly refers to his ethnicity as "Cablinasian," a term he coined to combine several of his ethnic backgrounds. While this is the trend, it is not yet evident in all aspects of our society. For example, the U.S. Census only recently added additional categories for people to identify themselves, such as non-white Hispanic. A growing number of people chose multiple races to describe themselves on the 2010 Census, paving the way for the 2020 Census to provide yet more choices.

BIG PICTURE

The Confederate Flag vs. the First Amendment

Figure 11.3 To some, the Confederate flag is a symbol of pride in Southern history. To others, it is a grim reminder of a degrading period of the United States' past. (Photo courtesy of Eyeliam/flickr)

In January 2006, two girls walked into Burleson High School in Texas carrying purses that displayed large images of Confederate flags. School administrators told the girls that they were in violation of the dress code, which prohibited apparel with inappropriate symbolism or clothing that discriminated based on race. To stay in school, they'd have to have someone pick up their purses or leave them in the office.
The girls chose to go home for the day but then challenged the school's decision, appealing first to the principal, then to the district superintendent, then to the U.S. District Court, and finally to the Fifth Circuit Court of Appeals. Why did the school ban the purses, and why did it stand behind that ban, even when being sued? Why did the girls, identified anonymously in court documents as A.M. and A.T., pursue such strong legal measures for their right to carry the purses? The issue, of course, is not the purses: it is the Confederate flag that adorns them. The parties in this case join a long line of people and institutions that have fought for their right to display it, saying such a display is covered by the First Amendment's guarantee of free speech. In the end, the court sided with the district and noted that the Confederate flag carried symbolism significant enough to disrupt normal school activities. While many young people in the United States like to believe that racism is mostly in the country's past, this case illustrates how racism and discrimination are quite alive today. If the Confederate flag is synonymous with slavery, is there any place for its display in modern society? Those who fight for their right to display the flag say such a display should be covered by the First Amendment: the right to free speech. But others say the flag is equivalent to hate speech. Do you think that displaying the Confederate flag should be considered free speech or hate speech?

Theoretical Perspectives on Aging

What roles do individual senior citizens play in your life? How do you relate to and interact with older people? What role do they play in neighborhoods and communities, in cities and in states? Sociologists are interested in exploring the answers to questions such as these through three different perspectives: functionalism, symbolic interactionism, and conflict theory.

Functionalism

Functionalists analyze how the parts of society work together to keep society running smoothly. How does this perspective address aging? The elderly, as a group, are one of society's vital parts. Functionalists find that people with better resources who stay active in other roles adjust better to old age (Crosnoe and Elder 2002). Three social theories within the functional perspective were developed to explain how older people might deal with later-life experiences.

Figure 13.16 Does being old mean disengaging from the world? (Photo courtesy of Candida Performa/Wikimedia Commons)

The earliest gerontological theory in the functionalist perspective is disengagement theory, which suggests that withdrawing from society and social relationships is a natural part of growing old. There are several main points to the theory. First, because everyone expects to die one day, and because we experience physical and mental decline as we approach death, it is natural to withdraw from individuals and society. Second, as the elderly withdraw, they receive less reinforcement to conform to social norms; this withdrawal therefore allows a greater freedom from the pressure to conform. Finally, social withdrawal is gendered, meaning it is experienced differently by men and women. Because men focus on work and women focus on marriage and family, when they withdraw they will be unhappy and directionless until they adopt a replacement role that is compatible with the disengaged state (Cumming and Henry 1961).
The suggestion that old age was a distinct state in the life course, characterized by a distinct change in roles and activities, was groundbreaking when it was first introduced. However, the theory is no longer accepted in its classic form. Criticisms typically focus on its claim that seniors universally and naturally withdraw from society as they age, and on its failure to allow for wide variation in the way people experience aging (Hochschild 1975). The social withdrawal that Cumming and Henry (1961) recognized, and the notion that elderly people need to find replacement roles for those they've lost, is addressed anew in activity theory. According to this theory, activity levels and social involvement are key to this process, and key to happiness (Havighurst 1961; Neugarten 1964; Havighurst, Neugarten, and Tobin 1968). According to this theory, the more active and involved an elderly person is, the happier he or she will be. Critics of this theory point out that access to social opportunities and activity is not equally available to all. Moreover, not everyone finds fulfillment in the presence of others or participation in activities. Reformulations of this theory suggest that participation in informal activities, such as hobbies, is what most affects later-life satisfaction (Lemon, Bengtson, and Petersen 1972). According to continuity theory, the elderly make specific choices to maintain consistency in internal (personality structure, beliefs) and external structures (relationships), remaining active and involved throughout their elder years. This is an attempt to maintain social equilibrium and stability by making future decisions on the basis of already developed social roles (Atchley 1971; Atchley 1989). One criticism of this theory is its emphasis on so-called "normal" aging, which marginalizes those with chronic diseases such as Alzheimer's.
SOCIOLOGY IN THE REAL WORLD

The Graying of American Prisons

Figure 13.17 Would you want to spend your retirement here? A growing elderly prison population requires asking questions about how to deal with senior inmates. (Photo courtesy of Claire Rowland/Wikimedia Commons)

Earl Grimes is a seventy-nine-year-old inmate at a state prison. He has undergone two cataract surgeries and takes about $1,000 a month worth of medication to manage a heart condition. He needs significant help moving around, which he obtains by bribing younger inmates. He is serving a life prison term for a murder he committed thirty-eight years—half a lifetime—ago (Warren 2002). Grimes' situation exemplifies the problems facing prisons today. According to a recent report released by Human Rights Watch (2012), there are now more than 124,000 prisoners age fifty-five years or older and over 26,000 prisoners age sixty-five or older in the U.S. prison population. These numbers represent an exponential rise over the last two decades. Why are U.S. prisons graying so rapidly? Two factors contribute significantly to this country's aging prison population. One is the tough-on-crime reforms of the 1980s and 1990s, when mandatory minimum sentencing and "three strikes" policies sent many people to jail for thirty years to life, even when the third strike was a relatively minor offense (Leadership Conference, n.d.). Many of today's elderly prisoners are those who were incarcerated thirty years ago for life sentences. The other factor influencing today's aging prison population is the aging of the overall population. As discussed in the section on aging in the United States, the percentage of people over sixty-five years old is increasing each year due to rising life expectancies and the aging of the baby boom generation. So why should it matter that the elderly prison population is growing so swiftly?
As discussed in the section on the process of aging, growing older is accompanied by a host of physical problems, such as declining vision, hearing, and mobility. Chronic illnesses like heart disease, arthritis, and diabetes also become increasingly common as people age, whether they are in prison or not. In many cases, elderly prisoners are physically incapable of committing a violent—or possibly any—crime. Is it ethical to keep them locked up for the short remainder of their lives? There seem to be a lot of reasons, both financial and ethical, to release some elderly prisoners to live the rest of their lives—and die—in freedom. However, few lawmakers are willing to appear soft on crime by releasing convicted felons from prison, especially if their sentence was "life without parole" (Warren 2002).

Conflict Perspective
Figure 13.18 At a public protest, older people make their voices heard. In advocating for themselves, they help shape public policy and alter the allotment of available resources. (Photo courtesy of longislandwins/flickr)

Theorists working in the conflict perspective view society as inherently unstable, an institution that privileges the powerful wealthy few while marginalizing everyone else. According to the guiding principle of conflict theory, social groups compete with other groups for power and scarce resources. Applied to society's aging population, the principle means that the elderly struggle with other groups—for example, younger society members—to retain a certain share of resources. At some point, this competition may become conflict. For example, some people complain that the elderly get more than their fair share of society's resources. In hard economic times, there is great concern about the huge costs of Social Security and Medicare. More than one of every four tax dollars, or about 28 percent, is spent on these two programs. In 1950, the federal government paid $781 million in Social Security payments. Now, the payments are 870 times higher.
In 2008, the government paid $296 billion (Statistical Abstract 2011). The medical bills of the nation's elderly population are rising dramatically. While there is more care available to certain segments of the senior community, it must be noted that the financial resources available to the aging can vary tremendously by race, social class, and gender.

There are three classic theories of aging within the conflict perspective. Modernization theory (Cowgill and Holmes 1972) suggests that the primary causes of the elderly losing power and influence in society are the parallel forces of industrialization and modernization. As societies modernize, the status of elders decreases, and they are increasingly likely to experience social exclusion. Before industrialization, strong social norms bound the younger generation to care for the older. Now, as societies industrialize, the nuclear family replaces the extended family. Societies become increasingly individualistic, and norms regarding the care of older people change. In an individualistic industrial society, caring for an elderly relative is seen as a voluntary obligation that may be ignored without fear of social censure. The central reasoning of modernization theory is that as long as the extended family is the standard family, as in preindustrial economies, elders will have a place in society and a clearly defined role. As societies modernize, the elderly, unable to work outside of the home, have less to offer economically and are seen as a burden. This model may be applied to both the developed and the developing world, and it suggests that as people age they will be abandoned and lose much of their familial support since they become a nonproductive economic burden. Another theory in the conflict perspective is age stratification theory (Riley, Johnson, and Foner 1972).
Though it may seem obvious now, with our awareness of ageism, age stratification theorists were the first to suggest that members of society might be stratified by age, just as they are stratified by race, class, and gender. Because age serves as a basis of social control, different age groups will have varying access to social resources such as political and economic power. Within societies, behavioral age norms, including norms about roles and appropriate behavior, dictate what members of age cohorts may reasonably do. For example, it might be considered deviant for an elderly woman to wear a bikini because it violates norms denying the sexuality of older females. These norms are specific to each age stratum, developing from culturally based ideas about how people should "act their age." Thanks to amendments to the Age Discrimination in Employment Act (ADEA), which drew attention to some of the ways in which our society is stratified based on age, U.S. workers no longer must retire upon reaching a specified age. As first passed in 1967, the ADEA provided protection against a broad range of age discrimination and specifically addressed termination of employment due to age, age-specific layoffs, advertised positions specifying age limits or preferences, and denial of healthcare benefits to those over sixty-five years old (U.S. EEOC 2012). Age stratification theory has been criticized for its broadness and its inattention to other sources of stratification and how these might intersect with age. For example, one might argue that an older white male occupies a more powerful role, and is far less limited in his choices, compared to an older white female, based on his historical access to political and economic power. Finally, exchange theory (Dowd 1975), a rational choice approach, suggests we experience an increased dependence as we age and must increasingly submit to the will of others because we have fewer ways of compelling others to submit to us.
Indeed, inasmuch as relationships are based on mutual exchanges, as the elderly become less able to exchange resources, they will see their social circles diminish. In this model, the only means to avoid being discarded is to engage in resource management, like maintaining a large inheritance or participating in social exchange systems via child care. In fact, the theory may depend too much on the assumption that individuals are calculating. It is often criticized for affording too much emphasis to material exchange and devaluing nonmaterial assets such as love and friendship.

Figure 13.19 The subculture of aging theory posits that the elderly create their own communities because they have been excluded from other groups. (Photo courtesy of Icnacio Palomo Duarte/flickr)

Symbolic Interactionism
Generally, theories within the symbolic interactionist perspective focus on how society is created through the day-to-day interaction of individuals, as well as the way people perceive themselves and others based on cultural symbols. This microanalytic perspective assumes that if people develop a sense of identity through their social interactions, their sense of self is dependent on those interactions. A woman whose main interactions with society make her feel old and unattractive may lose her sense of self. But a woman whose interactions make her feel valued and important will have a stronger sense of self and a happier life. Symbolic interactionists stress that the changes associated with old age, in and of themselves, have no inherent meaning. Nothing in the nature of aging creates any particular, defined set of attitudes. Rather, attitudes toward the elderly are rooted in society. One microanalytical theory is Rose's (1962) subculture of aging theory, which focuses on the shared community created by the elderly when they are excluded (due to age), voluntarily or involuntarily, from participating in other groups.
This theory suggests that elders will disengage from society and develop new patterns of interaction with peers who share common backgrounds and interests. For example, a group consciousness may develop within such groups as AARP around issues specific to the elderly like the Medicare "doughnut hole," focused on creating social and political pressure to fix those issues. Whether brought together by social or political interests, or even geographic regions, elders may find a strong sense of community with their new group. Another theory within the symbolic interaction perspective is selective optimization with compensation theory. Baltes and Baltes (1990) based their theory on the idea that successful personal development throughout the life course and subsequent mastery of the challenges associated with everyday life are based on the components of selection, optimization, and compensation. Though this happens at all stages in the life course, in the field of gerontology, researchers focus attention on balancing the losses associated with aging with the gains stemming from the same. Here, aging is a process and not an outcome, and the goals (compensation) are specific to the individual. According to this theory, our energy diminishes as we age, and we select (selection) personal goals to get the most (optimize) for the effort we put into activities, in this way making up for (compensation) the loss of a wider range of goals and activities. In this theory, the physical decline postulated by disengagement theory may result in more dependence, but that is not necessarily negative, as it allows aging individuals to save their energy for the most meaningful activities. For example, a professor who values teaching sociology may participate in a phased retirement, never entirely giving up teaching, but acknowledging personal physical limitations that allow teaching only one or two classes per year. 
Swedish sociologist Lars Tornstam developed a symbolic interactionist theory called gerotranscendence: the idea that as people age, they transcend the limited views of life they held in earlier times. Tornstam believes that throughout the aging process, the elderly become less self-centered and feel more peaceful and connected to the natural world. Wisdom comes to the elderly, Tornstam's theory states, and as the elderly tolerate ambiguities and seeming contradictions, they let go of conflict and develop softer views of right and wrong (Tornstam 2005). Tornstam does not claim that everyone will achieve wisdom in aging. Some elderly people might still grow bitter and isolated, feel ignored and left out, or become grumpy and judgmental. Symbolic interactionists believe that, just as in other phases of life, individuals must struggle to overcome their own failings and turn them into strengths.

11.1 Racial, Ethnic, and Minority Groups

While many students first entering a sociology classroom are accustomed to conflating the terms "race," "ethnicity," and "minority group," these three terms have distinct meanings for sociologists. The idea of race refers to superficial physical differences that a particular society considers significant, while ethnicity describes shared culture. And the term "minority group" describes groups that are subordinate, or that lack power in society, regardless of skin color or country of origin. For example, in modern U.S. history, the elderly might be considered a minority group due to a diminished status that results from popular prejudice and discrimination against them. Ten percent of nursing home staff admitted to physically abusing an elderly person in the past year, and 40 percent admitted to committing psychological abuse (World Health Organization 2011). In this chapter we focus on racial and ethnic minorities.

What Is Race?
Historically, the concept of race has changed across cultures and eras, eventually becoming less connected with ancestral and familial ties and more concerned with superficial physical characteristics. In the past, theorists have posited categories of race based on various geographic regions, ethnicities, skin colors, and more. Their labels for racial groups have connoted regions (Mongolia and the Caucasus Mountains, for instance) or skin tones (black, white, yellow, and red, for example). Social science organizations including the American Anthropological Association, the American Sociological Association, and the American Psychological Association have all taken an official position rejecting biological explanations of race. Over time, the typology of race that developed during early racial science has fallen into disuse, and the social construction of race is a more sociological way of understanding racial categories.
Research in this school of thought suggests that race is not biologically identifiable and that previous racial categories were arbitrarily assigned, based on pseudoscience, and used to justify racist practices (Omi and Winant 1994; Graves 2003). When considering skin color, for example, the social construction of race perspective recognizes that the relative darkness or fairness of skin is an evolutionary adaptation to the available sunlight in different regions of the world. Contemporary conceptions of race, therefore, which tend to be based on socioeconomic assumptions, illuminate how far removed modern understanding of race is from biological qualities. In modern society, some people who consider themselves "white" actually have more melanin (a pigment that determines skin color) in their skin than other people who identify as "black." Consider the case of the actress Rashida Jones. She is the daughter of a black man (Quincy Jones), and her best-known roles include Ann Perkins on Parks and Recreation, Karen Filippelli on The Office, and Zooey Rice in I Love You Man, none of whom are black characters. In some countries, such as Brazil, class is more important than skin color in determining racial categorization. People with high levels of melanin may consider themselves "white" if they enjoy a middle-class lifestyle. On the other hand, someone with low levels of melanin might be assigned the identity of "black" if he or she has little education or money. The social construction of race is also reflected in the way names for racial categories change with changing times. It's worth noting that race, in this sense, is also a system of labeling that provides a source of identity; specific labels fall in and out of favor during different social eras. For example, the category "negroid," popular in the nineteenth century, evolved into the term "negro" by the 1960s, and then this term fell from use and was replaced with "African American." 
This latter term was intended to celebrate the multiple identities that a black person might hold, but the word choice is a poor one: it lumps together a large variety of ethnic groups under an umbrella term while excluding others who could accurately be described by the label but who do not meet the spirit of the term. For example, actress Charlize Theron is a blonde-haired, blue-eyed "African American." She was born in South Africa and later became a U.S. citizen. Is her identity that of an "African American" as most of us understand the term?

What Is Ethnicity?
Ethnicity is a term that describes shared culture—the practices, values, and beliefs of a group. This culture might include shared language, religion, and traditions, among other commonalities. Like race, the term ethnicity is difficult to describe and its meaning has changed over time. And as with race, individuals may be identified or self-identify with ethnicities in complex, even contradictory, ways. For example, ethnic groups such as Irish, Italian American, Russian, Jewish, and Serbian might all be groups whose members are predominantly included in the "white" racial category. Conversely, the ethnic group British includes citizens from a multiplicity of racial backgrounds: black, white, Asian, and more, plus a variety of race combinations. These examples illustrate the complexity and overlap of these identifying terms. Ethnicity, like race, continues to be an identification method that individuals and institutions use today—whether through the census, affirmative action initiatives, nondiscrimination laws, or simply in personal day-to-day relations.

What Are Minority Groups?
Sociologist Louis Wirth (1945) defined a minority group as "any group of people who, because of their physical or cultural characteristics, are singled out from the others in the society in which they live for differential and unequal treatment, and who therefore regard themselves as objects of collective discrimination."
The term minority connotes discrimination, and in its sociological use, the term subordinate group can be used interchangeably with the term minority, while the term dominant group is often substituted for the group that's in the majority. These definitions correlate to the concept that the dominant group is that which holds the most power in a given society, while subordinate groups are those who lack power compared to the dominant group. Note that being a numerical minority is not a characteristic of being a minority group; sometimes larger groups can be considered minority groups due to their lack of power. It is the lack of power that is the predominant characteristic of a minority, or subordinate, group. For example, consider apartheid in South Africa, in which a numerical majority (the black inhabitants of the country) were exploited and oppressed by the white minority. According to Charles Wagley and Marvin Harris (1958), a minority group is distinguished by five characteristics: (1) unequal treatment and less power over their lives, (2) distinguishing physical or cultural traits like skin color or language, (3) involuntary membership in the group, (4) awareness of subordination, and (5) a high rate of in-group marriage. Additional examples of minority groups might include the LGBT community, religious practitioners whose faith is not widely practiced where they live, and people with disabilities.

Scapegoat theory, developed initially from Dollard's (1939) frustration-aggression theory, suggests that the dominant group will displace its unfocused aggression onto a subordinate group. History has shown us many examples of the scapegoating of a subordinate group. An example from the last century is the way Adolf Hitler was able to blame the Jewish population for Germany's social and economic problems. In the United States, recent immigrants have frequently been the scapegoat for the nation's—or an individual's—woes.
Many states have enacted laws to disenfranchise immigrants; these laws are popular because they let the dominant group scapegoat a subordinate group.

gender

Gender and Socialization
The phrase "boys will be boys" is often used to justify behavior such as pushing, shoving, or other forms of aggression from young boys. The phrase implies that such behavior is unchangeable and something that is part of a boy's nature. Aggressive behavior, when it does not inflict significant harm, is often accepted from boys and men because it is congruent with the cultural script for masculinity. The "script" written by society is in some ways similar to a script written by a playwright. Just as a playwright expects actors to adhere to a prescribed script, society expects women and men to behave according to the expectations of their respective gender roles. Scripts are generally learned through a process known as socialization, which teaches people to behave according to social norms.

Socialization
Children learn at a young age that there are distinct expectations for boys and girls. Cross-cultural studies reveal that children are aware of gender roles by age two or three. At four or five, most children are firmly entrenched in culturally appropriate gender roles (Kane 1996). Children acquire these roles through socialization, a process in which people learn to behave in a particular way as dictated by societal values, beliefs, and attitudes. For example, society often views riding a motorcycle as a masculine activity and, therefore, considers it to be part of the male gender role. Attitudes such as this are typically based on stereotypes, oversimplified notions about members of a group. Gender stereotyping involves overgeneralizing about the attitudes, traits, or behavior patterns of women or men. For example, women may be thought of as too timid or weak to ride a motorcycle.

Figure 12.7 Although our society may have a stereotype that associates motorcycles with men, female bikers demonstrate that a woman's place extends far beyond the kitchen in the modern United States.
(Photo courtesy of Robert Couse-Baker/flickr)

Gender stereotypes form the basis of sexism. Sexism refers to prejudiced beliefs that value one sex over another. It varies in its level of severity. In parts of the world where women are strongly undervalued, young girls may not be given the same access to nutrition, healthcare, and education as boys. Further, they will grow up believing they deserve to be treated differently from boys (UNICEF 2011; Thorne 1993). While unequal treatment of women is illegal in the United States when practiced as discrimination, it continues to pervade social life. It should be noted that discrimination based on sex occurs at both the micro- and macro-levels. Many sociologists focus on discrimination that is built into the social structure; this type of discrimination is known as institutional discrimination (Pincus 2008).

Gender socialization occurs through four major agents of socialization: family, education, peer groups, and mass media. Each agent reinforces gender roles by creating and maintaining normative expectations for gender-specific behavior. Exposure also occurs through secondary agents such as religion and the workplace. Repeated exposure to these agents over time leads men and women into a false sense that they are acting naturally rather than following a socially constructed role.

Family is the first agent of socialization. There is considerable evidence that parents socialize sons and daughters differently. Generally speaking, girls are given more latitude to step outside of their prescribed gender role (Coltrane and Adams 2004; Kimmel 2000; Raffaelli and Ontai 2004). However, differential socialization typically results in greater privileges afforded to sons. For instance, boys are allowed more autonomy and independence at an earlier age than daughters. They may be given fewer restrictions on appropriate clothing, dating habits, or curfew.
Sons are also often free from performing domestic duties such as cleaning or cooking and other household tasks that are considered feminine. Daughters are limited by the expectation that they be passive and nurturing, generally obedient, and that they assume many of the domestic responsibilities. Even when parents set gender equality as a goal, there may be underlying indications of inequality. For example, boys may be asked to take out the garbage or perform other tasks that require strength or toughness, while girls may be asked to fold laundry or perform duties that require neatness and care. It has been found that fathers are firmer in their expectations for gender conformity than are mothers, and their expectations are stronger for sons than they are for daughters (Kimmel 2000). This is true in many types of activities, including preference for toys, play styles, discipline, chores, and personal achievements. As a result, boys tend to be particularly attuned to their father's disapproval when engaging in an activity that might be considered feminine, like dancing or singing (Coltrane and Adams 2008). Parental socialization and normative expectations also vary along lines of social class, race, and ethnicity. Black families, for instance, are more likely than White families to model an egalitarian role structure for their children (Staples and Boulin Johnson 2004).

The reinforcement of gender roles and stereotypes continues once a child reaches school age. Until very recently, schools were rather explicit in their efforts to stratify boys and girls. The first step toward stratification was segregation. Girls were encouraged to take home economics or humanities courses and boys to take math and science. Studies suggest that gender socialization still occurs in schools today, perhaps in less obvious forms (Lips 2004). Teachers may not even realize they are acting in ways that reproduce gender-differentiated behavior patterns.
Yet any time they ask students to arrange their seats or line up according to gender, teachers may be asserting that boys and girls should be treated differently (Thorne 1993). Even in levels as low as kindergarten, schools subtly convey messages to girls indicating that they are less intelligent or less important than boys. For example, in a study of teacher responses to male and female students, data indicated that teachers praised male students far more than female students. Teachers interrupted girls more often and gave boys more opportunities to expand on their ideas (Sadker and Sadker 1994). Further, in social as well as academic situations, teachers have traditionally treated boys and girls in opposite ways, reinforcing a sense of competition rather than collaboration (Thorne 1993). Boys are also permitted a greater degree of freedom to break rules or commit minor acts of deviance, whereas girls are expected to follow rules carefully and adopt an obedient role (Ready 2001). Mimicking the actions of significant others is the first step in the development of a separate sense of self (Mead 1934). Like adults, children become agents who actively facilitate and apply normative gender expectations to those around them. When children do not conform to the appropriate gender role, they may face negative sanctions such as being criticized or marginalized by their peers. Though many of these sanctions are informal, they can be quite severe. For example, a girl who wishes to take karate class instead of dance lessons may be called a "tomboy" and face difficulty gaining acceptance from both male and female peer groups (Ready 2001). Boys, especially, are subject to intense ridicule for gender nonconformity (Coltrane and Adams 2004; Kimmel 2000). Mass media serves as another significant agent of gender socialization. In television and movies, women tend to have less significant roles and are often portrayed as wives or mothers. 
When women are given a lead role, it often falls into one of two extremes: a wholesome, saint-like figure or a malevolent, hypersexual figure (Etaugh and Bridges 2003). This same inequality is pervasive in children's movies (Smith 2008). Research indicates that in the ten top-grossing G-rated movies released between 1991 and 2013, nine out of ten characters were male (Smith 2008). Television commercials and other forms of advertising also reinforce inequality and gender-based stereotypes. Women are almost exclusively present in ads promoting cooking, cleaning, or childcare-related products (Davis 1993). Think about the last time you saw a man star in a dishwasher or laundry detergent commercial. In general, women are underrepresented in roles that involve leadership, intelligence, or a balanced psyche. Of particular concern is the depiction of women in ways that are dehumanizing, especially in music videos. Even in mainstream advertising, however, themes intermingling violence and sexuality are quite common (Kilbourne 2000).

Social Stratification and Inequality
Stratification refers to a system in which groups of people experience unequal access to basic, yet highly valuable, social resources. The United States is characterized by gender stratification (as well as stratification of race, income, occupation, and the like). Evidence of gender stratification is especially keen within the economic realm. Although women make up nearly half (49.8 percent) of payroll employment, men vastly outnumber women in authoritative, powerful, and, therefore, high-earning jobs (U.S. Census Bureau 2010). Even when a woman's employment status is equal to a man's, she will generally make only 77 cents for every dollar made by her male counterpart (U.S. Census Bureau 2010). Women in the paid labor force also still do the majority of the unpaid work at home. On an average day, 84 percent of women (compared to 67 percent of men) spend time doing household management activities (U.S.
Census Bureau 2011). This double duty keeps working women in a subordinate role in the family structure (Hochschild and Machung 1989).

Gender stratification through the division of labor is not exclusive to the United States. According to George Murdock's classic work, Outline of World Cultures (1954), all societies classify work by gender. When a pattern appears in all societies, it is called a cultural universal. While the phenomenon of assigning work by gender is universal, its specifics are not. The same task is not assigned to either men or women worldwide. But the way each task's associated gender is valued is notable. In Murdock's examination of the division of labor among 324 societies around the world, he found that in nearly all cases the jobs assigned to men were given greater prestige (Murdock and White 1968). Even if the job types were very similar and the differences slight, men's work was still considered more vital.

There is a long history of gender stratification in the United States. When looking to the past, it would appear that society has made great strides in terms of abolishing some of the most blatant forms of gender inequality (see timeline below), but underlying effects of male dominance still permeate many aspects of society.

Before 1809—Women could not execute a will
Before 1840—Women were not allowed to own or control property
Before 1920—Women were not permitted to vote
Before 1963—Employers could legally pay a woman less than a man for the same work
Before 1973—Women did not have the right to a safe and legal abortion (Imbornoni 2009)

Figure 12.8 In some cultures, women do all of the household chores with no help from men, as doing housework is a sign of weakness, considered by society as a feminine trait. (Photo courtesy of Evil Erin/flickr)

Theoretical Perspectives on Gender
Sociological theories help sociologists to develop questions and interpret data.
For example, a sociologist studying why middle-school girls are more likely than their male counterparts to fall behind grade-level expectations in math and science might use a feminist perspective to frame her research. Another scholar might proceed from the conflict perspective to investigate why women are underrepresented in political office, and an interactionist might examine how the symbols of femininity interact with symbols of political authority to affect how women in Congress are treated by their male counterparts in meetings.

Structural Functionalism
Structural functionalism has provided one of the most important perspectives of sociological research in the twentieth century and has been a major influence on research in the social sciences, including gender studies. Because this perspective views the family as the most integral component of society, assumptions about gender roles within marriage assume a prominent place in it. Functionalists argue that gender roles were established well before the pre-industrial era, when men typically took care of responsibilities outside of the home, such as hunting, and women typically took care of the domestic responsibilities in or around the home. These roles were considered functional because women were often limited by the physical restraints of pregnancy and nursing and unable to leave the home for long periods of time. Once established, these roles were passed on to subsequent generations, since they served as an effective means of keeping the family system functioning properly. When changes occurred in the social and economic climate of the United States during World War II, changes in the family structure also occurred. Many women had to assume the role of breadwinner (or modern hunter-gatherer) alongside their domestic role in order to stabilize a rapidly changing society.
When the men returned from war and wanted to reclaim their jobs, society fell back into a state of imbalance, as many women did not want to forfeit their wage-earning positions (Hawke 2007).

Conflict Theory

According to conflict theory, society is a struggle for dominance among social groups (like women versus men) that compete for scarce resources. When sociologists examine gender from this perspective, we can view men as the dominant group and women as the subordinate group. According to conflict theory, social problems are created when dominant groups exploit or oppress subordinate groups. Consider the Women's Suffrage Movement or the debate over women's "right to choose" their reproductive futures. It is difficult for women to rise above men, as dominant group members create the rules for success and opportunity in society (Farrington and Chertok 1993). Friedrich Engels, a German sociologist, studied family structure and gender roles. Engels suggested that the same owner-worker relationship seen in the labor force is also seen in the household, with women assuming the role of the proletariat. This is due to women's dependence on men for the attainment of wages, which is even worse for women who are entirely dependent upon their spouses for economic support. Contemporary conflict theorists suggest that when women become wage earners, they can gain power in the family structure and create more democratic arrangements in the home, although they may still carry the majority of the domestic burden, as noted earlier (Risman and Johnson-Sumerford 1998).

Feminist Theory

Feminist theory is a type of conflict theory that examines inequalities in gender-related issues. It uses the conflict approach to examine the maintenance of gender roles and inequalities. Radical feminism, in particular, considers the role of the family in perpetuating male dominance. In patriarchal societies, men's contributions are seen as more valuable than those of women.
Patriarchal perspectives and arrangements are widespread and taken for granted. As a result, women's viewpoints tend to be silenced or marginalized to the point of being discredited or considered invalid. Sanday's study of the Indonesian Minangkabau (2004) revealed that in societies some consider to be matriarchies (where women comprise the dominant group), women and men tend to work cooperatively rather than competitively regardless of whether a job is considered feminine by U.S. standards. The men, however, do not experience the sense of bifurcated consciousness under this social structure that modern U.S. females encounter (Sanday 2004).

Symbolic Interactionism

Symbolic interactionism aims to understand human behavior by analyzing the critical role of symbols in human interaction. This is certainly relevant to the discussion of masculinity and femininity. Imagine that you walk into a bank hoping to get a small loan for school, a home, or a small business venture. If you meet with a male loan officer, you may state your case logically by listing all the hard numbers that make you a qualified applicant as a means of appealing to the analytical characteristics associated with masculinity. If you meet with a female loan officer, you may make an emotional appeal by stating your good intentions as a means of appealing to the caring characteristics associated with femininity. Because the meanings attached to symbols are socially created and not natural, and fluid, not static, we act and react to symbols based on the current assigned meaning. The word gay, for example, once meant "cheerful," but by the 1960s it carried the primary meaning of "homosexual." In transition, it was even known to mean "careless" or "bright and showy" (Oxford American Dictionary 2010). Furthermore, the word gay (as it refers to a homosexual) carried a somewhat negative and unfavorable meaning fifty years ago, but it has since gained more neutral and even positive connotations.
When people perform tasks or possess characteristics based on the gender role assigned to them, they are said to be doing gender. This notion is based on the work of West and Zimmerman (1987). Whether we are expressing our masculinity or femininity, West and Zimmerman argue, we are always "doing gender." Thus, gender is something we do or perform, not something we are. In other words, both gender and sexuality are socially constructed. The social construction of sexuality refers to the way in which socially created definitions about the cultural appropriateness of sex-linked behavior shape the way people see and experience sexuality. This is in marked contrast to theories of sex, gender, and sexuality that link male and female behavior to biological determinism, or the belief that men and women behave differently due to differences in their biology.

SOCIOLOGICAL RESEARCH
Being Male, Being Female, and Being Healthy

In 1971, Broverman and Broverman conducted a groundbreaking study on the traits mental health workers ascribed to males and females. When asked to name the characteristics of a female, the list featured words such as unaggressive, gentle, emotional, tactful, less logical, not ambitious, dependent, passive, and neat. The list of male characteristics featured words such as aggressive, rough, unemotional, blunt, logical, direct, active, and sloppy (Seem and Clark 2006). Later, when asked to describe the characteristics of a healthy person (not gender specific), the list was nearly identical to that of a male. This study uncovered the general assumption that being female is associated with being somewhat unhealthy or not of sound mind. This concept seems extremely dated, but in 2006, Seem and Clark replicated the study and found similar results. Again, the characteristics associated with a healthy male were very similar to those of a healthy (genderless) adult.
The list of characteristics associated with being female broadened somewhat but did not show significant change from the original study (Seem and Clark 2006). This interpretation of feminine characteristics may help us one day better understand gender disparities in certain illnesses, such as why one in eight women can be expected to develop clinical depression in her lifetime (National Institute of Mental Health 1999). Perhaps these diagnoses are not just a reflection of women's health, but also a reflection of society's labeling of female characteristics, or the result of institutionalized sexism.

Theories of Race and Ethnicity

Theoretical Perspectives

We can examine issues of race and ethnicity through three major sociological perspectives: functionalism, conflict theory, and symbolic interactionism. As you read through these theories, ask yourself which one makes the most sense and why. Do we need more than one theory to explain racism, prejudice, stereotypes, and discrimination?

Functionalism

In the view of functionalism, racial and ethnic inequalities must have served an important function in order to exist as long as they have. This concept, of course, is problematic. How can racism and discrimination contribute positively to society? A functionalist might look at "functions" and "dysfunctions" caused by racial inequality. Nash (1964) focused his argument on the way racism is functional for the dominant group, for example, suggesting that racism morally justifies a racially unequal society. Consider the way slave owners justified slavery in the antebellum South, by suggesting black people were fundamentally inferior to white people and preferred slavery to freedom. Another way to apply the functionalist perspective to racism is to discuss the way racism can contribute positively to the functioning of society by strengthening bonds between in-group members through the ostracism of out-group members. Consider how a community might increase solidarity by refusing to allow outsiders access. On the other hand, Rose (1951) suggested that dysfunctions associated with racism include the failure to take advantage of talent in the subjugated group, and that society must divert from other purposes the time and effort needed to maintain artificially constructed racial boundaries. Consider how much money, time, and effort went toward maintaining separate and unequal educational systems prior to the civil rights movement.

Conflict Theory

Conflict theories are often applied to inequalities of gender, social class, education, race, and ethnicity. A conflict theory perspective of U.S.
history would examine the numerous past and current struggles between the white ruling class and racial and ethnic minorities, noting specific conflicts that have arisen when the dominant group perceived a threat from the minority group. In the late nineteenth century, the rising power of black Americans after the Civil War resulted in draconian Jim Crow laws that severely limited black political and social power. For example, Vivien Thomas (1910-1985), the black surgical technician who helped develop the groundbreaking surgical technique that saves the lives of "blue babies," was classified as a janitor for many years, and paid as such, despite the fact that he was conducting complicated surgical experiments. The years since the Civil War have shown a pattern of attempted disenfranchisement, with gerrymandering and voter suppression efforts aimed at predominantly minority neighborhoods. Feminist sociologist Patricia Hill Collins (1990) further developed intersection theory, originally articulated in 1989 by Kimberlé Crenshaw, which suggests we cannot separate the effects of race, class, gender, sexual orientation, and other attributes. When we examine race and how it can bring us both advantages and disadvantages, it is important to acknowledge that the way we experience race is shaped, for example, by our gender and class. Multiple layers of disadvantage intersect to create the way we experience race. For example, if we want to understand prejudice, we must understand that the prejudice focused on a white woman because of her gender is very different from the layered prejudice focused on a poor Asian woman, who is affected by stereotypes related to being poor, being a woman, and her ethnic status.

Interactionism

For symbolic interactionists, race and ethnicity provide strong symbols as sources of identity. In fact, some interactionists propose that the symbols of race, not race itself, are what lead to racism.
Famed interactionist Herbert Blumer (1958) suggested that racial prejudice is formed through interactions between members of the dominant group: without these interactions, individuals in the dominant group would not hold racist views. These interactions contribute to an abstract picture of the subordinate group that allows the dominant group to support its view of the subordinate group, and thus maintains the status quo. An example of this might be an individual whose beliefs about a particular group are based on images conveyed in popular media, and who accepts those images unquestioningly because he or she has never personally met a member of that group. Another way to apply the interactionist perspective is to look at how people define their races and the race of others. As we discussed in relation to the social construction of race, since some people who claim a white identity have a greater amount of skin pigmentation than some people who claim a black identity, how did they come to define themselves as black or white?

Culture of Prejudice

Culture of prejudice refers to the theory that prejudice is embedded in our culture. We grow up surrounded by images of stereotypes and casual expressions of racism and prejudice. Consider the casually racist imagery on grocery store shelves or the stereotypes that fill popular movies and advertisements. It is easy to see how someone living in the Northeastern United States, who may know no Mexican Americans personally, might gain a stereotyped impression from such sources as Speedy Gonzalez or Taco Bell's talking Chihuahua. Because we are all exposed to these images and thoughts, it is impossible to know to what extent they have influenced our thought processes.

What Is Marriage? What Is Family?

Marriage and family are key structures in most societies. While the two institutions have historically been closely linked in U.S. culture, their connection is becoming more complex. The relationship between marriage and family is an interesting topic of study to sociologists. What is marriage? Different people define it in different ways. Not even sociologists are able to agree on a single meaning. For our purposes, we'll define marriage as a legally recognized social contract between two people, traditionally based on a sexual relationship and implying a permanence of the union. In practicing cultural relativism, we should also consider variations, such as whether a legal union is required (think of "common law" marriage and its equivalents), or whether more than two people can be involved (consider polygamy). Other variations on the definition of marriage might include whether spouses are of opposite sexes or the same sex and how one of the traditional expectations of marriage (to produce children) is understood today. Sociologists are interested in the relationship between the institution of marriage and the institution of family because, historically, marriages are what create a family, and families are the most basic social unit upon which society is built. Both marriage and family create status roles that are sanctioned by society. So what is a family? A husband, a wife, and two children—maybe even a pet—has served as the model for the traditional U.S. family for most of the twentieth century. But what about families that deviate from this model, such as a single-parent household or a homosexual couple without children? Should they be considered families as well? The question of what constitutes a family is a prime area of debate in family sociology, as well as in politics and religion. Social conservatives tend to define the family in terms of structure with each family member filling a certain role (like father, mother, or child). 
Sociologists, on the other hand, tend to define family more in terms of the manner in which members relate to one another than on a strict configuration of status roles. Here, we'll define family as a socially recognized group (usually joined by blood, marriage, cohabitation, or adoption) that forms an emotional connection and serves as an economic unit of society. Sociologists identify different types of families based on how one enters into them. A family of orientation refers to the family into which a person is born. A family of procreation describes one that is formed through marriage. These distinctions have cultural significance related to issues of lineage. Drawing on two sociological paradigms, the sociological understanding of what constitutes a family can be explained by symbolic interactionism as well as functionalism. These two theories indicate that families are groups in which participants view themselves as family members and act accordingly. In other words, families are groups in which people come together to form a strong primary group connection and maintain emotional ties to one another over a long period of time. Such families may include groups of close friends or teammates. In addition, the functionalist perspective views families as groups that perform vital roles for society—both internally (for the family itself) and externally (for society as a whole). Families provide for one another's physical, emotional, and social well-being. Parents care for and socialize children. Later in life, adult children often care for elderly parents. While interactionism helps us understand the subjective experience of belonging to a "family," functionalism illuminates the many purposes of families and their roles in the maintenance of a balanced society (Parsons and Bales 1956). We will go into more detail about how these theories apply to family in the following pages. 
Challenges Families Face

People in the United States as a whole are somewhat divided when it comes to determining what does and what does not constitute a family. In a 2010 survey conducted by professors at Indiana University, nearly all participants (99.8 percent) agreed that a husband, wife, and children constitute a family. Ninety-two percent stated that a husband and a wife without children still constitute a family. The numbers drop for less traditional structures: unmarried couples with children (83 percent), unmarried couples without children (39.6 percent), gay male couples with children (64 percent), and gay male couples without children (33 percent) (Powell et al. 2010). This survey revealed that children tend to be the key indicator in establishing "family" status: the percentage of individuals who agreed that unmarried couples and gay couples constitute a family nearly doubled when children were added. The study also revealed that 60 percent of U.S. respondents agreed that if you consider yourself a family, you are a family (a concept that reinforces an interactionist perspective) (Powell 2010). The government, however, is not so flexible in its definition of "family." The U.S. Census Bureau defines a family as "a group of two people or more (one of whom is the householder) related by birth, marriage, or adoption and residing together" (U.S. Census Bureau 2010). While this structured definition can be used as a means to consistently track family-related patterns over several years, it excludes individuals such as cohabitating unmarried heterosexual and homosexual couples. Legality aside, sociologists would argue that the general concept of family is more diverse and less structured than in years past. Society has given more leeway to the design of a family, making room for what works for its members (Jayson 2010).
Family is, indeed, a subjective concept, but it is a fairly objective fact that family (whatever one's concept of it may be) is very important to people in the United States. In a 2010 survey by the Pew Research Center in Washington, DC, 76 percent of adults surveyed stated that family is "the most important" element of their life—just one percent said it was "not important" (Pew Research Center 2010). It is also very important to society. President Ronald Reagan notably stated, "The family has always been the cornerstone of American society. Our families nurture, preserve, and pass on to each succeeding generation the values we share and cherish, values that are the foundation of our freedoms" (Lee 2009). While the design of the family may have changed in recent years, the fundamentals of emotional closeness and support are still present. Most respondents to the Pew survey stated that their family today is at least as close (45 percent) or closer (40 percent) than the family with which they grew up (Pew Research Center 2010). Alongside the debate surrounding what constitutes a family is the question of what people in the United States believe constitutes a marriage. Many religious and social conservatives believe that marriage can only exist between a man and a woman, citing religious scripture and the basics of human reproduction as support. Social liberals and progressives, on the other hand, believe that marriage can exist between two consenting adults—be they a man and a woman, or a woman and a woman—and that it would be discriminatory to deny such a couple the civil, social, and economic benefits of marriage.

Marriage Patterns

With single parenting and cohabitation (when a couple shares a residence but not a marriage) becoming more acceptable in recent years, people may be less motivated to get married. In a recent survey, 39 percent of respondents answered "yes" when asked whether marriage is becoming obsolete (Pew Research Center 2010).
The institution of marriage is likely to continue, but some previous patterns of marriage will become outdated as new patterns emerge. In this context, cohabitation contributes to the phenomenon of people getting married for the first time at a later age than was typical in earlier generations (Glezer 1991). Furthermore, marriage will continue to be delayed as more people place education and career ahead of "settling down."

One Partner or Many?

People in the United States typically equate marriage with monogamy, when someone is married to only one person at a time. In many countries and cultures around the world, however, having one spouse is not the only form of marriage. In a majority of cultures (78 percent), polygamy, or being married to more than one person at a time, is accepted (Murdock 1967), with most polygamous societies existing in northern Africa and east Asia (Altman and Ginat 1996). Instances of polygamy are almost exclusively in the form of polygyny. Polygyny refers to a man being married to more than one woman at the same time. The reverse, when a woman is married to more than one man at the same time, is called polyandry. It is far less common and occurs in only about 1 percent of the world's cultures (Altman and Ginat 1996). The reasons for the overwhelming prevalence of polygamous societies are varied, but they often include issues of population growth, religious ideologies, and social status. While the majority of societies accept polygyny, the majority of people do not practice it. Often fewer than 10 percent (and no more than 25-35 percent) of men in polygamous cultures have more than one wife; these husbands are often older, wealthy, high-status men (Altman and Ginat 1996). The average plural marriage involves no more than three wives. Negev Bedouin men in Israel, for example, typically have two wives, although it is acceptable to have up to four (Griver 2008).
As urbanization increases in these cultures, polygamy is likely to decrease as a result of greater access to mass media, technology, and education (Altman and Ginat 1996). In the United States, polygamy is considered by most to be socially unacceptable, and it is illegal. The act of entering into marriage while still married to another person is referred to as bigamy and is considered a felony in most states. Polygamy in the United States is often associated with members of the Church of Jesus Christ of Latter-day Saints, although the church officially renounced polygamy in 1890. Fundamentalist Mormons, such as those in the Fundamentalist Church of Jesus Christ of Latter-day Saints (a separate and distinct organization), on the other hand, still hold tightly to the historic beliefs and practices and allow polygamy in their sect. The prevalence of polygamy among Latter-day Saints is often overestimated due to sensational media stories such as the Yearning for Zion ranch raid in Texas in 2008 and popular television shows such as HBO's Big Love and TLC's Sister Wives. It is estimated that there are about 37,500 fundamentalist Mormons involved in polygamy in the United States, Canada, and Mexico, but that number has shown a steady decrease in the last 100 years (Useem 2007). U.S. Muslims, however, are an emerging group with an estimated 20,000 practicing polygamy. Again, polygamy among U.S. Muslims is uncommon and occurs only in approximately 1 percent of the population (Useem 2007). For now polygamy among U.S. Muslims has gone fairly unnoticed by mainstream society, but like fundamentalist Mormons whose practices were off the public's radar for decades, they may someday find themselves at the center of social debate.

Figure 14.3 Joseph Smith, Jr., the founder of Mormonism, is said to have practiced polygamy.
(Photo courtesy of public domain/Wikimedia Commons)

Residency and Lines of Descent

When considering one's lineage, most people in the United States look to both their father's and mother's sides. Both paternal and maternal ancestors are considered part of one's family. This pattern of tracing kinship is called bilateral descent. Note that kinship, or one's traceable ancestry, can be based on blood, marriage, or adoption. Sixty percent of societies, mostly modernized nations, follow a bilateral descent pattern. Unilateral descent (the tracing of kinship through one parent only) is practiced in the other 40 percent of the world's societies, with high concentration in pastoral cultures (O'Neal 2006). There are three types of unilateral descent: patrilineal, which follows the father's line only; matrilineal, which follows the mother's side only; and ambilineal, which follows either the father's side only or the mother's side only, depending on the situation. In patrilineal societies, such as those in rural China and India, only males carry on the family surname. This gives males the prestige of permanent family membership while females are seen as only temporary members (Harrell 2001). U.S. society assumes some aspects of patrilineal descent. For instance, most children assume their father's last name even if the mother retains her birth name. In matrilineal societies, inheritance and family ties are traced to women. Matrilineal descent is common in Native American societies, notably the Crow and Cherokee tribes. In these societies, children are seen as belonging to the women and, therefore, one's kinship is traced to one's mother, grandmother, great-grandmother, and so on (Mails 1996). In ambilineal societies, which are most common in Southeast Asian countries, parents may choose to associate their children with the kinship of either the mother or the father.
This choice may be based on the desire to follow stronger or more prestigious kinship lines or on cultural customs such as men following their father's side and women following their mother's side (Lambert 2009). Tracing one's line of descent to one parent rather than the other can be relevant to the issue of residence. In many cultures, newly married couples move in with, or near to, family members. In a patrilocal residence system it is customary for the wife to live with (or near) her husband's blood relatives (or family of orientation). Patrilocal systems can be traced back thousands of years. In a DNA analysis of 4,600-year-old bones found in Germany, scientists found indicators of patrilocal living arrangements (Haak et al. 2008). Patrilocal residence is thought to be disadvantageous to women because it makes them outsiders in the home and community; it also keeps them disconnected from their own blood relatives. In China, where patrilocal and patrilineal customs are common, the written symbols for maternal grandmother (wàipó) are separately translated to mean "outsider" and "women" (Cohen 2011). Similarly, in matrilocal residence systems, where it is customary for the husband to live with his wife's blood relatives (or her family of orientation), the husband can feel disconnected and can be labeled as an outsider. The Minangkabau people, a matrilocal society that is indigenous to the highlands of West Sumatra in Indonesia, believe that home is the place of women and they give men little power in issues relating to the home or family (Joseph and Najmabadi 2003). Most societies that use patrilocal and patrilineal systems are patriarchal, but very few societies that use matrilocal and matrilineal systems are matriarchal, as family life is often considered an important part of the culture for women, regardless of their power relative to men.

Stages of Family Life

As we've established, the concept of family has changed greatly in recent decades.
Historically, it was often thought that many families evolved through a series of predictable stages. Developmental or "stage" theories used to play a prominent role in family sociology (Strong and DeVault 1992). Today, however, these models have been criticized for their linear and conventional assumptions as well as for their failure to capture the diversity of family forms. While reviewing some of these once-popular theories, it is important to identify their strengths and weaknesses. The set of predictable steps and patterns families experience over time is referred to as the family life cycle. One of the first designs of the family life cycle was developed by Paul Glick in 1955. In Glick's original design, he asserted that most people will grow up, establish families, rear and launch their children, experience an "empty nest" period, and come to the end of their lives. This cycle will then continue with each subsequent generation (Glick 1989). Glick's colleague, Evelyn Duvall, elaborated on the family life cycle by developing these classic stages of family (Strong and DeVault 1992):

Stage  Family Type         Children
1      Marriage Family     Childless
2      Procreation Family  Children ages 0 to 2.5
3      Preschooler Family  Children ages 2.5 to 6
4      School-age Family   Children ages 6 to 13
5      Teenage Family      Children ages 13 to 20
6      Launching Family    Children begin to leave home
7      Empty Nest Family   "Empty nest"; adult children have left home

Table 14.1 Stage Theory. This table shows one example of how a "stage" theory might categorize the phases a family goes through.

The family life cycle was used to explain the different processes that occur in families over time. Sociologists view each stage as having its own structure with different challenges, achievements, and accomplishments that transition the family from one stage to the next.
For example, the problems and challenges that a family experiences in Stage 1 as a married couple with no children are likely much different than those experienced in Stage 5 as a married couple with teenagers. The success of a family can be measured by how well they adapt to these challenges and transition into each stage. While sociologists use the family life cycle to study the dynamics of family over time, consumer and marketing researchers have used it to determine what goods and services families need as they progress through each stage (Murphy and Staples 1979). As early "stage" theories have been criticized for generalizing family life and not accounting for differences in gender, ethnicity, culture, and lifestyle, less rigid models of the family life cycle have been developed. One example is the family life course, which recognizes the events that occur in the lives of families but views them as parts of a fluid course rather than as consecutive stages (Strong and DeVault 1992). This type of model accounts for changes in family development, such as the fact that in today's society, childbearing does not always occur with marriage. It also sheds light on other shifts in the way family life is practiced. Society's modern understanding of family rejects rigid "stage" theories and is more accepting of new, fluid models.

SOCIOLOGY IN THE REAL WORLD
The Evolution of Television Families

Whether you grew up watching the Cleavers, the Waltons, the Huxtables, or the Simpsons, most of the iconic families you saw in television sitcoms included a father, a mother, and children cavorting under the same roof while comedy ensued. The 1960s was the height of the suburban U.S. nuclear family on television with shows such as The Donna Reed Show and Father Knows Best. While some shows of this era portrayed single parents (My Three Sons and Bonanza, for instance), the single status almost always resulted from being widowed—not divorced or unwed.
Although family dynamics in real U.S. homes were changing, the expectations for families portrayed on television were not. The United States' first reality show, An American Family (which aired on PBS in 1973), chronicled Bill and Pat Loud and their children as a "typical" U.S. family. During the series, the oldest son, Lance, announced to the family that he was gay, and at the series' conclusion, Bill and Pat decided to divorce. Although the Louds' union was among the 30 percent of marriages that ended in divorce in 1973, the family was featured on the cover of the March 12 issue of Newsweek with the title "The Broken Family" (Ruoff 2002). Less traditional family structures in sitcoms gained popularity in the 1980s with shows such as Diff'rent Strokes (a widowed man with two adopted African American sons) and One Day at a Time (a divorced woman with two teenage daughters). Still, traditional families such as those in Family Ties and The Cosby Show dominated the ratings. The late 1980s and the 1990s saw the introduction of the dysfunctional family. Shows such as Roseanne, Married with Children, and The Simpsons portrayed traditional nuclear families, but in a much less flattering light than those from the 1960s did (Museum of Broadcast Communications 2011). Over the past ten years, the nontraditional family has become somewhat of a tradition in television. While most situation comedies focus on single men and women without children, those that do portray families often stray from the classic structure: they include unmarried and divorced parents, adopted children, gay couples, and multigenerational households. Even those that do feature traditional family structures may show less-traditional characters in supporting roles, such as the brothers in the highly rated shows Everybody Loves Raymond and Two and a Half Men. Even wildly popular children's programs such as Disney's Hannah Montana and The Suite Life of Zack & Cody feature single parents.
In 2009, ABC premiered an intensely nontraditional family with the broadcast of Modern Family. The show follows an extended family that includes a divorced and remarried father with one stepchild, and his biological adult children—one of whom is in a traditional two-parent household, and the other of whom is a gay man in a committed relationship raising an adopted daughter. While this dynamic may be more complicated than the typical "modern" family, its elements may resonate with many of today's viewers. "The families on the shows aren't as idealistic, but they remain relatable," states television critic Maureen Ryan. "The most successful shows, comedies especially, have families that you can look at and see parts of your family in them" (Respers France 2010).

challenges facing the elderly

Aging comes with many challenges. The loss of independence is one potential part of the process, as are diminished physical ability and age discrimination. The term senescence refers to the aging process, including biological, emotional, intellectual, social, and spiritual changes. This section discusses some of the challenges we encounter during this process. As already observed, many older adults remain highly self-sufficient. Others require more care. Because the elderly typically no longer hold jobs, finances can be a challenge. And due to cultural misconceptions, older people can be targets of ridicule and stereotypes. The elderly face many challenges in later life, but they do not have to enter old age without dignity. Poverty Figure 13.13 While elderly poverty rates showed an improvement trend for decades, the 2008 recession has changed some older people's financial futures. Some who had planned a leisurely retirement have found themselves at risk of late-age destitution. (Photo (a) courtesy of Michael Cohen/flickr; photo (b) courtesy of Alex Proimos/flickr) For many people in the United States, growing older once meant living with less income. In 1960, almost 35 percent of the elderly existed on poverty-level incomes. A generation ago, the nation's oldest populations had the highest risk of living in poverty. At the start of the twenty-first century, the older population was putting an end to that trend. Among people over sixty-five years old, the poverty rate fell from 30 percent in 1967 to 9.7 percent in 2008, well below the national average of 13.2 percent (U.S. Census Bureau 2009). However, given the subsequent recession, which severely reduced the retirement savings of many while taxing public support systems, how are the elderly affected? According to the Kaiser Commission on Medicaid and the Uninsured, the national poverty rate among the elderly had risen to 14 percent by 2010 (Urban Institute and Kaiser Commission 2010). 
Before the recession hit, what had changed to cause a reduction in poverty among the elderly? What social patterns contributed to the shift? For several decades, a greater number of women joined the workforce. More married couples earned double incomes during their working years and saved more money for their retirement. Private employers and governments began offering better retirement programs. By 1990, senior citizens reported earning 36 percent more income on average than they did in 1980; that was five times the rate of increase for people under age thirty-five (U.S. Census Bureau 2009). In addition, many people were gaining access to better healthcare. New trends encouraged people to live more healthful lifestyles by placing an emphasis on exercise and nutrition. There was also greater access to information about the health risks of behaviors such as cigarette smoking, alcohol consumption, and drug use. Because they were healthier, many older people continued to work past the typical retirement age, which gave them more opportunity to save for retirement. Will these patterns return once the recession ends? Sociologists will be watching to see. In the meantime, they are realizing the immediate impact of the recession on elderly poverty. During the recession, older people lost some of the financial advantages that they'd gained in the 1980s and 1990s. From October 2007 to October 2009, retirement accounts of people over age fifty lost 18 percent of their value. The sharp decline in the stock market also forced many to delay their retirement (Administration on Aging 2009). Ageism Figure 13.14 Are these street signs humorous or offensive? What shared assumptions make them humorous? Or is memory loss too serious to be made fun of? (Photo courtesy of Tumbleweed/flickr) Driving to the grocery store, Peter, twenty-three years old, got stuck behind a car on a four-lane main artery through his city's business district. 
The speed limit was thirty-five miles per hour, and while most drivers sped along at forty to forty-five mph, the driver in front of him was going the minimum speed. Peter tapped on his horn. He tailgated the driver. Finally, Peter had a chance to pass the car. He glanced over. Sure enough, Peter thought, a gray-haired old man guilty of "DWE," driving while elderly. At the grocery store, Peter waited in the checkout line behind an older woman. She paid for her groceries, lifted her bags of food into her cart, and toddled toward the exit. Peter, guessing her to be about eighty years old, was reminded of his grandmother. He paid for his groceries and caught up with her. "Can I help you with your cart?" he asked. "No, thank you. I can get it myself," she said and marched off toward her car. Peter's responses to both older people, the driver and the shopper, were prejudiced. In both cases, he made unfair assumptions. He assumed the driver drove cautiously simply because the man was a senior citizen, and he assumed the shopper needed help carrying her groceries just because she was an older woman. Responses like Peter's toward older people are fairly common. He didn't intend to treat people differently based on personal or cultural biases, but he did. Ageism is discrimination (when someone acts on a prejudice) based on age. Dr. Robert Butler coined the term in 1968, noting that ageism exists in all cultures (Brownell). Ageist attitudes and biases based on stereotypes reduce elderly people to inferior or limited positions. Ageism can vary in severity. Peter's attitudes are probably seen as fairly mild, but relating to the elderly in ways that are patronizing can be offensive. When ageism is reflected in the workplace, in healthcare, and in assisted-living facilities, the effects of discrimination can be more severe. Ageism can make older people fear losing a job, feel dismissed by a doctor, or feel a lack of power and control in their daily living situations. 
In early societies, the elderly were respected and revered. Many preindustrial societies observed gerontocracy, a type of social structure wherein the power is held by a society's oldest members. In some countries today, the elderly still have influence and power and their vast knowledge is respected. Reverence for the elderly is still a part of some cultures, but it has changed in many places because of social factors. In many modern nations, however, industrialization contributed to the diminished social standing of the elderly. Today wealth, power, and prestige are also held by those in younger age brackets. The average age of corporate executives was fifty-nine years old in 1980. In 2008, the average age had lowered to fifty-four years old (Stuart 2008). Some older members of the workforce felt threatened by this trend and grew concerned that younger employees in higher level positions would push them out of the job market. Rapid advancements in technology and media have required new skill sets that older members of the workforce are less likely to have. Changes happened not only in the workplace but also at home. In agrarian societies, a married couple cared for their aging parents. The oldest members of the family contributed to the household by doing chores, cooking, and helping with child care. As economies shifted from agrarian to industrial, younger generations moved to cities to work in factories. The elderly began to be seen as an expensive burden. They did not have the strength and stamina to work outside the home. What began during industrialization, a trend toward older people living apart from their grown children, has become commonplace. Mistreatment and Abuse Mistreatment and abuse of the elderly are a major social problem. As expected, with the biology of aging, the elderly sometimes become physically frail. 
This frailty renders them dependent on others for care—sometimes for small needs like household tasks, and sometimes for assistance with basic functions like eating and toileting. Unlike a child, who also is dependent on another for care, an elder is an adult with a lifetime of experience, knowledge, and opinions—a more fully developed person. This makes the care-providing situation more complex. Elder abuse occurs when a caretaker intentionally deprives an older person of care or harms the person in his or her charge. Caregivers may be family members, relatives, friends, health professionals, or employees of senior housing or nursing care. The elderly may be subject to many different types of abuse. In a 2009 study on the topic led by Dr. Ron Acierno, the team of researchers identified five major categories of elder abuse: 1) physical abuse, such as hitting or shaking, 2) sexual abuse, including rape and coerced nudity, 3) psychological or emotional abuse, such as verbal harassment or humiliation, 4) neglect or failure to provide adequate care, and 5) financial abuse or exploitation (Acierno 2010). The National Center on Elder Abuse (NCEA), a division of the U.S. Administration on Aging, also identifies abandonment and self-neglect as types of abuse. Table 13.1 shows some of the signs and symptoms that the NCEA encourages people to notice. 
Table 13.1 Signs of Elder Abuse. The National Center on Elder Abuse encourages people to watch for these signs of mistreatment. (Chart courtesy of National Center on Elder Abuse)

Physical abuse: bruises, untreated wounds, sprains, broken glasses, lab findings of medication overdosage
Sexual abuse: bruises around breasts or genitals, torn or bloody underclothing, unexplained venereal disease
Emotional/psychological abuse: being upset or withdrawn, unusual dementia-like behavior (rocking, sucking)
Neglect: poor hygiene, untreated bed sores, dehydration, soiled bedding
Financial abuse: sudden changes in banking practices, inclusion of additional names on bank cards, abrupt changes to a will
Self-neglect: untreated medical conditions, unclean living area, lack of medical items like dentures or glasses

How prevalent is elder abuse? Two recent U.S. studies found that roughly one in ten elderly people surveyed had suffered at least one form of elder abuse. Some social researchers believe elder abuse is underreported and that the number may be higher. The risk of abuse also increases in people with health issues such as dementia (Kohn and Verhoek-Oftedahl 2011). Older women were found to be victims of verbal abuse more often than their male counterparts. In Acierno's study, which included a sample of 5,777 respondents age sixty and older, 5.2 percent of respondents reported financial abuse, 5.1 percent said they'd been neglected, and 4.6 percent endured emotional abuse (Acierno 2010). The prevalence of physical and sexual abuse was lower, at 1.6 and 0.6 percent, respectively (Acierno 2010). Other studies have focused on the caregivers to the elderly in an attempt to discover the causes of elder abuse. Researchers identified factors that increased the likelihood of caregivers perpetrating abuse against those in their care. 
Those factors include inexperience, having other demands such as jobs (for those who weren't professionally employed as caregivers), caring for children, living full-time with the dependent elder, and experiencing high stress, isolation, and lack of support (Kohn and Verhoek-Oftedahl 2011). A history of depression in the caregiver was also found to increase the likelihood of elder abuse. Neglect was more likely when care was provided by paid caregivers. Many of the caregivers who physically abused elders were themselves abused—in many cases, when they were children. Family members with some sort of dependency on the elder in their care were more likely to physically abuse that elder. For example, an adult child caring for an elderly parent while at the same time depending on some form of income from that parent, is considered more likely to perpetrate physical abuse (Kohn and Verhoek-Oftedahl 2011). A survey in Florida found that 60.1 percent of caregivers reported verbal aggression as a style of conflict resolution. Paid caregivers in nursing homes were at a high risk of becoming abusive if they had low job satisfaction, treated the elderly like children, or felt burnt out (Kohn and Verhoek-Oftedahl 2011). Caregivers who tended to be verbally abusive were found to have had less training, lower education, and higher likelihood of depression or other psychiatric disorders. Based on the results of these studies, many housing facilities for seniors have increased their screening procedures for caregiver applicants. BIG PICTURE World War II Veterans Figure 13.15 World War II (1941-1945) veterans and members of an Honor Flight from Milwaukee, Wisconsin, visit the National World War II Memorial in Washington, DC. Most of these men and women were in their late teens or twenties when they served. (Photo courtesy of Sean Hackbarth/flickr) World War II veterans are aging. Many are in their eighties and nineties. 
They are dying at an estimated rate of about 740 per day, according to the U.S. Veterans Administration (National Center for Veterans Analysis and Statistics 2011). Data suggest that by 2036, there will be no living veterans of WWII (U.S. Department of Veteran Affairs). When these veterans came home from the war and ended their service, little was known about posttraumatic stress disorder (PTSD). These heroes did not receive the mental and physical healthcare that could have helped them. As a result, many of them, now in old age, are dealing with the effects of PTSD. Research suggests a high percentage of World War II veterans are plagued by flashback memories and isolation, and that many "self-medicate" with alcohol. Research has found that veterans of any conflict are more than twice as likely as nonveterans to commit suicide, with rates highest among the oldest veterans. Reports show that WWII-era veterans are four times as likely to take their own lives as people of the same age with no military service (Glantz 2010). In May 2004, the National World War II Memorial in Washington, DC, was completed and dedicated to honor those who served during the conflict. Dr. Earl Morse, a physician and retired Air Force captain, treated many WWII veterans. He encouraged them to visit the memorial, knowing it could help them heal. Many WWII veterans expressed interest in seeing the memorial. Unfortunately, many were in their eighties and were neither physically nor financially able to travel on their own. Dr. Morse arranged to personally escort some of the veterans and enlisted volunteer pilots who would pay for the flights themselves. He also raised money, insisting the veterans pay nothing. By the end of 2005, 137 veterans, many in wheelchairs, had made the trip. The Honor Flight Network was up and running. As of 2010, the Honor Flight Network had flown more than 120,000 U.S. veterans of World War II, and some veterans of the Korean War, to Washington. 
The round-trip flights leave for day-long trips from airports in thirty states, staffed by volunteers who care for the needs of the elderly travelers (Honor Flight Network 2011).

intergroup relations

Intergroup relations (relationships between different groups of people) range along a spectrum between tolerance and intolerance. The most tolerant form of intergroup relations is pluralism, in which no distinction is made between minority and majority groups, but instead there's equal standing. At the other end of the continuum are amalgamation, expulsion, and even genocide—stark examples of intolerant intergroup relations. Genocide Genocide, the deliberate annihilation of a targeted (usually subordinate) group, is the most toxic intergroup relationship. Historically, we can see that genocide has included both the intent to exterminate a group and the function of exterminating a group, whether intentional or not. Possibly the most well-known case of genocide is Hitler's attempt to exterminate the Jewish people in the first part of the twentieth century. Also known as the Holocaust, the explicit goal of Hitler's "Final Solution" was the eradication of European Jewry, as well as the destruction of other minority groups such as Catholics, people with disabilities, and homosexuals. With forced emigration, concentration camps, and mass executions in gas chambers, Hitler's Nazi regime was responsible for the deaths of 12 million people, 6 million of whom were Jewish. Hitler's intent was clear, and the high Jewish death toll certainly indicates that Hitler and his regime committed genocide. But how do we understand genocide that is not so overt and deliberate? The treatment of aboriginal Australians is also an example of genocide committed against indigenous people. Historical accounts suggest that between 1824 and 1908, white settlers killed more than 10,000 native aborigines in Tasmania and Australia (Tatz 2006). Another example is the European colonization of North America. Some historians estimate that Native American populations dwindled from approximately 12 million people in the year 1500 to barely 237,000 by the year 1900 (Lewy 2004). 
European settlers coerced American Indians off their own lands, often causing thousands of deaths in forced removals, such as occurred in the Cherokee or Potawatomi Trail of Tears. Settlers also enslaved Native Americans and forced them to give up their religious and cultural practices. But the major cause of Native American death was neither slavery nor war nor forced removal: it was the introduction of European diseases and Indians' lack of immunity to them. Smallpox, diphtheria, and measles flourished among indigenous American tribes who had no exposure to the diseases and no ability to fight them. Quite simply, these diseases decimated the tribes. How planned this genocide was remains a topic of contention. Some argue that the spread of disease was an unintended effect of conquest, while others believe it was intentional, citing rumors of smallpox-infected blankets being distributed as "gifts" to tribes. Genocide is not just a historical concept; it is practiced today. Recently, ethnic and geographic conflicts in the Darfur region of Sudan have led to hundreds of thousands of deaths. As part of an ongoing land conflict, the Sudanese government and their state-sponsored Janjaweed militia have led a campaign of killing, forced displacement, and systematic rape of Darfuri people. Although a treaty was signed in 2011, the peace is fragile. Expulsion Expulsion refers to a subordinate group being forced, by a dominant group, to leave a certain area or country. As seen in the examples of the Trail of Tears and the Holocaust, expulsion can be a factor in genocide. However, it can also stand on its own as a destructive group interaction. Expulsion has often occurred historically with an ethnic or racial basis. In the United States, President Franklin D. Roosevelt issued Executive Order 9066 in 1942, after the Japanese government's attack on Pearl Harbor. 
The Order authorized the establishment of internment camps for anyone with as little as one-eighth Japanese ancestry (i.e., one great-grandparent who was Japanese). Over 120,000 legal Japanese residents and Japanese U.S. citizens, many of them children, were held in these camps for up to four years, despite the fact that there was never any evidence of collusion or espionage. (In fact, many Japanese Americans continued to demonstrate their loyalty to the United States by serving in the U.S. military during the War.) In the 1990s, the U.S. executive branch issued a formal apology for this expulsion; reparation efforts continue today. Segregation Segregation refers to the physical separation of two groups, particularly in residence, but also in workplace and social functions. It is important to distinguish between de jure segregation (segregation that is enforced by law) and de facto segregation (segregation that occurs without laws but because of other factors). A stark example of de jure segregation is the apartheid movement of South Africa, which existed from 1948 to 1994. Under apartheid, black South Africans were stripped of their civil rights and forcibly relocated to areas that segregated them physically from their white compatriots. Only after decades of degradation, violent uprisings, and international advocacy was apartheid finally abolished. De jure segregation occurred in the United States for many years after the Civil War. During this time, many former Confederate states passed Jim Crow laws that required segregated facilities for blacks and whites. These laws were codified in 1896's landmark Supreme Court case Plessy v. Ferguson, which stated that "separate but equal" facilities were constitutional. For the next five decades, blacks were subjected to legalized discrimination, forced to live, work, and go to school in separate—but unequal—facilities. It wasn't until 1954 and the Brown v. 
Board of Education case that the Supreme Court declared that "separate educational facilities are inherently unequal," thus ending de jure segregation in the United States. Figure 11.4 In the "Jim Crow" South, it was legal to have "separate but equal" facilities for blacks and whites. (Photo courtesy of Library of Congress/Wikimedia Commons) De facto segregation, however, cannot be abolished by any court mandate. Segregation is still alive and well in the United States, with different racial or ethnic groups often segregated by neighborhood, borough, or parish. Sociologists use segregation indices to measure the segregation of different racial groups in different areas. The indices employ a scale from zero to 100, where zero is the most integrated and 100 is the least. In the New York metropolitan area, for instance, the black-white segregation index was seventy-nine for the years 2005-2009. This means that 79 percent of either blacks or whites would have to move in order for each neighborhood to have the same racial balance as the whole metro region (Population Studies Center 2010). Pluralism Pluralism is represented by the ideal of the United States as a "salad bowl": a great mixture of different cultures where each culture retains its own identity and yet adds to the flavor of the whole. True pluralism is characterized by mutual respect on the part of all cultures, both dominant and subordinate, creating a multicultural environment of acceptance. In reality, true pluralism is a difficult goal to reach. In the United States, the mutual respect required by pluralism is often missing, and the nation's past model of a melting pot posits a society where cultural differences aren't embraced so much as erased. Assimilation Assimilation describes the process by which a minority individual or group gives up its own identity by taking on the characteristics of the dominant culture. 
In the United States, which has a history of welcoming and absorbing immigrants from different lands, assimilation has been a function of immigration. Figure 11.5 For many immigrants to the United States, the Statue of Liberty is a symbol of freedom and a new life. Unfortunately, they often encounter prejudice and discrimination. (Photo courtesy of Mark Heard/flickr) Most people in the United States have immigrant ancestors. In relatively recent history, between 1890 and 1920, the United States became home to around 24 million immigrants. In the decades since then, further waves of immigrants have come to these shores and have eventually been absorbed into U.S. culture, sometimes after facing extended periods of prejudice and discrimination. Assimilation may lead to the loss of the minority group's cultural identity as they become absorbed into the dominant culture, but assimilation has minimal to no impact on the majority group's cultural identity. Some groups may keep only symbolic gestures of their original ethnicity. For instance, many Irish Americans may celebrate Saint Patrick's Day, many Hindu Americans enjoy a Diwali festival, and many Mexican Americans may celebrate Cinco de Mayo (a May 5 acknowledgment of Mexico's victory at the 1862 Battle of Puebla). However, for the rest of the year, other aspects of their originating culture may be forgotten. Assimilation is antithetical to the "salad bowl" created by pluralism; rather than maintaining their own cultural flavor, subordinate cultures give up their own traditions in order to conform to their new environment. Sociologists measure the degree to which immigrants have assimilated to a new culture with four benchmarks: socioeconomic status, spatial concentration, language assimilation, and intermarriage. When faced with racial and ethnic discrimination, it can be difficult for new immigrants to fully assimilate. 
Language assimilation, in particular, can be a formidable barrier, limiting employment and educational options and therefore constraining growth in socioeconomic status. Amalgamation Amalgamation is the process by which a minority group and a majority group combine to form a new group. Amalgamation creates the classic "melting pot" analogy; unlike the "salad bowl," in which each culture retains its individuality, the "melting pot" ideal sees the combination of cultures that results in a new culture entirely. Amalgamation, also known as miscegenation, is achieved through intermarriage between races. In the United States, antimiscegenation laws flourished in the South during the Jim Crow era. It wasn't until 1967's Loving v. Virginia that the last antimiscegenation law was struck from the books, making these laws unconstitutional.
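The 0-to-100 segregation index described in the Segregation discussion above is commonly computed as an index of dissimilarity: half the sum, across neighborhoods, of the absolute difference between each neighborhood's share of one group and its share of the other. A minimal sketch of that calculation follows; the function name and the toy neighborhood counts are illustrative assumptions, not figures from the text.

```python
def dissimilarity_index(group_a, group_b):
    """Return an index of dissimilarity on a 0-100 scale.

    group_a, group_b: per-neighborhood population counts for the two
    groups being compared (same length, same neighborhood order).
    0 means every neighborhood mirrors the region-wide balance
    (fully integrated); 100 means complete separation.
    """
    total_a = sum(group_a)
    total_b = sum(group_b)
    # Half the summed absolute differences between each neighborhood's
    # share of group A and its share of group B.
    d = 0.5 * sum(
        abs(a / total_a - b / total_b)
        for a, b in zip(group_a, group_b)
    )
    return 100 * d


# Hypothetical two-neighborhood examples:
# complete separation -> each group lives entirely in its own area.
print(dissimilarity_index([100, 0], [0, 100]))    # 100.0
# identical balance in every neighborhood -> fully integrated.
print(dissimilarity_index([50, 50], [150, 150]))  # 0.0
```

The returned value is read the way the text reads the New York figure: an index of 79 means 79 percent of either group would have to change neighborhoods for every neighborhood to match the metro-wide balance.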

the process of aging

As human beings grow older, they go through different phases or stages of life. It is helpful to understand aging in the context of these phases. A life course is the period from birth to death, including a sequence of predictable life events such as physical maturation. Each phase comes with different responsibilities and expectations, which of course vary by individual and culture. Children love to play and learn, looking forward to becoming preteens. As preteens begin to test their independence, they are eager to become teenagers. Teenagers anticipate the promises and challenges of adulthood. Adults become focused on creating families, building careers, and experiencing the world as independent people. Finally, many adults look forward to old age as a wonderful time to enjoy life without as much pressure from work and family life. In old age, grandparenthood can provide many of the joys of parenthood without all the hard work that parenthood entails. And as work responsibilities abate, old age may be a time to explore hobbies and activities that there was no time for earlier in life. But for other people, old age is not a phase that they look forward to. Some people fear old age and do anything to "avoid" it by seeking medical and cosmetic fixes for the natural effects of age. These differing views on the life course are the result of the cultural values and norms into which people are socialized, but in most cultures, age is a master status influencing self-concept, as well as social roles and interactions. Through the phases of the life course, dependence and independence levels change. At birth, newborns are dependent on caregivers for everything. As babies become toddlers and toddlers become adolescents and then teenagers, they assert their independence more and more. Gradually, children come to be considered adults, responsible for their own lives, although the point at which this occurs is widely varied among individuals, families, and cultures. 
As Riley (1978) notes, aging is a lifelong process and entails maturation and change on physical, psychological, and social levels. Age, much like race, class, and gender, is a hierarchy in which some categories are more highly valued than others. For example, while many children look forward to gaining independence, Packer and Chasteen (2006) suggest that even in children, age prejudice leads to a negative view of aging. This, in turn, can lead to a widespread segregation between the old and the young at the institutional, societal, and cultural levels (Hagestad and Uhlenberg 2006). SOCIOLOGICAL RESEARCH Dr. Ignatz Nascher and the Birth of Geriatrics In the early 1900s, a New York physician named Dr. Ignatz Nascher coined the term geriatrics, a medical specialty that focuses on the elderly. He created the word by combining two Greek words: geron (old man) and iatrikos (medical treatment). Nascher based his work on what he observed as a young medical student, when he saw many acutely ill elderly people who were diagnosed simply as "being old." There was nothing medicine could do, his professors declared, about the syndrome of "old age." Nascher refused to accept this dismissive view, seeing it as medical neglect. He believed it was a doctor's duty to prolong life and relieve suffering whenever possible. In 1914, he published his views in his book Geriatrics: The Diseases of Old Age and Their Treatment (Clarfield 1990). Nascher saw the practice of caring for the elderly as separate from the practice of caring for the young, just as pediatrics (caring for children) is different from caring for grown adults (Clarfield 1990). Nascher had high hopes for his pioneering work. He wanted to treat the aging, especially those who were poor and had no one to care for them. Many of the elderly poor were sent to live in "almshouses," or public old-age homes (Cole 1993). Conditions were often terrible in these almshouses, where the aging were often sent and just forgotten. 
As hard as it might be to believe today, Nascher's approach was once considered unique. At the time of his death, in 1944, he was disappointed that the field of geriatrics had not made greater strides. In what ways are the elderly better off today than they were before Nascher's ideas gained acceptance? Biological Changes Figure 13.8 Aging can be a visible, public experience. Many people recognize the signs of aging and, because of the meanings that culture assigns to these changes, believe that being older means being in physical decline. Many older people, however, remain healthy, active, and happy. (Photo courtesy of Pedro Riberio Simoes/flickr) Each person experiences age-related changes based on many factors. Biological factors such as molecular and cellular changes are called primary aging, while aging that occurs due to controllable factors such as lack of physical exercise and poor diet is called secondary aging (Whitbourne and Whitbourne 2010). Most people begin to see signs of aging after fifty years old, when they notice the physical markers of age. Skin becomes thinner, drier, and less elastic. Wrinkles form. Hair begins to thin and gray. Men prone to balding start losing hair. The difficulty or relative ease with which people adapt to these changes is dependent in part on the meaning given to aging by their particular culture. A culture that values youthfulness and beauty above all else leads to a negative perception of growing old. Conversely, a culture that reveres the elderly for their life experience and wisdom contributes to a more positive perception of what it means to grow old. The effects of aging can feel daunting, and sometimes the fear of physical changes (like declining energy, food sensitivity, and loss of hearing and vision) is more challenging to deal with than the changes themselves. The way people perceive physical aging is largely dependent on how they were socialized. 
If people can accept the changes in their bodies as a natural process of aging, the changes will not seem as frightening. According to the federal Administration on Aging (2011), in 2009 fewer people over sixty-five years old assessed their health as "excellent" or "very good" (41.6 percent) compared to those aged eighteen to sixty-four (64.4 percent). Evaluating data from the National Center for Health Statistics and the U.S. Bureau of Labor Statistics, the Administration on Aging found that from 2006 to 2008, the most frequently reported health issues for those over sixty-five years old included arthritis (50 percent), hypertension (38 percent), heart disease (32 percent), and cancer (22 percent). About 27 percent of people age sixty and older are considered obese by current medical standards. Parker and Thorslund (2006) found that while the trend is toward steady improvement in most disability measures, there is a concomitant increase in functional impairments (disability) and chronic diseases. At the same time, medical advances have reduced some of the disabling effects of those diseases (Crimmins 2004). Some impacts of aging are gender-specific. Some of the disadvantages aging women face arise from long-standing social gender roles. For example, Social Security favors men over women, inasmuch as women do not earn Social Security benefits for the unpaid labor they perform (usually at home) as an extension of their gender roles. In the healthcare field, elderly female patients are more likely than elderly men to see their healthcare concerns trivialized (Sharp 1995) and are more likely to have their health issues labeled psychosomatic (Munch 2004). Another female-specific aspect of aging is that mass-media outlets often depict elderly females in terms of negative stereotypes and as less successful than older men (Bazzini and McIntosh 1997). For men, the process of aging—and society's response to and support of the experience—may be quite different.
The gradual decrease in male sexual performance that occurs as a result of primary aging is medicalized and constructed as needing treatment (Marshall and Katz 2002) so that a man may maintain a sense of youthful masculinity. On the other hand, aging men have fewer opportunities to assert their masculine identities in the company of other men (for example, through sports participation) (Drummond 1998). And some social scientists have observed that the aging male body is depicted in the Western world as genderless (Spector-Mersel 2006). Figure 13.9 Aging is accompanied by a host of biological, social, and psychological changes. (Photo courtesy of Michael Cohen/flickr) Social and Psychological Changes Male or female, growing older means confronting the psychological issues that come with entering the last phase of life. Young people moving into adulthood take on new roles and responsibilities as their lives expand, but an opposite arc can be observed in old age. What are the hallmarks of social and psychological change? Retirement—the withdrawal from paid work at a certain age—is a relatively recent idea. Up until the late nineteenth century, people worked about sixty hours a week until they were physically incapable of continuing. Following the American Civil War, veterans receiving pensions were able to withdraw from the workforce, and the number of working older men began declining. A second large decline in the number of working men began in the post-World War II era, probably due to the availability of Social Security, and a third large decline in the 1960s and 1970s was probably due to the social support offered by Medicare and the increase in Social Security benefits (Munnell 2011). In the twenty-first century, most people hope that at some point they will be able to stop working and enjoy the fruits of their labor. But do we look forward to this time or fear it? 
When people retire from familiar work routines, some easily seek new hobbies, interests, and forms of recreation. Many find new groups and explore new activities, but others may find it more difficult to adapt to new routines and loss of social roles, losing their sense of self-worth in the process. Each phase of life has challenges that come with the potential for fear. Erik H. Erikson (1902-1994), in his view of socialization, broke the typical life span into eight phases. Each phase presents a particular challenge that must be overcome. In the final stage, old age, the challenge is to embrace integrity over despair. Some people are unable to successfully overcome the challenge. They may have to confront regrets, such as being disappointed in their children's lives or perhaps their own. They may have to accept that they will never reach certain career goals. Or they must come to terms with what their career success has cost them, such as time with their family or declining personal health. Others, however, are able to achieve a strong sense of integrity and are able to embrace the new phase in life. When that happens, there is tremendous potential for creativity. They can learn new skills, practice new activities, and peacefully prepare for the end of life. For some, overcoming despair might entail remarriage after the death of a spouse. A study conducted by Kate Davidson (2002) reviewed demographic data that asserted men were more likely to remarry after the death of a spouse and suggested that widows (the surviving female spouse of a deceased male partner) and widowers (the surviving male spouse of a deceased female partner) experience their postmarital lives differently. Many surviving women enjoyed a new sense of freedom, since they were living alone for the first time. 
On the other hand, for surviving men, there was a greater sense of having lost something, because they were now deprived of a constant source of care as well as the focus of their emotional life. Aging and Sexuality Figure 13.10 In Harold and Maude, a 1971 cult classic movie, a twenty-something young man falls in love with a seventy-nine-year-old woman. The world reacts in disgust. What is your response to this picture, given that the two people are meant to be lovers, not grandmother and grandson? (Photo courtesy of luckyjackson/flickr) It is no secret that people in the United States are squeamish about the subject of sex. And when the subject is the sexuality of elderly people? No one wants to think about it or even talk about it. That fact is part of what makes 1971's Harold and Maude so provocative. In this cult favorite film, Harold, an alienated young man, meets and falls in love with Maude, a seventy-nine-year-old woman. What is so telling about the film is the reaction of his family, priest, and psychologist, who exhibit disgust and horror at such a match. Although it is difficult to have an open, public national dialogue about aging and sexuality, the reality is that our sexual selves do not disappear after age sixty-five. People continue to enjoy sex—and not always safe sex—well into their later years. In fact, some research suggests that as many as one in five new cases of AIDS occurs in adults over sixty-five years old (Hillman 2011). In some ways, old age may be a time to enjoy sex more, not less. For women, the elder years can bring a sense of relief as the fear of an unwanted pregnancy is removed and the children are grown and taking care of themselves. However, while we have expanded the number of psycho-pharmaceuticals to address sexual dysfunction in men, it was not until very recently that the medical field acknowledged the existence of female sexual dysfunctions (Bryant 2004).
SOCIOLOGY IN THE REAL WORLD Aging "Out": LGBT Seniors Figure 13.11 As same-sex marriage becomes a possibility, many gay and lesbian couples are finally able to tie the knot—sometimes as seniors—after decades of waiting. (Photo courtesy of Fibonacci Blue/flickr). How do different groups in our society experience the aging process? Are there any experiences that are universal, or do different populations have different experiences? An emerging field of study looks at how lesbian, gay, bisexual, and transgender (LGBT) people experience the aging process and how their experience differs from that of other groups or the dominant group. This issue is expanding with the aging of the baby boom generation; not only will aging boomers represent a huge bump in the general elderly population but also the number of LGBT seniors is expected to double by 2030 (Fredriksen-Goldsen et al. 2011). A recent study titled The Aging and Health Report: Disparities and Resilience among Lesbian, Gay, Bisexual, and Transgender Older Adults finds that LGBT older adults have higher rates of disability and depression than their heterosexual peers. They are also less likely to have a support system that might provide elder care: a partner and supportive children (Fredriksen-Goldsen et al. 2011). Even for those LGBT seniors who are partnered, some states do not recognize a legal relationship between two people of the same sex, which reduces their legal protection and financial options. As they transition to assisted-living facilities, LGBT people have the added burden of "disclosure management": the way they share their sexual and relationship identity. In one case study, a seventy-eight-year-old lesbian lived alone in a long-term care facility. She had been in a long-term relationship of thirty-two years and had been visibly active in the gay community earlier in her life. However, in the long-term care setting, she was much quieter about her sexual orientation.
She "selectively disclosed" her sexual identity, feeling safer with anonymity and silence (Jenkins et al. 2010). A study from the National Senior Citizens Law Center reports that only 22 percent of LGBT older adults expect they could be open about their sexual orientation or gender identity in a long-term care facility. Even more telling is the finding that only 16 percent of non-LGBT older adults expected that LGBT people could be open with facility staff (National Senior Citizens Law Center 2011). Same-sex marriage—a civil rights battleground that is being fought in many states—can have major implications for the way the LGBT community ages. With marriage comes the legal and financial protection afforded to opposite-sex couples, as well as less fear of exposure and a reduction in the need to "retreat to the closet" (Jenkins et al. 2010). Changes in this area are coming slowly, and in the meantime, advocates have many policy recommendations for how to improve the aging process for LGBT individuals. These recommendations include increasing federal research on LGBT elders, increasing (and enforcing existing) laws against discrimination, and amending the federal Family and Medical Leave Act to cover LGBT caregivers (Grant 2009). Death and Dying Figure 13.12 A young man sits at the grave of his great-grandmother. (Photo courtesy of Sara Goldsmith/flickr) For most of human history, the standard of living was significantly lower than it is now. Humans struggled to survive with few amenities and very limited medical technology. The risk of death due to disease or accident was high in any life stage, and life expectancy was low. As people began to live longer, death became associated with old age. For many teenagers and young adults, losing a grandparent or another older relative can be the first loss of a loved one they experience. 
It may be their first encounter with grief, a psychological, emotional, and social response to the feelings of loss that accompany death or a similar event. People tend to perceive death, their own and that of others, based on the values of their culture. While some may look upon death as the natural conclusion to a long, fruitful life, others may find the prospect of dying frightening to contemplate. People tend to have strong resistance to the idea of their own death, and strong emotional reactions of loss to the death of loved ones. Viewing death as a loss, as opposed to a natural or tranquil transition, is often considered normal in the United States. What may be surprising is how few studies were conducted on death and dying prior to the 1960s. Death and dying were fields that had received little attention until a psychologist named Elisabeth Kübler-Ross began observing people who were in the process of dying. As Kübler-Ross witnessed people's transition toward death, she found some common threads in their experiences. She observed that the process had five distinct stages: denial, anger, bargaining, depression, and acceptance. She published her findings in a 1969 book called On Death and Dying. The book remains a classic on the topic today. Kübler-Ross found that a person's first reaction to the prospect of dying is denial: this is characterized by the person's not wanting to believe he or she is dying, with common thoughts such as "I feel fine" or "This is not really happening to me." The second stage is anger, when loss of life is seen as unfair and unjust. A person then resorts to the third stage, bargaining: trying to negotiate with a higher power to postpone the inevitable by reforming or changing the way he or she lives. The fourth stage, psychological depression, allows for resignation as the situation begins to seem hopeless. In the final stage, a person adjusts to the idea of death and reaches acceptance.
At this point, the person can face death honestly, by regarding it as a natural and inevitable part of life and can make the most of their remaining time. The work of Kübler-Ross was eye-opening when it was introduced. It broke new ground and opened the doors for sociologists, social workers, health practitioners, and therapists to study death and help those who were facing death. Kübler-Ross's work is generally considered a major contribution to thanatology: the systematic study of death and dying. Of special interest to thanatologists is the concept of "dying with dignity." Modern medicine includes advanced medical technology that may prolong life without a parallel improvement in the quality of life one may have. In some cases, people may not want to continue living when they are in constant pain and no longer enjoying life. Should patients have the right to choose to die with dignity? Dr. Jack Kevorkian was a staunch advocate for physician-assisted suicide: the voluntary use of lethal medication provided by a medical doctor to end one's life. This right to have a doctor help a patient die with dignity is controversial. In the United States, Oregon was the first state to pass a law allowing physician-assisted suicides. In 1997, Oregon instituted the Death with Dignity Act, which required the presence of two physicians for a legal assisted suicide. This law was challenged by U.S. Attorney General John Ashcroft in 2001, but the appeals process ultimately upheld the Oregon law. As of 2019, seven states and the District of Columbia have passed similar laws allowing physician-assisted suicide. The controversy surrounding death with dignity laws is emblematic of the way our society tries to separate itself from death. Health institutions have built facilities to comfortably house those who are terminally ill.
This is seen as a compassionate act, helping relieve the surviving family members of the burden of caring for the dying relative. But studies almost universally show that people prefer to die in their own homes (Lloyd, White, and Sutton 2011). Is it our social responsibility to care for elderly relatives up until their death? How do we balance the responsibility for caring for an elderly relative with our other responsibilities and obligations? As our society grows older, and as new medical technology can prolong life even further, the answers to these questions will develop and change. The changing concept of hospice is an indicator of our society's changing view of death. Hospice is a type of healthcare that treats terminally ill people when "cure-oriented treatments" are no longer an option (Hospice Foundation of America 2012b). Hospice doctors, nurses, and therapists receive special training in the care of the dying. The focus is not on getting better or curing the illness, but on passing out of this life in comfort and peace. Hospice centers exist as a place where people can go to die in comfort, and increasingly, hospice services encourage at-home care so that someone has the comfort of dying in a familiar environment, surrounded by family (Hospice Foundation of America 2012a). While many of us would probably prefer to avoid thinking of the end of our lives, it may be possible to take comfort in the idea that when we do approach death in a hospice setting, it is in a familiar, relatively controlled place.

challenges families face

As the structure of family changes over time, so do the challenges families face. Events like divorce and remarriage present new difficulties for families and individuals. Other long-standing domestic issues such as abuse continue to strain the health and stability of today's families. Divorce and Remarriage Divorce, while fairly common and accepted in modern U.S. society, was once a word that would only be whispered and was accompanied by gestures of disapproval. In 1960, divorce was generally uncommon, affecting only 9.1 out of every 1,000 married persons. That number more than doubled (to 20.3) by 1975 and peaked in 1980 at 22.6 (Popenoe 2007). Over the last quarter century, divorce rates have dropped steadily and are now similar to those in 1970. The dramatic increase in divorce rates after the 1960s has been associated with the liberalization of divorce laws, as well as the shift in societal makeup due to women increasingly entering the workforce (Michael 1978). The decrease in divorce rates can be attributed to two probable factors: an increase in the age at which people get married, and an increased level of education among those who marry—both of which have been found to promote greater marital stability. Divorce does not occur equally among all people in the United States; some segments of the U.S. population are more likely to divorce than others. According to the American Community Survey (ACS), men and women in the Northeast have the lowest rates of divorce at 7.2 and 7.5 per 1,000 people. The South has the highest rate of divorce at 10.2 for men and 11.1 for women. Divorce rates are likely higher in the South because marriage rates are higher and marriage occurs at younger-than-average ages in this region. In the Northeast, the marriage rate is lower and first marriages tend to be delayed; therefore, the divorce rate is lower (U.S. Census Bureau 2011). The rate of divorce also varies by race.
In a 2009 ACS study, American Indian and Alaskan Natives reported the highest percentages of currently divorced individuals (12.6 percent), followed by blacks (11.5 percent), whites (10.8 percent), Pacific Islanders (8 percent), Latinos (7.8 percent), and Asians (4.9 percent) (ACS 2011). In general, those who marry at a later age and have a college education have lower rates of divorce.

Table 14.2 Provisional number of divorces and annulments and rate: United States, 2000-2011. There has been a steady decrease in divorce over the past decade. (National Center for Health Statistics, CDC)

Year      Divorces and annulments   Population     Rate per 1,000 total population
2011[1]   877,000                   246,273,366    3.6
2010[1]   872,000                   244,122,529    3.6
2009[1]   840,000                   242,610,561    3.5
2008[1]   844,000                   240,545,163    3.5
2007[1]   856,000                   238,352,850    3.6
2006[1]   872,000                   236,094,277    3.7
2005[1]   847,000                   233,495,163    3.6
2004[2]   879,000                   236,402,656    3.7
2003[3]   927,000                   243,902,090    3.8
2002[4]   955,000                   243,108,303    3.9
2001[5]   940,000                   236,416,762    4.0
2000[5]   944,000                   233,550,143    4.0

[1] Excludes data for California, Georgia, Hawaii, Indiana, Louisiana, and Minnesota.
[2] Excludes data for California, Georgia, Hawaii, Indiana, and Louisiana.
[3] Excludes data for California, Hawaii, Indiana, and Oklahoma.
[4] Excludes data for California, Indiana, and Oklahoma.
[5] Excludes data for California, Indiana, Louisiana, and Oklahoma.
Note: Rates for 2001-2009 have been revised and are based on intercensal population estimates from the 2000 and 2010 censuses. Populations for 2010 rates are based on the 2010 census.

So what causes divorce? While more young people are choosing to postpone or opt out of marriage, those who enter into the union do so with the expectation that it will last. Many marital problems can be related to stress, especially financial stress.
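The "rate per 1,000 total population" column in Table 14.2 is simple per-capita arithmetic: divorces divided by population, multiplied by 1,000, rounded to one decimal. A minimal sketch verifying a few rows (the figures are copied from the table; the helper function name is our own):

```python
# Sanity-check the "rate per 1,000 total population" column in Table 14.2.
# Figures copied from the table; rate = divorces / population * 1000.

rows = [
    # (year, divorces_and_annulments, population, reported_rate)
    (2011, 877_000, 246_273_366, 3.6),
    (2010, 872_000, 244_122_529, 3.6),
    (2009, 840_000, 242_610_561, 3.5),
    (2000, 944_000, 233_550_143, 4.0),
]

def rate_per_1000(divorces, population):
    """Divorces per 1,000 people, rounded to one decimal as reported."""
    return round(divorces / population * 1000, 1)

for year, divorces, population, reported in rows:
    computed = rate_per_1000(divorces, population)
    assert computed == reported, (year, computed, reported)
print("All sampled rates match the reported column.")
```

Note that because several states are excluded from the counts in some years (see the table footnotes), the rates are internally consistent but not strictly comparable across years.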
According to researchers participating in the University of Virginia's National Marriage Project, couples who enter marriage without a strong asset base (like a home, savings, and a retirement plan) are 70 percent more likely to be divorced after three years than are couples with at least $10,000 in assets. This is connected to factors such as age and education level that correlate with low incomes. The addition of children to a marriage creates added financial and emotional stress. Research has established that marriages enter their most stressful phase upon the birth of the first child (Popenoe and Whitehead 2007). This is particularly true for couples who have multiples (twins, triplets, and so on). Married couples with twins or triplets are 17 percent more likely to divorce than those with children from single births (McKay 2010). Another contributor to the likelihood of divorce is a general decline in marital satisfaction over time. As people get older, they may find that their values and life goals no longer match up with those of their spouse (Popenoe and Whitehead 2004). Divorce is thought to have a cyclical pattern. Children of divorced parents are 40 percent more likely to divorce than children of married parents. And when we consider children whose parents divorced and then remarried, the likelihood of their own divorce rises to 91 percent (Wolfinger 2005). This might result from being socialized to a mindset that a broken marriage can be replaced rather than repaired (Wolfinger 2005). That sentiment is also reflected in the finding that when both partners of a married couple have been previously divorced, their marriage is 90 percent more likely to end in divorce (Wolfinger 2005). Figure 14.7 A study from Radford University indicated that bartenders are among the professions with the highest divorce rates (38.4 percent). 
Other traditionally low-wage industries (like restaurant service, custodial employment, and factory work) are also associated with higher divorce rates. (Aamodt and McCoy 2010). (Photo courtesy of Daniel Lobo/flickr) People in a second marriage account for approximately 19.3 percent of all married persons, and those who have been married three or more times account for 5.2 percent (U.S. Census Bureau 2011). The vast majority (91 percent) of remarriages occur after divorce; only 9 percent occur after death of a spouse (Kreider 2006). Most men and women remarry within five years of a divorce, with the median length for men (three years) being lower than for women (4.4 years). This length of time has been fairly consistent since the 1950s. The majority of those who remarry are between the ages of twenty-five and forty-four (Kreider 2006). The general pattern of remarriage also shows that whites are more likely to remarry than black Americans. Marriage the second time around (or third or fourth) can be a very different process from the first. Remarriage lacks many of the classic courtship rituals of a first marriage. In a second marriage, individuals are less likely to deal with issues like parental approval, premarital sex, or desired family size (Elliot 2010). In a survey of households formed by remarriage, a mere 8 percent included only biological children of the remarried couple. Of the 49 percent of homes that include children, 24 percent included only the woman's biological children, 3 percent included only the man's biological children, and 9 percent included a combination of both spouses' children (U.S. Census Bureau 2006). Children of Divorce and Remarriage Divorce and remarriage can be stressful on partners and children alike. Divorce is often justified by the notion that children are better off in a divorced family than in a family with parents who do not get along. However, long-term studies determine that to be generally untrue.
Research suggests that while marital conflict does not provide an ideal childrearing environment, going through a divorce can be damaging. Children are often confused and frightened by the threat to their family security. They may feel responsible for the divorce and attempt to bring their parents back together, often by sacrificing their own well-being (Amato 2000). Only in high-conflict homes do children benefit from divorce and the subsequent decrease in conflict. The majority of divorces come out of lower-conflict homes, and children from those homes are more negatively impacted by the stress of the divorce than the stress of unhappiness in the marriage (Amato 2000). Studies also suggest that stress levels for children are not improved when a child acquires a stepfamily through marriage. Although there may be increased economic stability, stepfamilies typically have a high level of interpersonal conflict (McLanahan and Sandefur 1994). Children's ability to deal with a divorce may depend on their age. Research has found that divorce may be most difficult for school-aged children, as they are old enough to understand the separation but not old enough to understand the reasoning behind it. Older teenagers are more likely to recognize the conflict that led to the divorce but may still feel fear, loneliness, guilt, and pressure to choose sides. Infants and preschool-age children may suffer the heaviest impact from the loss of routine that the marriage offered (Temke 2006). Proximity to parents also makes a difference in a child's well-being after divorce. Boys who live or have joint arrangements with their fathers show less aggression than those who are raised by their mothers only. Similarly, girls who live or have joint arrangements with their mothers tend to be more responsible and mature than those who are raised by their fathers only. 
Nearly three-fourths of the children of parents who are divorced live in a household headed by their mother, leaving many boys without a father figure residing in the home (U.S. Census Bureau 2011b). Still, researchers suggest that a strong parent-child relationship can greatly improve a child's adjustment to divorce (Temke 2006). There is empirical evidence that divorce has not discouraged children in terms of how they view marriage and family. A blended family has additional stress resulting from combining children from current and previous relationships, and it may also involve an ex-partner who uses different discipline techniques. In a survey conducted by researchers from the University of Michigan, about three-quarters of high school seniors said it was "extremely important" to have a strong marriage and family life. And over half believed it was "very likely" that they would be in a lifelong marriage (Popenoe and Whitehead 2007). These numbers have continued to climb over the last twenty-five years. Violence and Abuse Violence and abuse are among the most disconcerting of the challenges that today's families face. Abuse can occur between spouses, between parent and child, as well as between other family members. The frequency of violence among families is difficult to determine because many cases of spousal abuse and child abuse go unreported. In any case, studies have shown that abuse (reported or not) has a major impact on families and society as a whole. Domestic Violence Domestic violence is a significant social problem in the United States. It is often characterized as violence between household or family members, specifically spouses. To include unmarried, cohabitating, and same-sex couples, family sociologists have created the term intimate partner violence (IPV). Women are the primary victims of intimate partner violence.
It is estimated that one in four women has experienced some form of IPV in her lifetime (compared to one in seven men) (Catalano 2007). IPV may include physical violence, such as punching, kicking, or other methods of inflicting physical pain; sexual violence, such as rape or other forced sexual acts; threats and intimidation that imply either physical or sexual abuse; and emotional abuse, such as harming another's sense of self-worth through words or controlling another's behavior. IPV often starts as emotional abuse and then escalates to other forms or combinations of abuse (Centers for Disease Control 2012). Figure 14.8 Thirty percent of women who are murdered are killed by their intimate partner. What does this statistic reveal about societal patterns and norms concerning intimate relationships and gender roles? (Photo courtesy of Kathy Kimpel/flickr) In 2010, of IPV acts that involved physical actions against women, 57 percent involved physical violence only; 9 percent involved rape and physical violence; 14 percent involved physical violence and stalking; 12 percent involved rape, physical violence, and stalking; and 4 percent involved rape only (CDC 2011). This is vastly different from IPV abuse patterns for men, which show that nearly all (92 percent) physical acts of IPV take the form of physical violence and fewer than 1 percent involve rape alone or in combination (Catalano 2007). IPV affects women at greater rates than men because women often take the passive role in relationships and may become emotionally dependent on their partners. Perpetrators of IPV work to establish and maintain such dependence in order to hold power and control over their victims, making them feel stupid, crazy, or ugly—in some way worthless. IPV affects different segments of the population at different rates. The rate of IPV for black women (4.6 per 1,000 persons over the age of twelve) is higher than that for white women (3.1).
These numbers have been fairly stable for both racial groups over the last ten years. However, the numbers have steadily increased for Native Americans and Alaskan Natives (up to 11.1 for females) (Catalano 2007). Those who are separated report higher rates of abuse than those with other marital statuses, as conflict is typically higher in those relationships. Similarly, those who are cohabitating are more likely than those who are married to experience IPV (Stets and Straus 1990). Other researchers have found that the rate of IPV doubles for women in low-income disadvantaged areas when compared to IPV experienced by women who reside in more affluent areas (Benson and Fox 2004). Overall, women ages twenty to twenty-four are at the greatest risk of nonfatal abuse (Catalano 2007). Accurate statistics on IPV are difficult to determine, as it is estimated that more than half of nonfatal IPV goes unreported. It is not until victims choose to report crimes that patterns of abuse are exposed. Most victims studied stated that abuse had occurred for at least two years prior to their first report (Carlson, Harris, and Holden 1999). Sometimes abuse is reported to police by a third party, but it still may not be confirmed by victims. A study of domestic violence incident reports found that even when confronted by police about abuse, 29 percent of victims denied that abuse occurred. Surprisingly, 19 percent of their assailants were likely to admit to abuse (Felson, Ackerman, and Gallagher 2005). According to the National Criminal Victims Survey, victims cite varied reasons why they are reluctant to report abuse, as shown in the table below.

Table 14.3 Reasons that victims give for why they fail to report abuse to police authorities (Catalano 2007)

Reason Abuse Is Unreported               % Females   % Males
Considered a Private Matter                  22          39
Fear of Retaliation                          12           5
To Protect the Abuser                        14          16
Belief That Police Won't Do Anything          8           8
Two-thirds of nonfatal IPV occurs inside the home, and approximately 10 percent occurs at the home of the victim's friend or neighbor. The majority of abuse takes place between the hours of 6 p.m. and 6 a.m., and nearly half (42 percent) involves alcohol or drug use (Catalano 2007). Many perpetrators of IPV blame alcohol or drugs for their abuse, though studies have shown that alcohol and drugs do not cause IPV; they may only lower inhibitions (Hanson 2011). IPV has significant long-term effects on individual victims and on society. Studies have shown that IPV damage extends beyond the direct physical or emotional wounds. Extended IPV has been linked to unemployment among victims, as many have difficulty finding or holding employment. Additionally, nearly all women who report serious domestic problems exhibit symptoms of major depression (Goodwin, Chandler, and Meisel 2003). Female victims of IPV are also more likely to abuse alcohol or drugs, suffer from eating disorders, and attempt suicide (Silverman et al. 2001). IPV is indeed something that impacts more than just intimate partners. In a survey, 34 percent of respondents said they have witnessed IPV, and 59 percent said that they know a victim personally (Roper Starch Worldwide 1995). Many people want to help IPV victims but are hesitant to intervene because they feel that it is a personal matter or they fear retaliation from the abuser—reasons similar to those of victims who do not report IPV.

Child Abuse

Children are among the most helpless victims of abuse. In 2010, there were more than 3.3 million reports of child abuse involving an estimated 5.9 million children (Child Help 2011). Three-fifths of child abuse reports are made by professionals, including teachers, law enforcement personnel, and social services staff. The rest are made by anonymous sources, other relatives, parents, friends, and neighbors. 
Child abuse may come in several forms, the most common being neglect (78.3 percent), followed by physical abuse (10.8 percent), sexual abuse (7.6 percent), psychological maltreatment (7.6 percent), and medical neglect (2.4 percent) (Child Help 2011). Some children suffer from a combination of these forms of abuse. The majority (81.2 percent) of perpetrators are parents; 6.2 percent are other relatives. Infants (children less than one year old) were the most victimized population, with an incident rate of 20.6 per 1,000 infants. This age group is particularly vulnerable to neglect because infants are entirely dependent on parents for care. Some parents do not purposely neglect their children; factors such as cultural values, the standard of care in a community, and poverty can lead to hazardous levels of neglect. If information or assistance from public or private services is available and a parent fails to use those services, child welfare services may intervene (U.S. Department of Health and Human Services). Figure 14.9 The Casey Anthony trial, in which Casey was ultimately acquitted of murder charges against her daughter, Caylee, created public outrage and brought to light issues of child abuse and neglect across the United States. (Photo courtesy of Bruce Tuten/flickr) Infants are also often victims of physical abuse, particularly in the form of violent shaking. This type of physical abuse is referred to as shaken-baby syndrome, which describes a group of medical symptoms, such as brain swelling and retinal hemorrhage, resulting from forcefully shaking or causing impact to an infant's head. A baby's cry is the number one trigger for shaking. Parents may find themselves unable to soothe a crying baby and may take their frustration out on the child by shaking him or her violently. Other stress factors, such as a poor economy, unemployment, and general dissatisfaction with parental life, may contribute to this type of abuse. 
While there is no official central registry of shaken-baby syndrome statistics, it is estimated that each year 1,400 babies die or suffer serious injury from being shaken (Barr 2007).

SOCIAL POLICY AND DEBATE
Corporal Punishment

Physical abuse in children may come in the form of beating, kicking, throwing, choking, hitting with objects, burning, or other methods. Injury inflicted by such behavior is considered abuse even if the parent or caregiver did not intend to harm the child. Other types of physical contact that are characterized as discipline (spanking, for example) are not considered abuse as long as no injury results (Child Welfare Information Gateway 2008). This issue remains controversial in the United States. While some parents feel that physical discipline, or corporal punishment, is an effective way to respond to bad behavior, others feel that it is a form of abuse. According to a poll conducted by ABC News, 65 percent of respondents approve of spanking and 50 percent said that they sometimes spank their child. Tendency toward physical punishment may be affected by culture and education. Those who live in the South are more likely than those who live in other regions to spank their child. Those who do not have a college education are also more likely to spank their child (Crandall 2011). Currently, 23 states officially allow spanking in the school system; however, many parents may object, and school officials must follow a set of clear guidelines when administering this type of punishment (Crandall 2011). Studies have shown that spanking is not an effective form of punishment and may lead to aggression by the victim, particularly in those who are spanked at a young age (Berlin 2009). Child abuse occurs at all socioeconomic and education levels and crosses ethnic and cultural lines. 
Child abuse is often associated with stresses felt by parents, including financial stress; parents who demonstrate resilience to these stresses are less likely to abuse (Samuels 2011). Young parents are typically less capable of coping with stresses, particularly the stress of becoming a new parent. Teenage mothers are more likely to abuse their children than their older counterparts. As a parent's age increases, the risk of abuse decreases. Children born to mothers who are fifteen years old or younger are twice as likely to be abused or neglected by age five as are children born to mothers ages twenty to twenty-one (George and Lee 1997). Drug and alcohol use is also a known contributor to child abuse. Children raised by substance abusers have a risk of physical abuse three times greater than other children, and neglect is four times as prevalent in these families (Child Welfare Information Gateway 2011). Other risk factors include social isolation, depression, low parental education, and a history of being mistreated as a child. Approximately 30 percent of abused children will later abuse their own children (Child Welfare Information Gateway 2006). The long-term effects of child abuse impact the physical, mental, and emotional wellbeing of a child. Injury, poor health, and mental instability occur at a high rate in this group, with 80 percent meeting the criteria for one or more psychiatric disorders, such as depression, anxiety, or suicidal behavior, by age twenty-one. Abused children may also suffer from cognitive and social difficulties. Behavioral consequences will affect most, but not all, child abuse victims. Children of abuse are 25 percent more likely, as adolescents, to suffer from difficulties like poor academic performance and teen pregnancy, or to engage in behaviors like drug abuse and general delinquency. 
They are also more likely to participate in risky sexual acts that increase their chances of contracting a sexually transmitted disease (Child Welfare Information Gateway 2006). Other risky behaviors include drug and alcohol abuse. As these consequences can affect the health care, education, and criminal justice systems, the problems resulting from child abuse do not belong just to the child and family, but to society as a whole.

the sociological approach to religion

From the Latin religio (respect for what is sacred) and religare (to bind, in the sense of an obligation), the term religion describes various systems of belief and practice that define what people consider to be sacred or spiritual (Fasching and deChant 2001; Durkheim 1915). Throughout history, and in societies across the world, leaders have used religious narratives, symbols, and traditions in an attempt to give more meaning to life and understand the universe. Some form of religion is found in every known culture, and it is usually practiced in a public way by a group. The practice of religion can include feasts and festivals, intercession with God or gods, marriage and funeral services, music and art, meditation or initiation, sacrifice or service, and other aspects of culture. While some people think of religion as something individual because religious beliefs can be highly personal, religion is also a social institution. Social scientists recognize that religion exists as an organized and integrated set of beliefs, behaviors, and norms centered on basic social needs and values. Moreover, religion is a cultural universal found in all social groups. For instance, in every culture, funeral rites are practiced in some way, although these customs vary between cultures and within religious affiliations. Despite differences, there are common elements in a ceremony marking a person's death, such as announcement of the death, care of the deceased, disposition, and ceremony or ritual. These universals, and the differences in the way societies and individuals experience religion, provide rich material for sociological study. In studying religion, sociologists distinguish between what they term the experience, beliefs, and rituals of a religion. Religious experience refers to the conviction or sensation that we are connected to "the divine." This type of communion might be experienced when people pray or meditate. 
Religious beliefs are specific ideas members of a particular faith hold to be true, such as that Jesus Christ was the son of God, or that reincarnation exists. Another illustration of religious beliefs is the creation stories we find in different religions. Religious rituals are behaviors or practices that are either required or expected of the members of a particular group, such as bar mitzvah or confession of sins (Barkan and Greenwood 2003).

The History of Religion as a Sociological Concept

In the wake of nineteenth-century European industrialization and secularization, three social theorists attempted to examine the relationship between religion and society: Émile Durkheim, Max Weber, and Karl Marx. They are among the founding thinkers of modern sociology. As stated earlier, French sociologist Émile Durkheim (1858-1917) defined religion as a "unified system of beliefs and practices relative to sacred things" (1915). To him, sacred meant extraordinary—something that inspired wonder and that seemed connected to the concept of "the divine." Durkheim argued that "religion happens" in society when there is a separation between the profane (ordinary life) and the sacred (1915). A rock, for example, isn't sacred or profane as it exists. But if someone makes it into a headstone, or another person uses it for landscaping, it takes on different meanings—one sacred, one profane. Durkheim is generally considered the first sociologist who analyzed religion in terms of its societal impact. Above all, he believed religion is about community: It binds people together (social cohesion), promotes behavior consistency (social control), and offers strength during life's transitions and tragedies (meaning and purpose). By applying the methods of natural science to the study of society, Durkheim held that the source of religion and morality is the collective mind-set of society and that the cohesive bonds of social order result from common values in a society. 
He contended that these values need to be preserved to maintain social stability. But what would happen if religion were to decline? This question led Durkheim to posit that religion is not just a social creation but something that represents the power of society: When people celebrate sacred things, they celebrate the power of their society. By this reasoning, even if traditional religion disappeared, society wouldn't necessarily dissolve. Whereas Durkheim saw religion as a source of social stability, German sociologist and political economist Max Weber (1864-1920) believed it was a precipitator of social change. He examined the effects of religion on economic activities and noticed that heavily Protestant societies—such as those in the Netherlands, England, Scotland, and Germany—were the most highly developed capitalist societies and that their most successful business leaders were Protestant. In his book The Protestant Ethic and the Spirit of Capitalism (1905), he contends that the Protestant work ethic influenced the development of capitalism. Weber noted that certain kinds of Protestantism supported the pursuit of material gain by motivating believers to work hard, be successful, and not spend their profits on frivolous things. (The modern use of "work ethic" comes directly from Weber's Protestant ethic, although it has now lost its religious connotations.)

BIG PICTURE
The Protestant Work Ethic in the Information Age

Max Weber (1904) posited that, in Europe in his time, Protestants were more likely than Catholics to value capitalist ideology, and believed in hard work and savings. He showed that Protestant values directly influenced the rise of capitalism and helped create the modern world order. Weber thought the emphasis on community in Catholicism versus the emphasis on individual achievement in Protestantism made a difference. 
His century-old claim that the Protestant work ethic led to the development of capitalism has been one of the most important and controversial topics in the sociology of religion. In fact, scholars have found little merit to his contention when applied to modern society (Greeley 1989). What does the concept of work ethic mean today? The work ethic in the information age has been affected by tremendous cultural and social change, just as workers in the mid- to late nineteenth century were shaped by the Industrial Revolution. Factory jobs tend to be simple and uninvolved, requiring very little thinking or decision making on the part of the worker. Today, the work ethic of the modern workforce has been transformed, as more thinking and decision making is required. Employees also seek autonomy and fulfillment in their jobs, not just wages. Higher levels of education have become necessary, as well as people management skills and access to the most recent information on any given topic. The information age has increased the rapid pace of production expected in many jobs. On the other hand, the "McDonaldization" of the United States (Hightower 1975; Ritzer 1993), in which many service industries, such as the fast-food industry, have established routinized roles and tasks, has resulted in a "discouragement" of the work ethic. In jobs where roles and tasks are highly prescribed, workers have no opportunity to make decisions. They are considered replaceable commodities as opposed to valued employees. During times of recession, these service jobs may be the only employment possible for younger individuals or those with low-level skills. The pay, working conditions, and robotic nature of the tasks dehumanizes the workers and strips them of incentives for doing quality work. 
Working hard also doesn't seem to have any relationship with Catholic or Protestant religious beliefs anymore, or those of other religions; information age workers expect talent and hard work to be rewarded by material gain and career advancement. German philosopher, journalist, and revolutionary socialist Karl Marx (1818-1883) also studied the social impact of religion. He believed religion reflects the social stratification of society and that it maintains inequality and perpetuates the status quo. For him, religion was just an extension of working-class (proletariat) economic suffering. He famously argued that religion "is the opium of the people" (1844). For Durkheim, Weber, and Marx, who were reacting to the great social and economic upheaval of the late nineteenth century and early twentieth century in Europe, religion was an integral part of society. For Durkheim, religion was a force for cohesion that helped bind the members of society to the group, while Weber believed religion could be understood as something separate from society. Marx considered religion inseparable from the economy and the worker. Religion could not be understood apart from the capitalist society that perpetuated inequality. Despite their different views, these social theorists all believed in the centrality of religion to society.

Theoretical Perspectives on Religion

Figure 15.2 Functionalists believe religion meets many important needs for people, including group cohesion and companionship. (Photo courtesy of James Emery/flickr) Modern-day sociologists often apply one of three major theoretical perspectives. These views offer different lenses through which to study and understand society: functionalism, symbolic interactionism, and conflict theory. Let's explore how scholars applying these paradigms understand religion.

Functionalism

Functionalists contend that religion serves several functions in society. 
Religion, in fact, depends on society for its existence, value, and significance, and vice versa. From this perspective, religion serves several purposes, like providing answers to spiritual mysteries, offering emotional comfort, and creating a place for social interaction and social control. In providing answers, religion defines the spiritual world and spiritual forces, including divine beings. For example, it helps answer questions like, "How was the world created?" "Why do we suffer?" "Is there a plan for our lives?" and "Is there an afterlife?" As another function, religion provides emotional comfort in times of crisis. Religious rituals bring order, comfort, and organization through shared familiar symbols and patterns of behavior. One of the most important functions of religion, from a functionalist perspective, is the opportunities it creates for social interaction and the formation of groups. It provides social support and social networking and offers a place to meet others who hold similar values and a place to seek help (spiritual and material) in times of need. Moreover, it can foster group cohesion and integration. Because religion can be central to many people's concept of themselves, sometimes there is an "in-group" versus "out-group" feeling toward other religions in our society or within a particular practice. On an extreme level, the Inquisition, the Salem witch trials, and anti-Semitism are all examples of this dynamic. Finally, religion promotes social control: It reinforces social norms such as appropriate styles of dress, following the law, and regulating sexual behavior.

Conflict Theory

Conflict theorists view religion as an institution that helps maintain patterns of social inequality. For example, the Vatican has a tremendous amount of wealth, while the average income of Catholic parishioners is small. 
According to this perspective, religion has been used to support the "divine right" of oppressive monarchs and to justify unequal social structures, like India's caste system. Conflict theorists are critical of the way many religions promote the idea that believers should be satisfied with existing circumstances because they are divinely ordained. This power dynamic has been used by Christian institutions for centuries to keep poor people poor and to teach them that they shouldn't be concerned with what they lack because their "true" reward (from a religious perspective) will come after death. Conflict theorists also point out that those in power in a religion are often able to dictate practices, rituals, and beliefs through their interpretation of religious texts or via proclaimed direct communication from the divine. Figure 15.3 Many religions, including the Catholic faith, have long prohibited women from becoming spiritual leaders. Feminist theorists focus on gender inequality and promote leadership roles for women in religion. (Photo courtesy of Wikimedia Commons) The feminist perspective is a conflict theory view that focuses specifically on gender inequality. In terms of religion, feminist theorists assert that, although women are typically the ones to socialize children into a religion, they have traditionally held very few positions of power within religions. A few religions and religious denominations are more gender equal, but male dominance remains the norm of most.

SOCIOLOGY IN THE REAL WORLD
Rational Choice Theory: Can Economic Theory Be Applied to Religion?

How do people decide which religion to follow, if any? How does one pick a church or decide which denomination "fits" best? Rational choice theory (RCT) is one way social scientists have attempted to explain these behaviors. 
The theory proposes that people are self-interested, though not necessarily selfish, and that people make rational choices—choices that can reasonably be expected to maximize positive outcomes while minimizing negative outcomes. Sociologists Roger Finke and Rodney Stark (1988) first considered the use of RCT to explain some aspects of religious behavior, with the assumption that there is a basic human need for religion in terms of providing belief in a supernatural being, a sense of meaning in life, and belief in life after death. Religious explanations of these concepts are presumed to be more satisfactory than scientific explanations, which may help to account for the continuation of strong religious connectedness in countries such as the United States, despite predictions of some competing theories for a great decline in religious affiliation due to modernization and religious pluralism. Another assumption of RCT is that religious organizations can be viewed in terms of "costs" and "rewards." Costs are not only monetary requirements, but are also the time, effort, and commitment demands of any particular religious organization. Rewards are the intangible benefits in terms of belief and satisfactory explanations about life, death, and the supernatural, as well as social rewards from membership. RCT proposes that, in a pluralistic society with many religious options, religious organizations will compete for members, and people will choose between different churches or denominations in much the same way they select other consumer goods, balancing costs and rewards in a rational manner. In this framework, RCT also explains the development and decline of churches, denominations, sects, and even cults; this limited part of the very complex theory is the only aspect well supported by research data. 
Critics of RCT argue that it doesn't fit well with human spiritual needs, and many sociologists disagree that the costs and rewards of religion can even be meaningfully measured or that individuals use a rational balancing process regarding religious affiliation. The theory doesn't address many aspects of religion that individuals may consider essential (such as faith) and further fails to account for agnostics and atheists who don't seem to have a similar need for religious explanations. Critics also believe this theory overuses economic terminology and structure and point out that terms such as "rational" and "reward" are unacceptably defined by their use; they would argue that the theory is based on faulty logic and lacks external, empirical support. A scientific explanation for why something occurs can't reasonably be supported by the fact that it does occur. RCT is widely used in economics and to a lesser extent in criminal justice, but the application of RCT in explaining the religious beliefs and behaviors of people and societies is still being debated in sociology today.

Symbolic Interactionism

Rising from the concept that our world is socially constructed, symbolic interactionism studies the symbols and interactions of everyday life. To interactionists, beliefs and experiences are not sacred unless individuals in a society regard them as sacred. The Star of David in Judaism, the cross in Christianity, and the crescent and star in Islam are examples of sacred symbols. Interactionists are interested in what these symbols communicate. Because interactionists study one-on-one, everyday interactions between individuals, a scholar using this approach might ask questions focused on this dynamic. The interaction between religious leaders and practitioners, the role of religion in the ordinary components of everyday life, and the ways people express religious values in social interactions—all might be topics of study to an interactionist.

religion in the united states

In examining the state of religion in the United States today, we see the complexity of religious life in our society, plus emerging trends like the rise of the megachurch, secularization, and the role of religion in social change.

Religion and Social Change

Religion has historically been an impetus to social change. The translation of sacred texts into everyday, nonscholarly language empowered people to shape their religions. Disagreements between religious groups and instances of religious persecution have led to wars and genocides. The United States is no stranger to religion as an agent of social change. In fact, the country's first European arrivals were acting largely on religious convictions when they were compelled to settle here.

Liberation Theology

Liberation theology began as a movement within the Roman Catholic Church in the 1950s and 1960s in Latin America, and it combines Christian principles with political activism. It uses the church to promote social change via the political arena, and it is most often seen in attempts to reduce or eliminate social injustice, discrimination, and poverty. A list of proponents of this kind of social justice (although some pre-date liberation theology) could include Francis of Assisi, Leo Tolstoy, Martin Luther King Jr., and Desmond Tutu. Although begun as a moral reaction against the poverty caused by social injustice in that part of the world, today liberation theology is an international movement that encompasses many churches and denominations. Liberation theologians discuss theology from the point of view of the poor and the oppressed, and some interpret the scriptures as a call to action against poverty and injustice. In Europe and North America, feminist theology has emerged from liberation theology as a movement to bring social justice to women. 
SOCIAL POLICY AND DEBATE
Religious Leaders and the Rainbow of Gay Pride

What happens when a religious leader officiates a gay marriage against denomination policies? What about when that same minister defends the action in part by coming out and making her own lesbian relationship known to the church? In the case of the Reverend Amy DeLong, it meant a church trial. Some leaders in her denomination assert that homosexuality is incompatible with their faith, while others feel this type of discrimination has no place in a modern church (Barrick 2011). As the LGBT community increasingly advocates for, and earns, basic civil rights, how will religious communities respond? Many religious groups have traditionally discounted LGBT sexualities as "wrong." However, these organizations have moved closer to respecting human rights by, for example, increasingly recognizing females as an equal gender. The Roman Catholic Church drew controversial attention to this issue in 2010 when the Vatican secretary of state suggested homosexuality was in part to blame for pedophilic sexual abuse scandals that have plagued the church (Beck 2010). Because numerous studies have shown there to be no relationship between homosexuality and pedophilia, nor a higher incidence of pedophilia among homosexuals than among heterosexuals (Beck 2010), the Vatican's comments seem suspect. More recently Pope Francis has been pushing for a more open church, and some Catholic bishops have been advocating for a more "gay-friendly" church (McKenna 2014). This has not come to pass, but some scholars believe these changes are a matter of time. No matter the situation, most religions have a tenuous (at best) relationship with practitioners and leaders in the gay community. As one of the earliest Christian denominations to break barriers by ordaining women to serve as pastors, will Amy DeLong's United Methodist denomination also be a leader in LGBT rights within Christian churchgoing society? 
Megachurches

A megachurch is a Christian church that has a very large congregation averaging more than 2,000 people who attend regular weekly services. As of 2009, the largest megachurch in the United States was in Houston, Texas, boasting an average weekly attendance of more than 43,000 (Bogan 2009). Megachurches exist in other parts of the world, especially in South Korea, Brazil, and several African countries, but the rise of the megachurch in the United States is a fairly recent phenomenon that has developed primarily in California, Florida, Georgia, and Texas. Since 1970 the number of megachurches in this country has grown from about fifty to more than 1,000, most of which are attached to the Southern Baptist denomination (Bogan 2009). Approximately six million people are members of these churches (Bird and Thumma 2011). The architecture of these church buildings often resembles a sports or concert arena. The church may include jumbotrons (large-screen televisual technology usually used in sports arenas to show close-up shots of an event). Worship services feature contemporary music with drums and electric guitars and use state-of-the-art sound equipment. The buildings sometimes include food courts, sports and recreation facilities, and bookstores. Services such as child care and mental health counseling are often offered. Typically, a single, highly charismatic pastor leads the megachurch; at present, most are male. Some megachurches and their preachers have a huge television presence, and viewers all around the country watch and respond to their shows and fundraising. Besides size, U.S. megachurches share other traits, including conservative theology, evangelism, use of technology and social networking (Facebook, Twitter, podcasts, blogs), hugely charismatic leaders, few financial struggles, multiple sites, and predominantly white membership. 
They list their main focuses as youth activities, community service, and study of the Scripture (Hartford Institute for Religion Research b). Critics of megachurches believe they are too large to promote close relationships among fellow church members or the pastor, as could occur in smaller houses of worship. Supporters note that, in addition to the large worship services, congregations generally meet in small groups, and some megachurches have informal events throughout the week to allow for community-building (Hartford Institute for Religion Research a).

Secularization

Historical sociologists Émile Durkheim, Max Weber, and Karl Marx and psychoanalyst Sigmund Freud anticipated secularization and claimed that the modernization of society would bring about a decrease in the influence of religion. Weber believed membership in distinguished clubs would outpace membership in Protestant sects as a way for people to gain authority or respect. Conversely, some people suggest secularization is a root cause of many social problems, such as divorce, drug use, and educational downturn. One-time presidential contender Michele Bachmann even linked Hurricane Irene and the 2011 earthquake felt in Washington, D.C. to politicians' failure to listen to God (Ward 2011). While some scholars see the United States becoming increasingly secular, others observe a rise in fundamentalism. Compared to other democratic, industrialized countries, the United States is generally perceived to be a fairly religious nation. Whereas 65 percent of U.S. adults in a 2009 Gallup survey said religion was an important part of their daily lives, the numbers were lower in Spain (49 percent), Canada (42 percent), France (30 percent), the United Kingdom (27 percent), and Sweden (17 percent) (Crabtree and Pelham 2009). Secularization interests social observers because it entails a pattern of change in a fundamental social institution. 
SOCIOLOGY IN THE REAL WORLD Thank God for that Touchdown: Separation of Church and State Imagine three public universities with football games scheduled on Saturday. At University A, a group of students in the stands who share the same faith decide to form a circle amid the spectators to pray for the team. For fifteen minutes, people in the circle share their prayers aloud among their group. At University B, the team ahead at halftime decides to join together in prayer, giving thanks and seeking support from God. This lasts for the first ten minutes of halftime on the sidelines of the field while spectators watch. At University C, the game program includes, among its opening moments, two minutes set aside for the team captain to share a prayer of his choosing with the spectators. In the tricky area of separation of church and state, which of these actions is allowed and which is forbidden? In our three fictional scenarios, the last example is against the law while the first two situations are perfectly acceptable. In the United States, a nation founded on the principles of religious freedom (many settlers were escaping religious persecution in Europe), how stringently do we adhere to this ideal? How well do we respect people's right to practice any belief system of their choosing? The answer just might depend on what religion you practice. In 2003, for example, a lawsuit escalated in Alabama regarding a monument to the Ten Commandments in a public building. In response, a poll was conducted by USA Today, CNN, and Gallup. Among the findings: 70 percent of people approved of a Christian Ten Commandments monument in public, while only 33 percent approved of a monument to the Islamic Qur'an in the same space. Similarly, survey respondents showed a 64 percent approval of social programs run by Christian organizations, but only 41 percent approved of the same programs run by Muslim groups (Newport 2003). 
These statistics suggest that, for most people in the United States, freedom of religion is less important than the religion under discussion. And this is precisely the point made by those who argue for separation of church and state. According to their contention, any state-sanctioned recognition of religion suggests endorsement of one belief system at the expense of all others—contradictory to the idea of freedom of religion. So what violates separation of church and state and what is acceptable? Myriad lawsuits continue to test the answer. In the case of the three fictional examples above, the issue of spontaneity is key, as is the existence (or lack thereof) of planning on the part of event organizers. The next time you're at a state event—political, public school, community—and the topic of religion comes up, consider where it falls in this debate.

world religions

Figure 15.4 The symbols of fourteen religions are depicted here. In no particular order, they represent Judaism, Wicca, Taoism, Christianity, Confucianism, Baha'i, Druidism, Islam, Hinduism, Zoroastrianism, Shinto, Jainism, Sikhism, and Buddhism. Can you match the symbol to the religion? What might a symbolic interactionist make of these symbols? (Photo courtesy of ReligiousTolerance.org) The major religions of the world (Hinduism, Buddhism, Islam, Confucianism, Christianity, Taoism, and Judaism) differ in many respects, including how each religion is organized and the belief system each upholds. Other differences include the nature of belief in a higher power, the history of how the world and the religion began, and the use of sacred texts and objects. Types of Religious Organizations Religions organize themselves—their institutions, practitioners, and structures—in a variety of fashions. For instance, when the Roman Catholic Church emerged, it borrowed many of its organizational principles from the ancient Roman military, turning senators into cardinals, for example. Sociologists use different terms, like ecclesia, denomination, and sect, to define these types of organizations. Scholars are also aware that these definitions are not static. Most religions transition through different organizational phases. For example, Christianity began as a cult, transformed into a sect, and today exists as an ecclesia. Cults, like sects, are new religious groups. In the United States today this term often carries pejorative connotations. However, almost all religions began as cults and gradually progressed to levels of greater size and organization. The term cult is sometimes used interchangeably with the term new religious movement (NRM). When the term is used pejoratively, these groups are disparaged as being secretive, highly controlling of members' lives, and dominated by a single, charismatic leader.
Controversy exists over whether some groups are cults, perhaps due in part to media sensationalism over groups like polygamous Mormons or the Peoples Temple followers who died at Jonestown, Guyana. Some groups that are controversially labeled as cults today include the Church of Scientology and the Hare Krishna movement. A sect is a small and relatively new group. Most of the well-known Christian denominations in the United States today began as sects. For example, the Methodists and Baptists protested against their parent Anglican Church in England, just as Henry VIII protested against the Catholic Church by forming the Anglican Church. From "protest" comes the term Protestant. Occasionally, a sect is a breakaway group that may be in tension with larger society. They sometimes claim to be returning to "the fundamentals" or to contest the veracity of a particular doctrine. When membership in a sect increases over time, it may grow into a denomination. Often a sect begins as an offshoot of a denomination, when a group of members believes they should separate from the larger group. Some sects do not grow into denominations. Sociologists call these established sects. Established sects, such as the Amish or Jehovah's Witnesses, fall halfway between sect and denomination on the ecclesia-cult continuum because they have a mixture of sect-like and denomination-like characteristics. A denomination is a large, mainstream religious organization, but it does not claim to be official or state sponsored. It is one religion among many. For example, Baptist, African Methodist Episcopal, Catholic, and Seventh-day Adventist are all Christian denominations. The term ecclesia, originally referring to a political assembly of citizens in ancient Athens, Greece, now refers to a congregation. In sociology, the term is used to refer to a religious group that almost all members of a society belong to.
It is considered a nationally recognized, or official, religion that holds a religious monopoly and is closely allied with state and secular powers. The United States does not have an ecclesia by this standard; in fact, this is the type of religious organization that many of the first colonists came to America to escape. Figure 15.5 How might you classify the Mennonites? As a cult, a sect, or a denomination? (Photo courtesy of Frenkieb/flickr) One way to remember these religious organizational terms is to think of cults, sects, denominations, and ecclesia representing a continuum, with increasing influence on society, where cults are least influential and ecclesia are most influential. Types of Religions Scholars from a variety of disciplines have strived to classify religions. One widely accepted categorization that helps people understand different belief systems considers what or who people worship (if anything). Using this method of classification, religions might fall into one of these basic categories, as shown in Table 15.1.

Religious Classification | What/Who Is Divine | Example
Polytheism | Multiple gods | Belief systems of the ancient Greeks and Romans
Monotheism | Single god | Judaism, Islam
Atheism | No deities | Atheism
Animism | Nonhuman beings (animals, plants, natural world) | Indigenous nature worship (Shinto)
Totemism | Human-natural being connection | Ojibwa (Native American) beliefs

Table 15.1 One way scholars have categorized religions is by classifying what or who they hold to be divine. Note that some religions may be practiced—or understood—in various categories. For instance, to some scholars the Christian notion of the Holy Trinity (God, Jesus, Holy Spirit) defies the definition of monotheism, which is a religion based on belief in a single deity.
Similarly, many Westerners view the multiple manifestations of Hinduism's godhead as polytheistic, which is a religion based on belief in multiple deities, while Hindus might describe those manifestations as a monotheistic parallel to the Christian Trinity. Some Japanese practice Shinto, which follows animism, a religion based on belief in the divinity of nonhuman beings, like animals, plants, and objects of the natural world, while people who practice totemism believe in a divine connection between humans and other natural beings. It is also important to note that every society has nonbelievers, such as atheists, who do not believe in a divine being or entity, and agnostics, who hold that ultimate reality (such as God) is unknowable. While typically not an organized group, atheists and agnostics represent a significant portion of the population. It is important to recognize that being a nonbeliever in a divine entity does not mean the individual subscribes to no morality. Indeed, many Nobel Peace Prize winners and other great humanitarians over the centuries would have classified themselves as atheists or agnostics. The World's Religions Religions have emerged and developed across the world. Some have been short-lived, while others have persisted and grown. In this section, we will explore seven of the world's major religions. Hinduism The oldest religion in the world, Hinduism originated in the Indus River Valley about 4,500 years ago in what is now modern-day northwest India and Pakistan. It arose contemporaneously with ancient Egyptian and Mesopotamian cultures. With roughly one billion followers, Hinduism is the third-largest of the world's religions. Hindus believe in a divine power that can manifest as different entities. Three main incarnations—Brahma, Vishnu, and Shiva—are sometimes compared to the manifestations of the divine in the Christian Trinity.
Multiple sacred texts, collectively called the Vedas, contain hymns and rituals from ancient India and are mostly written in Sanskrit. Hindus generally believe in a set of principles called dharma, which refer to one's duty in the world that corresponds with "right" actions. Hindus also believe in karma, or the notion that spiritual ramifications of one's actions are balanced cyclically in this life or a future life (reincarnation). Figure 15.6 Hindu women sometimes apply decorations of henna dye to their hands for special occasions such as weddings and religious festivals. (Photo courtesy of Akash Mazumdar) Figure 15.7 Buddhism promotes peace and tolerance. The 14th Dalai Lama (Tenzin Gyatso) is one of the most revered and influential Tibetan Buddhist leaders. (Photo courtesy of Nancy Pelosi/flickr) Buddhism Buddhism was founded by Siddhartha Gautama around 500 B.C.E. Siddhartha was said to have given up a comfortable, upper-class life to follow one of poverty and spiritual devotion. At the age of thirty-five, he famously meditated under a sacred fig tree and vowed not to rise before he achieved enlightenment (bodhi). After this experience, he became known as Buddha, or "enlightened one." Followers were drawn to Buddha's teachings and the practice of meditation, and he later established a monastic order. Figure 15.8 Meditation is an important practice in Buddhism. A Tibetan monk is shown here engaged in solitary meditation. (Photo courtesy of Prince Roy/flickr) Buddha's teachings encourage Buddhists to lead a moral life by accepting the four Noble Truths: 1) life is suffering, 2) suffering arises from attachment to desires, 3) suffering ceases when attachment to desires ceases, and 4) freedom from suffering is possible by following the "middle way." The concept of the "middle way" is central to Buddhist thinking, which encourages people to live in the present and to practice acceptance of others (Smith 1991). 
Buddhism also tends to deemphasize the role of a godhead, instead stressing the importance of personal responsibility (Craig 2002). Confucianism Confucianism was the official religion of China from 200 B.C.E. until it was officially abolished when communist leadership discouraged religious practice in 1949. The religion was developed by Kung Fu-Tzu (Confucius), who lived in the sixth and fifth centuries B.C.E. An extraordinary teacher, Kung Fu-Tzu gave lessons—about self-discipline, respect for authority and tradition, and jen (the kind treatment of every person)—that were collected in a book called the Analects. Some religious scholars consider Confucianism more of a social system than a religion because it focuses on sharing wisdom about moral practices but doesn't involve any type of specific worship; nor does it have formal objects. In fact, its teachings were developed in the context of problems of social anarchy and a near-complete deterioration of social cohesion. Dissatisfied with the social solutions put forth, Kung Fu-Tzu developed his own model of religious morality to help guide society (Smith 1991). Taoism In Taoism, the purpose of life is inner peace and harmony. Tao is usually translated as "way" or "path." The founder of the religion is generally recognized to be a man named Laozi, who lived sometime in the sixth century B.C.E. in China. Taoist beliefs emphasize the virtues of compassion and moderation. The central concept of tao can be understood to describe a spiritual reality, the order of the universe, or the way of modern life in harmony with the former two. The yin-yang symbol and the concept of polar forces are central Taoist ideas (Smith 1991). Some scholars have compared this Chinese tradition to its Confucian counterpart by saying that "whereas Confucianism is concerned with day-to-day rules of conduct, Taoism is concerned with a more spiritual level of being" (Feng and English 1972).
Judaism After their Exodus from Egypt in the thirteenth century B.C.E., Jews, a nomadic society, became monotheistic, worshipping only one God. The Jews' covenant, or promise of a special relationship with Yahweh (God), is an important element of Judaism, and their sacred text is the Torah, which Christians also follow as the first five books of the Bible. The Talmud refers to a collection of sacred Jewish oral interpretation of the Torah. Jews emphasize moral behavior and action in this world as opposed to beliefs or personal salvation in the next world. Figure 15.9 The Islamic house of worship is called a mosque. (Photo courtesy of David Stanley/flickr) Islam Islam is a monotheistic religion that follows the teachings of the prophet Muhammad, born in Mecca, Saudi Arabia, in 570 C.E. Muhammad is seen only as a prophet, not as a divine being, and he is believed to be the messenger of Allah (God), who is divine. The followers of Islam, whose U.S. population is projected to double in the next twenty years (Pew Research Forum 2011), are called Muslims. Islam means "peace" and "submission." The sacred text for Muslims is the Qur'an (or Koran). As with Christianity's Old Testament, many of the Qur'an stories are shared with the Jewish faith. Divisions exist within Islam, but all Muslims are guided by five beliefs or practices, often called "pillars": 1) Allah is the only god, and Muhammad is his prophet, 2) daily prayer, 3) helping those in poverty, 4) fasting as a spiritual practice, and 5) pilgrimage to the holy center of Mecca. Figure 15.10 One of the cornerstones of Muslim practice is journeying to the religion's most sacred place, Mecca. (Photo courtesy of Raeky/flickr) Christianity Today the largest religion in the world, Christianity began 2,000 years ago in Palestine, with Jesus of Nazareth, a charismatic leader who taught his followers about caritas (charity), or treating others as you would like to be treated yourself. The sacred text for Christians is the Bible.
While Jews, Christians, and Muslims share many of the same historical religious stories, their beliefs diverge. In their shared sacred stories, it is suggested that the son of God—a messiah—will return to save God's followers. While Christians believe that he already appeared in the person of Jesus Christ, Jews and Muslims disagree. While they recognize Christ as an important historical figure, their traditions don't believe he's the son of God, and their faiths see the prophecy of the messiah's arrival as not yet fulfilled. Different Christian groups have variations among their sacred texts. For instance, The Church of Jesus Christ of Latter-day Saints, an established Christian sect, also uses the Book of Mormon, which they believe details other parts of Christian doctrine and Jesus' life that aren't included in the Bible. Similarly, the Catholic Bible includes the Apocrypha, a collection that, while part of the 1611 King James translation, is no longer included in Protestant versions of the Bible. Although monotheistic, Christians often describe their god through three manifestations that they call the Holy Trinity: the father (God), the son (Jesus), and the Holy Spirit. The Holy Spirit is a term Christians often use to describe religious experience, or how they feel the presence of the sacred in their lives. One foundation of Christian doctrine is the Ten Commandments, which decry acts considered sinful, including theft, murder, and adultery.

race and ethnicity in the united states

When colonists came to the New World, they found a land that did not need "discovering" since it was already occupied. While the first wave of immigrants came from Western Europe, eventually the bulk of people entering North America were from Northern Europe, then Eastern Europe, then Latin America and Asia. And let us not forget the forced immigration of African slaves. Most of these groups underwent a period of disenfranchisement in which they were relegated to the bottom of the social hierarchy before they managed (for those who could) to achieve social mobility. Today, our society is multicultural, although the extent to which this multiculturality is embraced varies, and the many manifestations of multiculturalism carry significant political repercussions. The sections below will describe how several groups became part of U.S. society, discuss the history of intergroup relations for each faction, and assess each group's status today. Native Americans The only nonimmigrant ethnic group in the United States, Native Americans once numbered in the millions but by 2010 made up only 0.9 percent of the U.S. populace (U.S. Census 2010). Currently, about 2.9 million people identify themselves as Native American alone, while an additional 2.3 million identify themselves as Native American mixed with another ethnic group (Norris, Vines, and Hoeffel 2012). SOCIOLOGY IN THE REAL WORLD Sports Teams with Native American Names Figure 11.6 Many Native Americans (and others) believe sports teams with names like the Indians, Braves, and Warriors perpetuate unwelcome stereotypes. (Photo (a) courtesy of public domain/Wikimedia Commons; Photo (b) courtesy of Chris Brown/flickr) The sports world abounds with team names like the Indians, the Warriors, the Braves, and even the Savages and Redskins.
These names arise from historically prejudiced views of Native Americans as fierce, brave, and strong savages: attributes that would be beneficial to a sports team, but are not necessarily beneficial to people in the United States who should be seen as more than just fierce savages. Since the civil rights movement of the 1960s, the National Congress of American Indians (NCAI) has been campaigning against the use of such mascots, asserting that the "warrior savage myth . . . reinforces the racist view that Indians are uncivilized and uneducated and it has been used to justify policies of forced assimilation and destruction of Indian culture" (NCAI Resolution #TUL-05-087 2005). The campaign has met with only limited success. While some teams have changed their names, hundreds of professional, college, and K-12 school teams still have names derived from this stereotype. Another group, American Indian Cultural Support (AICS), is especially concerned with the use of such names at K-12 schools, influencing children when they should be gaining a fuller and more realistic understanding of Native Americans than such stereotypes supply. What do you think about such names? Should they be allowed or banned? What argument would a symbolic interactionist make on this topic? How and Why They Came The earliest immigrants to America arrived millennia before European immigrants. Dates of the migration are debated, with estimates ranging from 45,000 to 12,000 BCE. It is thought that early Indians migrated to this new land in search of big game to hunt, which they found in huge herds of grazing herbivores in the Americas. Over the centuries and then the millennia, Native American culture blossomed into an intricate web of hundreds of interconnected tribes, each with its own customs, traditions, languages, and religions.
History of Intergroup Relations Native American culture prior to European settlement is referred to as Pre-Columbian: that is, prior to the coming of Christopher Columbus in 1492. Mistakenly believing that he had landed in the East Indies, Columbus named the indigenous people "Indians," a name that has persisted for centuries despite being a geographical misnomer and one used to blanket 500 distinct groups who each have their own languages and traditions. The history of intergroup relations between European colonists and Native Americans is a brutal one. As discussed in the section on genocide, the effect of European settlement of the Americas was to nearly destroy the indigenous population. And although Native Americans' lack of immunity to European diseases caused the most deaths, overt mistreatment of Native Americans by Europeans was devastating as well. From the first Spanish colonists to the French, English, and Dutch who followed, European settlers took what land they wanted and expanded across the continent at will. If indigenous people tried to retain their stewardship of the land, Europeans fought them off with superior weapons. A key element of this issue is the indigenous view of land and land ownership. Most tribes considered the earth a living entity whose resources they were stewards of; the concepts of land ownership and conquest didn't exist in Native American society. Europeans' domination of the Americas was indeed a conquest; one scholar points out that Native Americans are the only minority group in the United States whose subordination occurred purely through conquest by the dominant group (Marger 1993). After the establishment of the United States government, discrimination against Native Americans was codified and formalized in a series of laws intended to subjugate them and keep them from gaining any power.
Some of the most impactful laws are as follows:
- The Indian Removal Act of 1830 forced the relocation of any native tribes east of the Mississippi River to lands west of the river.
- The Indian Appropriation Acts funded further removals and declared that no Indian tribe could be recognized as an independent nation, tribe, or power with which the U.S. government would have to make treaties. This made it even easier for the U.S. government to take land it wanted.
- The Dawes Act of 1887 reversed the policy of isolating Native Americans on reservations, instead forcing them onto individual properties that were intermingled with white settlers, thereby reducing their capacity for power as a group.
Native American culture was further eroded by the establishment of Indian boarding schools in the late nineteenth century. These schools, run by both Christian missionaries and the United States government, had the express purpose of "civilizing" Native American children and assimilating them into white society. The boarding schools were located off-reservation to ensure that children were separated from their families and culture. Schools forced children to cut their hair, speak English, and practice Christianity. Physical and sexual abuses were rampant for decades; only in 1987 did the Bureau of Indian Affairs issue a policy on sexual abuse in boarding schools. Some scholars argue that many of the problems that Native Americans face today result from almost a century of mistreatment at these boarding schools. Current Status The eradication of Native American culture continued until the 1960s, when Native Americans were able to participate in and benefit from the civil rights movement. The Indian Civil Rights Act of 1968 guaranteed Indian tribes most of the rights of the United States Bill of Rights. New laws like the Indian Self-Determination Act of 1975 and the Education Assistance Act of the same year recognized tribal governments and gave them more power.
Indian boarding schools have dwindled to only a few, and Native American cultural groups are striving to preserve and maintain old traditions to keep them from being lost forever. However, Native Americans (some of whom now wish to be called American Indians so as to avoid the "savage" connotations of the term "native") still suffer the effects of centuries of degradation. Long-term poverty, inadequate education, cultural dislocation, and high rates of unemployment contribute to Native American populations falling to the bottom of the economic spectrum. Native Americans also suffer disproportionately with lower life expectancies than most groups in the United States. African Americans As discussed in the section on race, the term African American can be a misnomer for many individuals. Many people with dark skin may have their more recent roots in Europe or the Caribbean, seeing themselves as Dominican American or Dutch American. Further, actual immigrants from Africa may feel that they have more of a claim to the term African American than those who are many generations removed from ancestors who originally came to this country. This section will focus on the experience of the slaves who were transported from Africa to the United States, and their progeny. Currently, the U.S. Census Bureau (2014) estimates that 13.2 percent of the United States' population is black. How and Why They Came If Native Americans are the only minority group whose subordinate status occurred by conquest, African Americans are the exemplar minority group in the United States whose ancestors did not come here by choice. A Dutch sea captain brought the first Africans to the Virginia colony of Jamestown in 1619 and sold them as indentured servants. This was not an uncommon practice for either blacks or whites, and indentured servants were in high demand. For the next century, black and white indentured servants worked side by side.
But the growing agricultural economy demanded greater and cheaper labor, and by 1705, Virginia passed the slave codes declaring that any foreign-born non-Christian could be a slave, and that slaves were considered property. The next 150 years saw the rise of U.S. slavery, with black Africans being kidnapped from their own lands and shipped to the New World on the trans-Atlantic journey known as the Middle Passage. Once in the Americas, the black population grew until U.S.-born blacks outnumbered those born in Africa. But colonial (and later, U.S.) slave codes declared that the child of a slave was a slave, so the slave class was created. By 1808, the slave trade was internal in the United States, with slaves being bought and sold across state lines like livestock. In 1808, during Thomas Jefferson's presidency, Congress prohibited the international importation of humans to be used as slaves. History of Intergroup Relations There is no starker illustration of the dominant-subordinate group relationship than that of slavery. In order to justify their severely discriminatory behavior, slaveholders and their supporters had to view blacks as innately inferior. Slaves were denied even the most basic rights of citizenship, a crucial factor for slaveholders and their supporters. Slavery poses an excellent example of conflict theory's perspective on race relations; the dominant group needed complete control over the subordinate group in order to maintain its power. Whippings, executions, rapes, denial of schooling and health care were all permissible and widely practiced. Slavery eventually became an issue over which the nation divided into geographically and ideologically distinct factions, leading to the Civil War. And while the abolition of slavery on moral grounds was certainly a catalyst to war, it was not the only driving force. Students of U.S. 
history will know that the institution of slavery was crucial to the Southern economy, whose production of crops like rice, cotton, and tobacco relied on the virtually limitless and cheap labor that slavery provided. In contrast, the North didn't benefit economically from slavery, resulting in an economic disparity tied to racial/political issues. A century later, the civil rights movement was characterized by boycotts, marches, sit-ins, and freedom rides: demonstrations by a subordinate group that would no longer willingly submit to domination. The major blow to America's formally institutionalized racism was the Civil Rights Act of 1964. This Act, which is still followed today, banned discrimination based on race, color, religion, sex, or national origin. Some sociologists, however, would argue that institutionalized racism persists. Current Status Although government-sponsored, formalized discrimination against African Americans has been outlawed, true equality does not yet exist. The National Urban League's 2011 Equality Index reports that blacks' overall equality level with whites has dropped in the past year, from 71.5 percent to 71.1 percent in 2010. The Index, which has been published since 2005, notes a growing trend of increased inequality with whites, especially in the areas of unemployment, insurance coverage, and incarceration. Blacks also trail whites considerably in the areas of economics, health, and education. To what degree do racism and prejudice contribute to this continued inequality? The answer is complex. 2008 saw the election of this country's first African American president: Barack Hussein Obama. 
Although he is popularly identified as black, we should note that President Obama is of a mixed background that is equally white, and although all presidents have been publicly mocked at times (Gerald Ford was depicted as a klutz, Bill Clinton as someone who could not control his libido), a startling percentage of the critiques of Obama have been based on his race. The most blatant of these was the controversy over his birth certificate, where the "birther" movement questioned his citizenship and right to hold office. Although blacks have come a long way from slavery, the echoes of centuries of disempowerment are still evident. Asian Americans Like many groups this section discusses, Asian Americans represent a great diversity of cultures and backgrounds. The experience of a Japanese American whose family has been in the United States for three generations will be drastically different from that of a Laotian American who has only been in the United States for a few years. This section primarily discusses Chinese, Japanese, and Vietnamese immigrants and shows the differences between their experiences. The most recent estimate from the U.S. Census Bureau (2014) suggests that about 5.3 percent of the population identify themselves as Asian.
The construction of the Transcontinental Railroad was underway at this time, and the Central Pacific section hired thousands of migrant Chinese men to complete the laying of rails across the rugged Sierra Nevada mountain range. Chinese men also engaged in other manual labor like mining and agricultural work. The work was grueling and underpaid, but like many immigrants, they persevered. Japanese immigration began in the 1880s, on the heels of the Chinese Exclusion Act of 1882. Many Japanese immigrants came to Hawaii to participate in the sugar industry; others came to the mainland, especially to California. Unlike the Chinese, however, the Japanese had a strong government that negotiated with the U.S. government to ensure the well-being of their immigrants. Japanese men were able to bring their wives and families to the United States, and were thus able to produce second- and third-generation Japanese Americans more quickly than their Chinese counterparts. The most recent large-scale Asian immigration came from Korea and Vietnam and largely took place during the second half of the twentieth century. While Korean immigration has been fairly gradual, Vietnamese immigration occurred primarily post-1975, after the fall of Saigon and the establishment of restrictive communist policies in Vietnam. Whereas many Asian immigrants came to the United States to seek better economic opportunities, Vietnamese immigrants came as political refugees, seeking asylum from harsh conditions in their homeland. The Refugee Act of 1980 helped them to find a place to settle in the United States. Figure 11.7 Thirty-five Vietnamese refugees wait to be taken aboard the amphibious USS Blue Ridge (LCC-19). They are being rescued from a thirty-five-foot fishing boat 350 miles northeast of Cam Ranh Bay, Vietnam, after spending eight days at sea. (Photo courtesy of U.S. 
Navy/Wikimedia Commons) History of Intergroup Relations Chinese immigration came to an abrupt end with the Chinese Exclusion Act of 1882. This act was a result of anti-Chinese sentiment that burgeoned amid a depressed economy and loss of jobs. White workers blamed Chinese migrants for taking jobs, and the passage of the Act meant the number of Chinese workers decreased. Chinese men did not have the funds to return to China or to bring their families to the United States, so they remained physically and culturally segregated in the Chinatowns of large cities. Later legislation, the Immigration Act of 1924, further curtailed Chinese immigration. The Act included the race-based National Origins Act, which was aimed at keeping U.S. ethnic stock as undiluted as possible by reducing "undesirable" immigrants. It was not until after the Immigration and Nationality Act of 1965 that Chinese immigration again increased, and many Chinese families were reunited. Although Japanese Americans have deep, long-reaching roots in the United States, their history here has not always been smooth. The California Alien Land Law of 1913 was aimed at them and other Asian immigrants, and it prohibited aliens from owning land. An even uglier action was the Japanese internment camps of World War II, discussed earlier as an illustration of expulsion. Current Status Asian Americans certainly have been subject to their share of racial prejudice, despite the seemingly positive stereotype as the model minority. The model minority stereotype is applied to a minority group that is seen as reaching significant educational, professional, and socioeconomic levels without challenging the existing establishment. This stereotype is typically applied to Asian groups in the United States, and it can result in unrealistic expectations by putting a stigma on members of this group who do not meet them. 
Stereotyping all Asians as smart and capable can also lead to a lack of much-needed government assistance and to educational and professional discrimination. Hispanic Americans Hispanic Americans have a wide range of backgrounds and nationalities. The segment of the U.S. population that self-identified as Hispanic in 2013 was recently estimated at 17.1 percent of the total (U.S. Census Bureau 2014). According to the 2010 U.S. Census, about 75 percent of the respondents who identify as Hispanic report being of Mexican, Puerto Rican, or Cuban origin. Of the total Hispanic group, 60 percent reported as Mexican, 9 percent as Puerto Rican, and 4 percent as Cuban. Remember that the U.S. Census allows people to report as being more than one ethnicity. Not only are there wide differences among the different origins that make up the Hispanic American population, but there are also different names for the group itself. The 2010 U.S. Census states that "Hispanic" or "Latino" refers to "a person of Cuban, Mexican, Puerto Rican, South or Central American, or other Spanish culture or origin regardless of race." There has been some disagreement over whether Hispanic or Latino is the correct term for a group this diverse, and whether it would be better for people to refer to themselves by their specific origin, for example, Mexican American or Dominican American. This section will compare the experiences of Mexican Americans and Cuban Americans. How and Why They Came Mexican Americans form the largest Hispanic subgroup and also the oldest. Mexican migration to the United States started in the early 1900s in response to the need for cheap agricultural labor. Mexican migration was often circular; workers would stay for a few years and then go back to Mexico with more money than they could have made in their country of origin. The length of Mexico's shared border with the United States has made immigration easier than for many other immigrant groups. 
Cuban Americans are the second-largest Hispanic subgroup, and their history is quite different from that of Mexican Americans. The main wave of Cuban immigration to the United States started after Fidel Castro came to power in 1959 and reached its crest with the Mariel boatlift in 1980. Castro's Cuban Revolution ushered in an era of communism that continues to this day. To avoid having their assets seized by the government, many wealthy and educated Cubans migrated north, generally to the Miami area. History of Intergroup Relations For several decades, Mexican workers crossed the long border into the United States, both legally and illegally, to work in the fields that provided produce for the developing United States. Western growers needed a steady supply of labor, and the 1940s and 1950s saw the official federal Bracero Program (bracero is Spanish for strong-arm) that offered protection to Mexican guest workers. Interestingly, 1954 also saw the enactment of "Operation Wetback," which deported thousands of illegal Mexican workers. From these examples, we can see that the U.S. treatment of immigration from Mexico has been ambivalent at best. Sociologist Douglas Massey (2006) suggests that although the average standard of living in Mexico may be lower than in the United States, it is not so low as to make permanent migration the goal of most Mexicans. However, the strengthening of the border that began with 1986's Immigration Reform and Control Act has made one-way migration the rule for most Mexicans. Massey argues that the rise of illegal one-way immigration of Mexicans is a direct outcome of the law that was intended to reduce it. Cuban Americans, perhaps because of their relative wealth and education level at the time of immigration, have fared better than many immigrants. Further, because they were fleeing a Communist country, they were given refugee status and offered protection and social services. 
The Cuban Migration Agreement of 1995 has curtailed legal immigration from Cuba, leading many Cubans to try to immigrate illegally by boat. According to a 2009 report from the Congressional Research Service, the U.S. government applies a "wet foot/dry foot" policy toward Cuban immigrants; Cubans who are intercepted while still at sea will be returned to Cuba, while those who reach the shore will be permitted to stay in the United States. Current Status Mexican Americans, especially those who are here illegally, are at the center of a national debate about immigration. Myers (2007) observes that no other minority group (except the Chinese) has immigrated to the United States in such an environment of illegality. He notes that in some years, three times as many Mexican immigrants may have entered the United States illegally as those who arrived legally. It should be noted that this is due to the enormous disparity of economic opportunity on the two sides of an open border, not to any inherent inclination to break laws. In his report, "Measuring Immigrant Assimilation in the United States," Jacob Vigdor (2008) states that Mexican immigrants experience relatively low rates of economic and civic assimilation. He further suggests that "the slow rates of economic and civic assimilation set Mexicans apart from other immigrants, and may reflect the fact that the large numbers of Mexican immigrants residing in the United States illegally have few opportunities to advance themselves along these dimensions." By contrast, Cuban Americans are often seen as a model minority group within the larger Hispanic group. Many Cubans had higher socioeconomic status when they arrived in this country, and their anti-Communist agenda has made them welcome refugees to this country. In south Florida, especially, Cuban Americans are active in local politics and professional life. 
As with Asian Americans, however, being a model minority can mask the issue of powerlessness that these minority groups face in U.S. society. SOCIAL POLICY AND DEBATE Arizona's Senate Bill 1070 Figure 11.8 Protesters in Arizona dispute the harsh new anti-immigration law. (Photo courtesy of rprathap/flickr) Because they include both legal and illegal immigrants and have high population numbers, Mexican Americans are often the target of stereotyping, racism, and discrimination. A harsh example of this is in Arizona, where a stringent immigration law—known as SB 1070 (for Senate Bill 1070)—has caused a nationwide controversy. The law requires that during a lawful stop, detention, or arrest, Arizona police officers must establish the immigration status of anyone they suspect may be here illegally. The law makes it a crime for individuals to fail to have documents confirming their legal status, and it gives police officers the right to detain people they suspect may be in the country illegally. To many, the most troublesome aspect of this law is the latitude it affords police officers in terms of whose citizenship they may question. Having "reasonable suspicion that the person is an alien who is unlawfully present in the United States" is reason enough to demand immigration papers (Senate Bill 1070 2010). Critics say this law will encourage racial profiling (the illegal practice of law enforcement using race as a basis for suspecting someone of a crime), making it hazardous to be caught "Driving While Brown," a takeoff on the legal term Driving While Intoxicated (DWI) and the slang phrase "Driving While Black." Driving While Brown refers to the likelihood of getting pulled over just for being nonwhite. SB 1070 has been the subject of many lawsuits, from parties as diverse as Arizona police officers, the American Civil Liberties Union, and even the federal government, which is suing on the basis of Arizona contradicting federal immigration laws (ACLU 2011). 
The future of SB 1070 is uncertain, but many other states have tried or are trying to pass similar measures. Do you think such measures are appropriate? Arab Americans If ever a category was hard to define, the various groups lumped under the name "Arab American" is it. After all, Hispanic Americans and Asian Americans are so designated because of their countries of origin. But for Arab Americans, their country of origin—Arabia—has not existed for centuries. In addition, Arab Americans represent all religious practices, despite the stereotype that all Arabic people practice Islam. As Myers (2007) asserts, not all Arabs are Muslim, and not all Muslims are Arab, complicating the stereotype of what it means to be an Arab American. Geographically, the Arab region comprises the Middle East and parts of northern Africa. People whose ancestry lies in that area or who speak primarily Arabic may consider themselves Arabs. The U.S. Census has struggled with the issue of Arab identity. The 2010 Census, as in previous years, did not offer an "Arab" box to check under the question of race. Individuals who wanted to be counted as Arabs had to check the box for "Some other race" and then write in their race. However, when the Census data are tallied, they are counted as white, which is problematic because it denies Arab Americans opportunities for federal assistance. According to the best estimates of the U.S. Census Bureau, the Arabic population in the United States grew from 850,000 in 1990 to 1.2 million in 2000, an increase of about 40 percent (Asi and Beaulieu 2013). Why They Came The first Arab immigrants came to this country in the late nineteenth and early twentieth centuries. They were predominantly Syrian, Lebanese, and Jordanian Christians, and they came to escape persecution and to make a better life. 
These early immigrants and their descendants, who were more likely to think of themselves as Syrian or Lebanese than Arab, represent almost half of the Arab American population today (Myers 2007). Restrictive immigration policies from the 1920s until 1965 curtailed all immigration, but Arab immigration since 1965 has been steady. Immigrants from this time period have been more likely to be Muslim and more highly educated, escaping political unrest and looking for better opportunities. History of Intergroup Relations Relations between Arab Americans and the dominant majority have been marked by mistrust, misinformation, and deeply entrenched beliefs. Helen Samhan of the Arab American Institute suggests that Arab-Israeli conflicts in the 1970s contributed significantly to cultural and political anti-Arab sentiment in the United States (2001). The United States has historically supported the State of Israel, while some Middle Eastern countries deny the existence of the Israeli state. Disputes over these issues have involved Egypt, Syria, Iraq, Jordan, Lebanon, and Palestine. As is often the case with stereotyping and prejudice, the actions of extremists come to define the entire group, regardless of the fact that most U.S. citizens with ties to the Middle Eastern community condemn terrorist actions, as do most inhabitants of the Middle East. Would it be fair to judge all Catholics by the events of the Inquisition? Of course, the United States was deeply affected by the events of September 11, 2001. This event has left a deep scar on the American psyche, and it has fortified anti-Arab sentiment for a large percentage of Americans. In the first month after 9/11, hundreds of hate crimes were perpetrated against people who looked like they might be of Arab descent. Figure 11.9 The proposed Park51 Muslim Community Center generated heated controversy due to its close proximity to Ground Zero. 
In these photos, people march in protest against the center, while counter-protesters demonstrate their support. (Photos (a) and (b) courtesy of David Shankbone/Wikimedia Commons) Current Status Although the rate of hate crimes against Arab Americans has slowed, Arab Americans are still victims of racism and prejudice. Racial profiling has proceeded against Arab Americans as a matter of course since 9/11. Particularly when engaged in air travel, being young and Arab-looking is enough to warrant a special search or detainment. This Islamophobia (irrational fear of or hatred toward Muslims) does not show signs of abating. Scholars have noted that white domestic terrorists like Timothy McVeigh, who detonated a bomb at the Alfred P. Murrah Federal Building in Oklahoma City in 1995, have not inspired similar racial profiling or hate crimes against whites. White Ethnic Americans As we have seen, there is no minority group that fits easily in a category or that can be described simply. While sociologists believe that individual experiences can often be understood in light of social characteristics (such as race, class, or gender), we must balance this perspective with the awareness that no two individuals' experiences are alike. Making generalizations can lead to stereotypes and prejudice. The same is true for white ethnic Americans, who come from diverse backgrounds and have had a great variety of experiences. According to the U.S. Census Bureau (2014), 77.7 percent of U.S. adults currently identify themselves as white alone. In this section, we will focus on German, Irish, Italian, and Eastern European immigrants. Why They Came White ethnic Europeans formed the second and third great waves of immigration, from the early nineteenth century to the mid-twentieth century. They joined a newly minted United States that was primarily made up of white Protestants from England. While most immigrants came searching for a better life, their experiences were not all the same. 
The first major influx of European immigrants came from Germany and Ireland, starting in the 1820s. Germans came both for economic opportunity and to escape political unrest and military conscription, especially after the Revolutions of 1848. Many German immigrants of this period were political refugees: liberals who wanted to escape from an oppressive government. They were well-off enough to make their way inland, and they formed heavily German enclaves in the Midwest that exist to this day. The Irish immigrants of the same time period were not always as well off financially, especially after the Irish Potato Famine that began in 1845. Irish immigrants settled mainly in the cities of the East Coast, where they were employed as laborers and where they faced significant discrimination. German and Irish immigration continued into the late nineteenth and early twentieth centuries, at which point the numbers of Southern and Eastern European immigrants started growing as well. Italians, mainly from the southern part of the country, began arriving in large numbers in the 1890s. Eastern European immigrants—people from Russia, Poland, Bulgaria, and Austria-Hungary—started arriving around the same time. Many of these Eastern Europeans were peasants forced into a hardscrabble existence in their native lands; political unrest, land shortages, and crop failures drove them to seek better opportunities in the United States. The Eastern European immigration wave also included Jewish people escaping the pogroms (anti-Jewish massacres) of Eastern Europe and the Pale of Settlement in what was then Poland and Russia. History of Intergroup Relations In a broad sense, German immigrants were not victimized to the same degree as many of the other subordinate groups this section discusses. While they may not have been welcomed with open arms, they were able to settle in enclaves and establish roots. 
A notable exception to this was during the lead-up to World War I and through World War II, when anti-German sentiment was virulent. Irish immigrants, many of whom were very poor, were more of an underclass than the Germans. In Ireland, the English had oppressed the Irish for centuries, eradicating their language and culture and discriminating against their religion (Catholicism). Although the Irish had a larger population than the English, they were a subordinate group. This dynamic reached into the new world, where Anglo Americans saw Irish immigrants as a race apart: dirty, lacking ambition, and suitable for only the most menial jobs. In fact, Irish immigrants were subject to criticism identical to that with which the dominant group characterized African Americans. By necessity, Irish immigrants formed tight communities segregated from their Anglo neighbors. The later wave of immigrants from Southern and Eastern Europe was also subject to intense discrimination and prejudice. In particular, the dominant group—which now included second- and third-generation Germans and Irish—saw Italian immigrants as the dregs of Europe and worried about the purity of the American race (Myers 2007). Italian immigrants lived in segregated slums in Northeastern cities, and in some cases were even victims of violence and lynchings similar to what African Americans endured. They worked harder and were paid less than other workers, often doing the dangerous work that other laborers were reluctant to take on. Current Status The U.S. Census from 2008 shows that 16.5 percent of respondents reported being of German descent: the largest group in the country. For many years, German Americans endeavored to maintain a strong cultural identity, but they are now culturally assimilated into the dominant culture. There are now more Irish Americans in the United States than there are Irish in Ireland. 
One of the country's largest cultural groups, Irish Americans have slowly achieved acceptance and assimilation into the dominant group. Myers (2007) states that Italian Americans' cultural assimilation is "almost complete, but with remnants of ethnicity." "Little Italy" neighborhoods—originally segregated slums where Italians congregated in the nineteenth century—still exist today. While tourists flock to the saints' festivals in Little Italies, most Italian Americans have moved to the suburbs at the same rate as other white groups.

