Shared posts

19 Feb 15:15

The Nuclear Family Was a Mistake

by David Brooks
Liz

I think about this *A Lot.* Now that Dave and I live far away from our families, we need our friends more than ever for support, but if our careers necessitate moving again, we'd lose our support system at a critical time in our lives. I like the idea of cultivating a dedicated community of friends to act as a local family when you can't be near your own. Life is much harder as an isolated nuclear unit, and costs so much more.

The scene is one many of us have somewhere in our family history: Dozens of people celebrating Thanksgiving or some other holiday around a makeshift stretch of family tables—siblings, cousins, aunts, uncles, great-aunts. The grandparents are telling the old family stories for the 37th time. “It was the most beautiful place you’ve ever seen in your life,” says one, remembering his first day in America. “There were lights everywhere … It was a celebration of light! I thought they were for me.”

The oldsters start squabbling about whose memory is better. “It was cold that day,” one says about some faraway memory. “What are you talking about? It was May, late May,” says another. The young children sit wide-eyed, absorbing family lore and trying to piece together the plotline of the generations.

After the meal, there are piles of plates in the sink, squads of children conspiring mischievously in the basement. Groups of young parents huddle in a hallway, making plans. The old men nap on couches, waiting for dessert. It’s the extended family in all its tangled, loving, exhausting glory.

This particular family is the one depicted in Barry Levinson’s 1990 film, Avalon, based on his own childhood in Baltimore. Five brothers came to America from Eastern Europe around the time of World War I and built a wallpaper business. For a while they did everything together, like in the old country. But as the movie goes along, the extended family begins to split apart. Some members move to the suburbs for more privacy and space. One leaves for a job in a different state. The big blowup comes over something that seems trivial but isn’t: The eldest of the brothers arrives late to a Thanksgiving dinner to find that the family has begun the meal without him.

“You cut the turkey without me?” he cries. “Your own flesh and blood! … You cut the turkey?” The pace of life is speeding up. Convenience, privacy, and mobility are more important than family loyalty. “The idea that they would eat before the brother arrived was a sign of disrespect,” Levinson told me recently when I asked him about that scene. “That was the real crack in the family. When you violate the protocol, the whole family structure begins to collapse.”

As the years go by in the movie, the extended family plays a smaller and smaller role. By the 1960s, there’s no extended family at Thanksgiving. It’s just a young father and mother and their son and daughter, eating turkey off trays in front of the television. In the final scene, the main character is living alone in a nursing home, wondering what happened. “In the end, you spend everything you’ve ever saved, sell everything you’ve ever owned, just to exist in a place like this.”

“In my childhood,” Levinson told me, “you’d gather around the grandparents and they would tell the family stories … Now individuals sit around the TV, watching other families’ stories.” The main theme of Avalon, he said, is “the decentralization of the family. And that has continued even further today. Once, families at least gathered around the television. Now each person has their own screen.”

This is the story of our times—the story of the family, once a dense cluster of many siblings and extended kin, fragmenting into ever smaller and more fragile forms. The initial result of that fragmentation, the nuclear family, didn’t seem so bad. But then, because the nuclear family is so brittle, the fragmentation continued. In many sectors of society, nuclear families fragmented into single-parent families, single-parent families into chaotic families or no families.

If you want to summarize the changes in family structure over the past century, the truest thing to say is this: We’ve made life freer for individuals and more unstable for families. We’ve made life better for adults but worse for children. We’ve moved from big, interconnected, and extended families, which helped protect the most vulnerable people in society from the shocks of life, to smaller, detached nuclear families (a married couple and their children), which give the most privileged people in society room to maximize their talents and expand their options. The shift from bigger and interconnected extended families to smaller and detached nuclear families ultimately led to a familial system that liberates the rich and ravages the working class and the poor.

This article is about that process, and the devastation it has wrought—and about how Americans are now groping to build new kinds of family and find better ways to live.

Part I


The Era of Extended Clans

Through the early parts of American history, most people lived in what, by today’s standards, were big, sprawling households. In 1800, three-quarters of American workers were farmers. Most of the other quarter worked in small family businesses, like dry-goods stores. People needed a lot of labor to run these enterprises. It was not uncommon for married couples to have seven or eight children. In addition, there might be stray aunts, uncles, and cousins, as well as unrelated servants, apprentices, and farmhands. (On some southern farms, of course, enslaved African Americans were also an integral part of production and work life.)

Steven Ruggles, a professor of history and population studies at the University of Minnesota, calls these “corporate families”—social units organized around a family business. According to Ruggles, in 1800, 90 percent of American families were corporate families. Until 1850, roughly three-quarters of Americans older than 65 lived with their kids and grandkids. Nuclear families existed, but they were surrounded by extended or corporate families.

Extended families have two great strengths. The first is resilience. An extended family is one or more families in a supporting web. Your spouse and children come first, but there are also cousins, in-laws, grandparents—a complex web of relationships among, say, seven, 10, or 20 people. If a mother dies, siblings, uncles, aunts, and grandparents are there to step in. If a relationship between a father and a child ruptures, others can fill the breach. Extended families have more people to share the unexpected burdens—when a kid gets sick in the middle of the day or when an adult unexpectedly loses a job.

A detached nuclear family, by contrast, is an intense set of relationships among, say, four people. If one relationship breaks, there are no shock absorbers. In a nuclear family, the end of the marriage means the end of the family as it was previously understood.

The second great strength of extended families is their socializing force. Multiple adults teach children right from wrong, how to behave toward others, how to be kind. Over the course of the 18th and 19th centuries, industrialization and cultural change began to threaten traditional ways of life. Many people in Britain and the United States doubled down on the extended family in order to create a moral haven in a heartless world. According to Ruggles, the prevalence of extended families living together roughly doubled from 1750 to 1900, and this way of life was more common than at any time before or since.

During the Victorian era, the idea of “hearth and home” became a cultural ideal. The home “is a sacred place, a vestal temple, a temple of the hearth watched over by Household Gods, before whose faces none may come but those whom they can receive with love,” the great Victorian social critic John Ruskin wrote. This shift was led by the upper-middle class, which was coming to see the family less as an economic unit and more as an emotional and moral unit, a rectory for the formation of hearts and souls.

But while extended families have strengths, they can also be exhausting and stifling. They allow little privacy; you are forced to be in daily intimate contact with people you didn’t choose. There’s more stability but less mobility. Family bonds are thicker, but individual choice is diminished. You have less space to make your own way in life. In the Victorian era, families were patriarchal, favoring men in general and first-born sons in particular.

As factories opened in the big U.S. cities, in the late 19th and early 20th centuries, young men and women left their extended families to chase the American dream. These young people married as soon as they could. A young man on a farm might wait until 26 to get married; in the lonely city, men married at 22 or 23. From 1890 to 1960, the average age of first marriage dropped by 3.6 years for men and 2.2 years for women.

The families they started were nuclear families. The decline of multigenerational cohabiting families exactly mirrors the decline in farm employment. Children were no longer raised to assume economic roles—they were raised so that at adolescence they could fly from the nest, become independent, and seek partners of their own. They were raised not for embeddedness but for autonomy. By the 1920s, the nuclear family with a male breadwinner had replaced the corporate family as the dominant family form. By 1960, 77.5 percent of all children were living with their two parents, who were married, and apart from their extended family.


The Short, Happy Life of the Nuclear Family

For a time, it all seemed to work. From 1950 to 1965, divorce rates dropped, fertility rates rose, and the American nuclear family seemed to be in wonderful shape. And most people seemed prosperous and happy. In these years, a kind of cult formed around this type of family—what McCall’s, the leading women’s magazine of the day, called “togetherness.” Healthy people lived in two-parent families. In a 1957 survey, more than half of the respondents said that unmarried people were “sick,” “immoral,” or “neurotic.”

During this period, a certain family ideal became engraved in our minds: a married couple with 2.5 kids. When we think of the American family, many of us still revert to this ideal. When we have debates about how to strengthen the family, we are thinking of the two-parent nuclear family, with one or two kids, probably living in some detached family home on some suburban street. We take it as the norm, even though this wasn’t the way most humans lived during the tens of thousands of years before 1950, and it isn’t the way most humans have lived during the 55 years since 1965.

Today, only a minority of American households are traditional two-parent nuclear families and only one-third of American individuals live in this kind of family. That 1950–65 window was not normal. It was a freakish historical moment when all of society conspired, wittingly and not, to obscure the essential fragility of the nuclear family.

For one thing, most women were relegated to the home. Many corporations, well into the mid-20th century, barred married women from employment: Companies would hire single women, but if those women got married, they would have to quit. Demeaning and disempowering treatment of women was rampant. Women spent enormous numbers of hours trapped inside the home under the headship of their husband, raising children.

For another thing, nuclear families in this era were much more connected to other nuclear families than they are today—constituting a “modified extended family,” as the sociologist Eugene Litwak calls it, “a coalition of nuclear families in a state of mutual dependence.” Even as late as the 1950s, before television and air-conditioning had fully caught on, people continued to live on one another’s front porches and were part of one another’s lives. Friends felt free to discipline one another’s children.

In his book The Lost City, the journalist Alan Ehrenhalt describes life in mid-century Chicago and its suburbs:

To be a young homeowner in a suburb like Elmhurst in the 1950s was to participate in a communal enterprise that only the most determined loner could escape: barbecues, coffee klatches, volleyball games, baby-sitting co-ops and constant bartering of household goods, child rearing by the nearest parents who happened to be around, neighbors wandering through the door at any hour without knocking—all these were devices by which young adults who had been set down in a wilderness of tract homes made a community. It was a life lived in public.

Finally, conditions in the wider society were ideal for family stability. The postwar period was a high-water mark of church attendance, unionization, social trust, and mass prosperity—all things that correlate with family cohesion. A man could relatively easily find a job that would allow him to be the breadwinner for a single-income family. By 1961, the median American man age 25 to 29 was earning nearly 400 percent more than his father had earned at about the same age.

In short, the period from 1950 to 1965 demonstrated that a stable society can be built around nuclear families—so long as women are relegated to the household, nuclear families are so intertwined that they are basically extended families by another name, and every economic and sociological condition in society is working together to support the institution.


Disintegration

But these conditions did not last. The constellation of forces that had briefly shored up the nuclear family began to fall away, and the sheltered family of the 1950s was supplanted by the stressed family of every decade since. Some of the strains were economic. Starting in the mid-’70s, young men’s wages declined, putting pressure on working-class families in particular. The major strains were cultural. Society became more individualistic and more self-oriented. People put greater value on privacy and autonomy. A rising feminist movement helped endow women with greater freedom to live and work as they chose.

A study of women’s magazines by the sociologists Francesca Cancian and Steven L. Gordon found that from 1900 to 1979, themes of putting family before self dominated in the 1950s: “Love means self-sacrifice and compromise.” In the 1960s and ’70s, putting self before family was prominent: “Love means self-expression and individuality.” Men absorbed these cultural themes, too. The master trend in Baby Boomer culture generally was liberation—“Free Bird,” “Born to Run,” “Ramblin’ Man.”

Eli Finkel, a psychologist and marriage scholar at Northwestern University, has argued that since the 1960s, the dominant family culture has been the “self-expressive marriage.” “Americans,” he has written, “now look to marriage increasingly for self-discovery, self-esteem and personal growth.” Marriage, according to the sociologists Kathryn Edin and Maria Kefalas, “is no longer primarily about childbearing and childrearing. Now marriage is primarily about adult fulfillment.”

This cultural shift was very good for some adults, but it was not so good for families generally. Fewer relatives are around in times of stress to help a couple work through those strains. If you married for love, staying together made less sense when the love died. This attenuation of marital ties may have begun during the late 1800s: The number of divorces increased about fifteenfold from 1870 to 1920, and then climbed more or less continuously through the first several decades of the nuclear-family era. As the intellectual historian Christopher Lasch noted in the late 1970s, the American family didn’t start coming apart in the 1960s; it had been “coming apart for more than 100 years.”

Americans today have less family than ever before. From 1970 to 2012, the share of households consisting of married couples with kids has been cut in half. In 1960, according to census data, just 13 percent of all households were single-person households. In 2018, that figure was 28 percent. In 1850, 75 percent of Americans older than 65 lived with relatives; by 1990, only 18 percent did.

Over the past two generations, people have spent less and less time in marriage—they are marrying later, if at all, and divorcing more. In 1950, 27 percent of marriages ended in divorce; today, about 45 percent do. In 1960, 72 percent of American adults were married. In 2017, nearly half of American adults were single. According to a 2014 report from the Urban Institute, roughly 90 percent of Baby Boomer women and 80 percent of Gen X women married by age 40, while only about 70 percent of late-Millennial women were expected to do so—the lowest rate in U.S. history. And while more than four-fifths of American adults in a 2019 Pew Research Center survey said that getting married is not essential to living a fulfilling life, it’s not just the institution of marriage they’re eschewing: In 2004, 33 percent of Americans ages 18 to 34 were living without a romantic partner, according to the General Social Survey; by 2018, that number was up to 51 percent.

Over the past two generations, families have also gotten a lot smaller. The general American birth rate is half of what it was in 1960. In 2012, most American family households had no children. There are more American homes with pets than with kids. In 1970, about 20 percent of households had five or more people. As of 2012, only 9.6 percent did.

Over the past two generations, the physical space separating nuclear families has widened. Before, sisters-in-law shouted greetings across the street at each other from their porches. Kids would dash from home to home and eat out of whoever’s fridge was closest by. But lawns have grown more expansive and porch life has declined, creating a buffer of space that separates the house and family from anyone else. As Mandy Len Catron recently noted in The Atlantic, married people are less likely to visit parents and siblings, and less inclined to help them do chores or offer emotional support. A code of family self-sufficiency prevails: Mom, Dad, and the kids are on their own, with a barrier around their island home.

Finally, over the past two generations, families have grown more unequal. America now has two entirely different family regimes. Among the highly educated, family patterns are almost as stable as they were in the 1950s; among the less fortunate, family life is often utter chaos. There’s a reason for that divide: Affluent people have the resources to effectively buy extended family, in order to shore themselves up. Think of all the child-rearing labor affluent parents now buy that used to be done by extended kin: babysitting, professional child care, tutoring, coaching, therapy, expensive after-school programs. (For that matter, think of how the affluent can hire therapists and life coaches for themselves, as replacement for kin or close friends.) These expensive tools and services not only support children’s development and help prepare them to compete in the meritocracy; by reducing stress and time commitments for parents, they preserve the amity of marriage. Affluent conservatives often pat themselves on the back for having stable nuclear families. They preach that everybody else should build stable families too. But then they ignore one of the main reasons their own families are stable: They can afford to purchase the support that extended family used to provide—and that the people they preach at, further down the income scale, cannot.

In 1970, the family structures of the rich and poor did not differ that greatly. Now there is a chasm between them. As of 2005, 85 percent of children born to upper-middle-class families were living with both biological parents when the mom was 40. Among working-class families, only 30 percent were. According to a 2012 report from the National Center for Health Statistics, college-educated women ages 22 to 44 have a 78 percent chance of having their first marriage last at least 20 years. Women in the same age range with a high-school degree or less have only about a 40 percent chance. Among Americans ages 18 to 55, only 26 percent of the poor and 39 percent of the working class are currently married. In her book Generation Unbound, Isabel Sawhill, an economist at the Brookings Institution, cited research indicating that differences in family structure have “increased income inequality by 25 percent.” If the U.S. returned to the marriage rates of 1970, child poverty would be 20 percent lower. As Andrew Cherlin, a sociologist at Johns Hopkins University, once put it, “It is the privileged Americans who are marrying, and marrying helps them stay privileged.”

When you put everything together, we’re likely living through the most rapid change in family structure in human history. The causes are economic, cultural, and institutional all at once. People who grow up in a nuclear family tend to have a more individualistic mind-set than people who grow up in a multigenerational extended clan. People with an individualistic mind-set tend to be less willing to sacrifice self for the sake of the family, and the result is more family disruption. People who grow up in disrupted families have more trouble getting the education they need to have prosperous careers. People who don’t have prosperous careers have trouble building stable families, because of financial challenges and other stressors. The children in those families become more isolated and more traumatized.

Many people growing up in this era have no secure base from which to launch themselves and no well-defined pathway to adulthood. For those who have the human capital to explore, fall down, and have their fall cushioned, that means great freedom and opportunity—and for those who lack those resources, it tends to mean great confusion, drift, and pain.

Over the past 50 years, federal and state governments have tried to mitigate the deleterious effects of these trends. They’ve tried to increase marriage rates, push down divorce rates, boost fertility, and all the rest. The focus has always been on strengthening the nuclear family, not the extended family. Occasionally, a discrete program will yield some positive results, but the widening of family inequality continues unabated.

The people who suffer the most from the decline in family support are the vulnerable—especially children. In 1960, roughly 5 percent of children were born to unmarried women. Now about 40 percent are. The Pew Research Center reported that 11 percent of children lived apart from their father in 1960. In 2010, 27 percent did. Now about half of American children will spend their childhood with both biological parents. Twenty percent of young adults have no contact at all with their father (though in some cases that’s because the father is deceased). American children are more likely to live in a single-parent household than children from any other country.

We all know stable and loving single-parent families. But on average, children of single parents or unmarried cohabiting parents tend to have worse health outcomes, worse mental-health outcomes, less academic success, more behavioral problems, and higher truancy rates than do children living with their two married biological parents. According to work by Richard V. Reeves, a co-director of the Center on Children and Families at the Brookings Institution, if you are born into poverty and raised by your married parents, you have an 80 percent chance of climbing out of it. If you are born into poverty and raised by an unmarried mother, you have a 50 percent chance of remaining stuck.

It’s not just the lack of relationships that hurts children; it’s the churn. According to a 2003 study that Andrew Cherlin cites, 12 percent of American kids had lived in at least three “parental partnerships” before they turned 15. The transition moments, when mom’s old partner moves out or her new partner moves in, are the hardest on kids, Cherlin shows.

While children are the vulnerable group most obviously affected by recent changes in family structure, they are not the only one.

Consider single men. Extended families provided men with the fortifying influences of male bonding and female companionship. Today many American males spend the first 20 years of their life without a father and the next 15 without a spouse. Kay Hymowitz of the Manhattan Institute has spent a good chunk of her career examining the wreckage caused by the decline of the American family, and cites evidence showing that, in the absence of the connection and meaning that family provides, unmarried men are less healthy—alcohol and drug abuse are common—earn less, and die sooner than married men.

For women, the nuclear-family structure imposes different pressures. Though women have benefited greatly from the loosening of traditional family structures—they have more freedom to choose the lives they want—many mothers who decide to raise their young children without extended family nearby find that they have chosen a lifestyle that is brutally hard and isolating. The situation is exacerbated by the fact that women still spend significantly more time on housework and child care than men do, according to recent data. Thus, the reality we see around us: stressed, tired mothers trying to balance work and parenting, and having to reschedule work when family life gets messy.

Without extended families, older Americans have also suffered. According to the AARP, 35 percent of Americans over 45 say they are chronically lonely. Many older people are now “elder orphans,” with no close relatives or friends to take care of them. In 2015, The New York Times ran an article called “The Lonely Death of George Bell,” about a family-less 72-year-old man who died alone and rotted in his Queens apartment for so long that by the time police found him, his body was unrecognizable.

Finally, because groups that have endured greater levels of discrimination tend to have more fragile families, African Americans have suffered disproportionately in the era of the detached nuclear family. Nearly half of black families are led by a single woman, compared with less than one-sixth of white families. (The high rate of black incarceration guarantees a shortage of available men to be husbands or caretakers of children.) According to census data from 2010, 25 percent of black women over 35 have never been married, compared with 8 percent of white women. Two-thirds of African American children lived in single-parent families in 2018, compared with a quarter of white children. Black single-parent families are most concentrated in precisely those parts of the country in which slavery was most prevalent. Research by John Iceland, a professor of sociology and demography at Penn State, suggests that the differences between white and black family structure explain 30 percent of the affluence gap between the two groups.

In 2004, the journalist and urbanist Jane Jacobs published her final book, an assessment of North American society called Dark Age Ahead. At the core of her argument was the idea that families are “rigged to fail.” The structures that once supported the family no longer exist, she wrote. Jacobs was too pessimistic about many things, but for millions of people, the shift from big and/or extended families to detached nuclear families has indeed been a disaster.

As the social structures that support the family have decayed, the debate about it has taken on a mythical quality. Social conservatives insist that we can bring the nuclear family back. But the conditions that made for stable nuclear families in the 1950s are never returning. Conservatives have nothing to say to the kid whose dad has split, whose mom has had three other kids with different dads; “go live in a nuclear family” is really not relevant advice. If only a minority of households are traditional nuclear families, that means the majority are something else: single parents, never-married parents, blended families, grandparent-headed families, serial partnerships, and so on. Conservative ideas have not caught up with this reality.

Progressives, meanwhile, still talk like self-expressive individualists of the 1970s: People should have the freedom to pick whatever family form works for them. And, of course, they should. But many of the new family forms do not work well for most people—and while progressive elites say that all family structures are fine, their own behavior suggests that they believe otherwise. As the sociologist W. Bradford Wilcox has pointed out, highly educated progressives may talk a tolerant game on family structure when speaking about society at large, but they have extremely strict expectations for their own families. When Wilcox asked his University of Virginia students if they thought having a child out of wedlock was wrong, 62 percent said it was not wrong. When he asked the students how their own parents would feel if they themselves had a child out of wedlock, 97 percent said their parents would “freak out.” In a recent survey by the Institute for Family Studies, college-educated Californians ages 18 to 50 were less likely than those who hadn’t graduated from college to say that having a baby out of wedlock is wrong. But they were more likely to say that personally they did not approve of having a baby out of wedlock.

In other words, while social conservatives have a philosophy of family life they can’t operationalize, because it no longer is relevant, progressives have no philosophy of family life at all, because they don’t want to seem judgmental. The sexual revolution has come and gone, and it’s left us with no governing norms of family life, no guiding values, no articulated ideals. On this most central issue, our shared culture often has nothing relevant to say—and so for decades things have been falling apart.

The good news is that human beings adapt, even if politics are slow to do so. When one family form stops working, people cast about for something new—sometimes finding it in something very old.

Part II


Redefining Kinship

In the beginning was the band. For tens of thousands of years, people commonly lived in small bands of, say, 25 people, which linked up with perhaps 20 other bands to form a tribe. People in the band went out foraging for food and brought it back to share. They hunted together, fought wars together, made clothing for one another, looked after one another’s kids. In every realm of life, they relied on their extended family and wider kin.

Except they didn’t define kin the way we do today. We think of kin as those biologically related to us. But throughout most of human history, kinship was something you could create.

Anthropologists have been arguing for decades about what exactly kinship is. Studying traditional societies, they have found wide varieties of created kinship among different cultures. For the Ilongot people of the Philippines, people who migrated somewhere together are kin. For the New Guineans of the Nebilyer Valley, kinship is created by sharing grease—the life force found in mother’s milk or sweet potatoes. The Chuukese people in Micronesia have a saying: “My sibling from the same canoe”; if two people survive a dangerous trial at sea, then they become kin. On the Alaskan North Slope, the Inupiat name their children after dead people, and those children are considered members of their namesake’s family.

In other words, for vast stretches of human history people lived in extended families consisting of not just people they were related to but people they chose to cooperate with. An international research team recently did a genetic analysis of people who were buried together—and therefore presumably lived together—34,000 years ago in what is now Russia. They found that the people who were buried together were not closely related to one another. In a study of 32 present-day foraging societies, primary kin—parents, siblings, and children—usually made up less than 10 percent of a residential band. Extended families in traditional societies may or may not have been genetically close, but they were probably emotionally closer than most of us can imagine. In a beautiful essay on kinship, Marshall Sahlins, an anthropologist at the University of Chicago, says that kin in many such societies share a “mutuality of being.” The late religion scholar J. Prytz-Johansen wrote that kinship is experienced as an “inner solidarity” of souls. The late South African anthropologist Monica Wilson described kinsmen as “mystically dependent” on one another. Kinsmen belong to one another, Sahlins writes, because they see themselves as “members of one another.”

Back in the 17th and 18th centuries, when European Protestants came to North America, their relatively individualistic culture existed alongside Native Americans’ very communal culture. In his book Tribe, Sebastian Junger describes what happened next: While European settlers kept defecting to go live with Native American families, almost no Native Americans ever defected to go live with European families. Europeans occasionally captured Native Americans and forced them to come live with them. They taught them English and educated them in Western ways. But almost every time they were able, the indigenous Americans fled. European settlers were sometimes captured by Native Americans during wars and brought to live in Native communities. They rarely tried to run away. This bothered the Europeans. They had the superior civilization, so why were people voting with their feet to go live in another way?

When you read such accounts, you can’t help but wonder whether our civilization has somehow made a gigantic mistake.

We can’t go back, of course. Western individualists are no longer the kind of people who live in prehistoric bands. We may even no longer be the kind of people who were featured in the early scenes of Avalon. We value privacy and individual freedom too much.

Our culture is oddly stuck. We want stability and rootedness, but also mobility, dynamic capitalism, and the liberty to adopt the lifestyle we choose. We want close families, but not the legal, cultural, and sociological constraints that made them possible. We’ve seen the wreckage left behind by the collapse of the detached nuclear family. We’ve seen the rise of opioid addiction, of suicide, of depression, of inequality—all products, in part, of a family structure that is too fragile, and a society that is too detached, disconnected, and distrustful. And yet we can’t quite return to a more collective world. The words the historians Steven Mintz and Susan Kellogg wrote in 1988 are even truer today: “Many Americans are groping for a new paradigm of American family life, but in the meantime a profound sense of confusion and ambivalence reigns.”


From Nuclear Families to Forged Families

Yet recent signs suggest at least the possibility that a new family paradigm is emerging. Many of the statistics I’ve cited are dire. But they describe the past—what got us to where we are now. In reaction to family chaos, accumulating evidence suggests, the prioritization of family is beginning to make a comeback. Americans are experimenting with new forms of kinship and extended family in search of stability.

Usually behavior changes before we realize that a new cultural paradigm has emerged. Imagine hundreds of millions of tiny arrows. In times of social transformation, they shift direction—a few at first, and then a lot. Nobody notices for a while, but then eventually people begin to recognize that a new pattern, and a new set of values, has emerged.

That may be happening now—in part out of necessity but in part by choice. Since the 1970s, and especially since the 2008 recession, economic pressures have pushed Americans toward greater reliance on family. Starting around 2012, the share of children living with married parents began to inch up. And college students have more contact with their parents than they did a generation ago. We tend to deride this as helicopter parenting or a failure to launch, and it has its excesses. But the educational process is longer and more expensive these days, so it makes sense that young adults rely on their parents for longer than they used to.

In 1980, only 12 percent of Americans lived in multigenerational households. But the financial crisis of 2008 prompted a sharp rise in multigenerational homes. Today 20 percent of Americans—64 million people, an all-time high—live in multigenerational homes.

The revival of the extended family has largely been driven by young adults moving back home. In 2014, 35 percent of American men ages 18 to 34 lived with their parents. In time this shift might show itself to be mostly healthy, impelled not just by economic necessity but by beneficent social impulses; polling data suggest that many young people are already looking ahead to helping their parents in old age.

Another chunk of the revival is attributable to seniors moving in with their children. The percentage of seniors who live alone peaked around 1990. Now more than a fifth of Americans 65 and over live in multigenerational homes. This doesn’t count the large share of seniors who are moving to be close to their grandkids but not into the same household.

Immigrants and people of color—many of whom face greater economic and social stress—are more likely to live in extended-family households. More than 20 percent of Asians, black people, and Latinos live in multigenerational households, compared with 16 percent of white people. As America becomes more diverse, extended families are becoming more common.

African Americans have always relied on extended family more than white Americans do. “Despite the forces working to separate us—slavery, Jim Crow, forced migration, the prison system, gentrification—we have maintained an incredible commitment to each other,” Mia Birdsong, the author of the forthcoming book How We Show Up, told me recently. “The reality is, black families are expansive, fluid, and brilliantly rely on the support, knowledge, and capacity of ‘the village’ to take care of each other. Here’s an illustration: The white researcher/social worker/whatever sees a child moving between their mother’s house, their grandparents’ house, and their uncle’s house and sees that as ‘instability.’ But what’s actually happening is the family (extended and chosen) is leveraging all of its resources to raise that child.”

The black extended family survived even under slavery, and all the forced family separations that involved. Family was essential in the Jim Crow South and in the inner cities of the North, as a way to cope with the stresses of mass migration and limited opportunities, and with structural racism. But government policy sometimes made it more difficult for this family form to thrive. I began my career as a police reporter in Chicago, writing about public-housing projects like Cabrini-Green. Guided by social-science research, politicians tore down neighborhoods of rickety low-rise buildings—uprooting the complex webs of social connection those buildings supported, despite high rates of violence and crime—and put up big apartment buildings. The result was a horror: violent crime, gangs taking over the elevators, the erosion of family and neighborly life. Fortunately, those buildings have since been torn down themselves, replaced by mixed-income communities that are more amenable to the profusion of family forms.

The return of multigenerational living arrangements is already changing the built landscape. A 2016 survey by a real-estate consulting firm found that 44 percent of home buyers were looking for a home that would accommodate their elderly parents, and 42 percent wanted one that would accommodate their returning adult children. Home builders have responded by putting up houses that are what the construction firm Lennar calls “two homes under one roof.” These houses are carefully built so that family members can spend time together while also preserving their privacy. Many of these homes have a shared mudroom, laundry room, and common area. But the “in-law suite,” the place for aging parents, has its own entrance, kitchenette, and dining area. The “Millennial suite,” the place for boomeranging adult children, has its own driveway and entrance too. These developments, of course, cater to those who can afford houses in the first place—but they speak to a common realization: Family members of different generations need to do more to support one another.

The most interesting extended families are those that stretch across kinship lines. The past several years have seen the rise of new living arrangements that bring nonbiological kin into family or familylike relationships. On the website CoAbode, single mothers can find other single mothers interested in sharing a home. All across the country, you can find co-housing projects, in which groups of adults live as members of an extended family, with separate sleeping quarters and shared communal areas. Common, a real-estate-development company that launched in 2015, operates more than 25 co-housing communities, in six cities, where young singles can live this way. Common also recently teamed up with another developer, Tishman Speyer, to launch Kin, a co-housing community for young parents. Each young family has its own living quarters, but the facilities also have shared play spaces, child-care services, and family-oriented events and outings.

These experiments, and others like them, suggest that while people still want flexibility and some privacy, they are casting about for more communal ways of living, guided by a still-developing set of values. At a co-housing community in Oakland, California, called Temescal Commons, the 23 members, ranging in age from 1 to 83, live in a complex with nine housing units. This is not some rich Bay Area hipster commune. The apartments are small, and the residents are middle- and working-class. They have a shared courtyard and a shared industrial-size kitchen where residents prepare a communal dinner on Thursday and Sunday nights. Upkeep is a shared responsibility. The adults babysit one another’s children, and members borrow sugar and milk from one another. The older parents counsel the younger ones. When members of this extended family have suffered bouts of unemployment or major health crises, the whole clan has rallied together.

Courtney E. Martin, a writer who focuses on how people are redefining the American dream, is a Temescal Commons resident. “I really love that our kids grow up with different versions of adulthood all around, especially different versions of masculinity,” she told me. “We consider all of our kids all of our kids.” Martin has a 3-year-old daughter, Stella, who has a special bond with a young man in his 20s that never would have taken root outside this extended-family structure. “Stella makes him laugh, and David feels awesome that this 3-year-old adores him,” Martin said. This is the kind of magic, she concluded, that wealth can’t buy. You can only have it through time and commitment, by joining an extended family. This kind of community would fall apart if residents moved in and out. But at least in this case, they don’t.

As Martin was talking, I was struck by one crucial difference between the old extended families like those in Avalon and the new ones of today: the role of women. The extended family in Avalon thrived because all the women in the family were locked in the kitchen, feeding 25 people at a time. In 2008, a team of American and Japanese researchers found that women in multigenerational households in Japan were at greater risk of heart disease than women living with spouses only, likely because of stress. But today’s extended-family living arrangements have much more diverse gender roles.

And yet in at least one respect, the new families Americans are forming would look familiar to our hunter-gatherer ancestors from eons ago. That’s because they are chosen families—they transcend traditional kinship lines.

The modern chosen-family movement came to prominence in San Francisco in the 1980s among gay men and lesbians, many of whom had become estranged from their biological families and had only one another for support in coping with the trauma of the AIDS crisis. In her book, Families We Choose: Lesbians, Gays, Kinship, the anthropologist Kath Weston writes, “The families I saw gay men and lesbians creating in the Bay Area tended to have extremely fluid boundaries, not unlike kinship organization among sectors of the African-American, American Indian, and white working class.”

She continues:

Like their heterosexual counterparts, most gay men and lesbians insisted that family members are people who are “there for you,” people you can count on emotionally and materially. “They take care of me,” said one man, “I take care of them.”

These groups are what Daniel Burns, a political scientist at the University of Dallas, calls “forged families.” Tragedy and suffering have pushed people together in a way that goes deeper than just a convenient living arrangement. They become, as the anthropologists say, “fictive kin.”

Over the past several decades, the decline of the nuclear family has created an epidemic of trauma—millions have been set adrift because what should have been the most loving and secure relationship in their life broke. Slowly, but with increasing frequency, these drifting individuals are coming together to create forged families. These forged families have a feeling of determined commitment. The members of your chosen family are the people who will show up for you no matter what. On Pinterest you can find placards to hang on the kitchen wall where forged families gather: “Family isn’t always blood. It’s the people in your life who want you in theirs; the ones who accept you for who you are. The ones who would do anything to see you smile & who love you no matter what.”

Two years ago, I started something called Weave: The Social Fabric Project. Weave exists to support and draw attention to people and organizations around the country who are building community. Over time, my colleagues and I have realized that one thing most of the Weavers have in common is this: They provide the kind of care to nonkin that many of us provide only to kin—the kind of support that used to be provided by the extended family.

Lisa Fitzpatrick, who was a health-care executive in New Orleans, is a Weaver. One day she was sitting in the passenger seat of a car when she noticed two young boys, 10 or 11, lifting something heavy. It was a gun. They used it to shoot her in the face. It was a gang-initiation ritual. When she recovered, she realized that she was just collateral damage. The real victims were the young boys who had to shoot somebody to get into a family, their gang.

She quit her job and began working with gang members. She opened her home to young kids who might otherwise join gangs. One Saturday afternoon, 35 kids were hanging around her house. She asked them why they were spending a lovely day at the home of a middle-aged woman. They replied, “You were the first person who ever opened the door.”

In Salt Lake City, an organization called the Other Side Academy provides serious felons with an extended family. Many of the men and women who are admitted into the program have been allowed to leave prison, where they were generally serving long sentences, but must live in a group home and work at shared businesses, a moving company and a thrift store. The goal is to transform the character of each family member. During the day they work as movers or cashiers. Then they dine together and gather several evenings a week for something called “Games”: They call one another out for any small moral failure—being sloppy with a move; not treating another family member with respect; being passive-aggressive, selfish, or avoidant.

Games is not polite. The residents scream at one another in order to break through the layers of armor that have built up in prison. Imagine two gigantic men covered in tattoos screaming “Fuck you! Fuck you! Fuck you!” At the session I attended, I thought they would come to blows. But after the anger, there’s a kind of closeness that didn’t exist before. Men and women who have never had a loving family suddenly have “relatives” who hold them accountable and demand a standard of moral excellence. Extreme integrity becomes a way of belonging to the clan. The Other Side Academy provides unwanted people with an opportunity to give care, and creates out of that care a ferocious forged family.

I could tell you hundreds of stories like this, about organizations that bring traumatized vets into extended-family settings, or nursing homes that house preschools so that senior citizens and young children can go through life together. In Baltimore, a nonprofit called Thread surrounds underperforming students with volunteers, some of whom are called “grandparents.” In Chicago, Becoming a Man helps disadvantaged youth form family-type bonds with one another. In Washington, D.C., I recently met a group of middle-aged female scientists—one a celebrated cellular biologist at the National Institutes of Health, another an astrophysicist—who live together in a Catholic lay community, pooling their resources and sharing their lives. The variety of forged families in America today is endless.

You may be part of a forged family yourself. I am. In 2015, I was invited to the house of a couple named Kathy and David, who had created an extended-family-like group in D.C. called All Our Kids, or AOK-DC. Some years earlier, Kathy and David had had a kid in D.C. Public Schools who had a friend named James, who often had nothing to eat and no place to stay, so they suggested that he stay with them. That kid had a friend in similar circumstances, and those friends had friends. By the time I joined them, roughly 25 kids were having dinner every Thursday night, and several of them were sleeping in the basement.

I joined the community and never left—they became my chosen family. We have dinner together on Thursday nights, celebrate holidays together, and vacation together. The kids call Kathy and David Mom and Dad. In the early days, the adults in our clan served as parental figures for the young people—replacing their broken cellphones, supporting them when depression struck, raising money for their college tuition. When a young woman in our group needed a new kidney, David gave her one of his.

We had our primary biological families, which came first, but we also had this family. Now the young people in this forged family are in their 20s and need us less. David and Kathy have left Washington, but they stay in constant contact. The dinners still happen. We still see one another and look after one another. The years of eating together and going through life together have created a bond. If a crisis hit anyone, we’d all show up. The experience has convinced me that everybody should have membership in a forged family with people completely unlike themselves.

Ever since I started working on this article, a chart has been haunting me. It plots the percentage of people living alone in a country against that nation’s GDP. There’s a strong correlation. Nations where a fifth of the people live alone, like Denmark and Finland, are a lot richer than nations where almost no one lives alone, like the ones in Latin America or Africa. Rich nations have smaller households than poor nations. The average German lives in a household with 2.7 people. The average Gambian lives in a household with 13.8 people.

That chart suggests two things, especially in the American context. First, the market wants us to live alone or with just a few people. That way we are mobile, unattached, and uncommitted, able to devote an enormous number of hours to our jobs. Second, when people who are raised in developed countries get money, they buy privacy.

For the privileged, this sort of works. The arrangement enables the affluent to dedicate more hours to work and email, unencumbered by family commitments. They can afford to hire people who will do the work that extended family used to do. But a lingering sadness lurks, an awareness that life is emotionally vacant when family and close friends aren’t physically present, when neighbors aren’t geographically or metaphorically close enough for you to lean on them, or for them to lean on you. Today’s crisis of connection flows from the impoverishment of family life.

I often ask African friends who have immigrated to America what most struck them when they arrived. Their answer is always a variation on a theme—the loneliness. It’s the empty suburban street in the middle of the day, maybe with a lone mother pushing a baby carriage on the sidewalk but nobody else around.

For those who are not privileged, the era of the isolated nuclear family has been a catastrophe. It’s led to broken families or no families; to merry-go-round families that leave children traumatized and isolated; to senior citizens dying alone in a room. All forms of inequality are cruel, but family inequality may be the cruelest. It damages the heart. Eventually family inequality even undermines the economy the nuclear family was meant to serve: Children who grow up in chaos have trouble becoming skilled, stable, and socially mobile employees later on.

When hyper-individualism kicked into gear in the 1960s, people experimented with new ways of living that embraced individualistic values. Today we are crawling out from the wreckage of that hyper-individualism—which left many families detached and unsupported—and people are experimenting with more connected ways of living, with new shapes and varieties of extended families. Government support can help nurture this experimentation, particularly for the working class and the poor, with things like child tax credits, coaching programs to improve parenting skills in struggling families, subsidized early education, and expanded parental leave. While the most important shifts will be cultural, and driven by individual choices, family life is under so much social stress and economic pressure in the poorer reaches of American society that no recovery is likely without some government action.

The two-parent family, meanwhile, is not about to go extinct. For many people, especially those with financial and social resources, it is a great way to live and raise children. But a new and more communal ethos is emerging, one that is consistent with 21st-century reality and 21st-century values.

When we discuss the problems confronting the country, we don’t talk about family enough. It feels too judgmental. Too uncomfortable. Maybe even too religious. But the blunt fact is that the nuclear family has been crumbling in slow motion for decades, and many of our other problems—with education, mental health, addiction, the quality of the labor force—stem from that crumbling. We’ve left behind the nuclear-family paradigm of 1955. For most people it’s not coming back. Americans are hungering to live in extended and forged families, in ways that are new and ancient at the same time. This is a significant opportunity, a chance to thicken and broaden family relationships, a chance to allow more adults and children to live and grow under the loving gaze of a dozen pairs of eyes, and be caught, when they fall, by a dozen pairs of arms. For decades we have been eating at smaller and smaller tables, with fewer and fewer kin.

It’s time to find ways to bring back the big tables.


This article appears in the March 2020 print edition with the headline “The Nuclear Family Was a Mistake.”

11 Feb 17:26

Christina Koch Lands on Earth, and Crosses a Threshold for Women in Space

by Mary Robinette Kowal
Liz

Space content, science lady content.

The astronaut completed three all-female spacewalks and set a record for time in space, but you should remember her for much more.
06 Feb 18:41

Disq – “Loneliness”

by Stereogum
Liz

Repping our local kiddos, Disq, who I discovered by wandering into a park near a friend's apartment barefoot holding a flute of champagne that we had just poured and refused to abandon. The concert was in a band shelter attended mostly by high schoolers. We were the weird old barefoot people. 10/10

Disq, the promising young Band To Watch centered around the musical chemistry between Wisconsin natives and lifelong friends Isaac deBroux-Slone and Raina Bock, impressed us at our SXSW party last year. And now, they're finally gearing up to release their debut album for Saddle Creek, Collector, a set of songs … More »
05 Feb 19:36

The Loneliness of Early Parenthood

by Kawther Alfasi
Liz

Reader friends with behbehs or experience with new parents, let's talk about this! I have some friends with new children and I love seeing them & seeing the kiddos, but I think there's this overarching feeling of "we're troubling you" coming from the parents sometimes. No, I'm relatively stress-free compared to you, I get that I can't do a lot of things for your kid, but I will hold them and make funny faces at them while you do laundry and chat with me. It doesn't need to be a special occasion! I guess I should just say that more often! Is that a good approach?

A prospective parent is a magnet for unsolicited advice. During my pregnancy last year, I found myself trying to parse the accurate wisdom from the overblown. One claim seemed especially questionable: My social life would disintegrate, according to my sisters-in-law, co-workers, and everyone else; indeed my very attitude to friendship would change. Any new acquaintances I might make would be dictated by my child’s age, pastimes, and social circle, and my old friends would be alienated by my life’s new focus. After all, who wants to listen to a parent drone on about their offspring’s unrecognized genius?

Much as I intended to defy these assumptions, the social foundations of my life were, as predicted, upended following the birth of my daughter. Every invite I received was now subject to scrutiny and risk assessment. A wedding was doable: Strapped onto my front with a soft cloth sling, my baby was transformed into a delightfully snoozing, unobtrusive mass, and the merriment continued without disruption. A book-club gathering was less successful; I had to bow out midway through when our discussion was disrupted by loud shrieks. And evenings out have been superseded by my daughter’s elaborate bedtime routine. It seems inevitable that new parenthood will continue to affect our former social lives—often negatively.

Research suggests that, just as everyone warned me, new parents commonly experience estrangement from their friends. The charity Action for Children, as part of broader research into loneliness, surveyed 2,000 parents. It found that the majority (68 percent) felt “cut off” from friends, colleagues, and family after the birth of a child. Common reasons for this feeling of isolation included lack of money and the inability to leave the house when caring for small children.

In another study, researchers from the Netherlands found that “the strength of friendships typically decreases after people become parents.” This period of weaker attachment is attributed to exhaustion and tight budgets when children are younger; it bottoms out when children are at the age of 3 and require sustained supervision. Women tend to regain contact with their friends after the child turns 5, whereas men are more likely to remain distant from their former friendships, even after the child turns 19. This is in line with the fact that adult men tend to have fewer close friends than adult women in general, and with research showing that male friendlessness trebles in the period between early adulthood and late middle age.

[Read: How friendships change in adulthood]

Although becoming a parent can be a lonely experience for both mothers and fathers, research suggests that new motherhood can be particularly isolating. In a survey of 2,025 mothers, 54 percent admitted to feeling “friendless” after giving birth, while another survey emphasized that this was a problem for young mothers in particular. Julie Barnett, a professor of health psychology at the University of Bath, co-authored a study of first-time mothers’ experience of loneliness in the U.K. The mothers were interviewed when their babies were four to nine months old; all were on maternity leave during that time while their partners were back at work after having taken a short stint of paternity leave. The mothers’ social isolation was partly due to them being the primary caregivers. “There were fewer opportunities for social interaction,” Barnett told me. “If women are coming from full-time work that suddenly is not there anymore … other people are still going to work but you’re at home with the baby. That sometimes led to a perception that the friends had gone.”

The women in the study also discussed their experiences with breastfeeding, which Barnett said “had a role in accentuating the loneliness of mothers in several different ways.” At times, it constrained their physical ability to interact with others, and isolated them from their partners, who could neither replace them nor relate to their struggles. Meanwhile, the mothers tended to make unfavorable self-comparisons with an ingrained image of “effortless” motherhood. The feeling that they were not coping as well as they should be—physically and emotionally—made relating to other mothers difficult for them. Nevertheless, Barnett notes that the social void in the lives of new mothers was a “transient loneliness”: By and large, things improved within the first year.

[Read: How friends become closer]

The nature of new parenthood can lead to loneliness, but the weakening of new parents’ social circles is also a result of the nature of friendship. “Across adulthood, one of the most important determinants of friendship is how our lives are organized,” says William Rawlins, a communications professor at Ohio University. When your life undergoes a major change, such as the arrival of a new baby, the structure of your friendships can’t help but change, too. “Friendship is always a matter of choice—we choose to spend time together. The role crunch that happens in young adulthood when you’ve become committed to a partner, [or] you have children, perhaps both of you have full-time jobs—all of these things leave very little time and freedom for friendship.”

For new parents, then, the key issue is the extent to which their old friendships can both accommodate, and be accommodated within, their newly organized lives. “With friends who don’t have children, it can be a bit of a litmus test. Are they able to accept and understand that, in some ways, a child changes the center of gravity of our entire lives?” Rawlins asks. Viewed in this way, change may be inevitable, but the loss of our friends may not be, if we and they are both willing to adapt.

This makes sense in theory, but in practice, it can be tricky to recalibrate one’s expectations of friendship after becoming a parent. On whom does the onus of compromise rest? I came across this tension recently on MumsNet, the U.K.’s largest parenting website and discussion board. A mother with a six-week-old breastfed baby was disappointed that her baby wasn’t wanted at her friend’s birthday lunch. She wrote that she was asked to either attend on her own, or not go at all. In the ensuing melee of responses, both parties were described as selfish: One for wanting her newborn to gate-crash an adult occasion, and the other for wanting to wrench such a vulnerable creature from its mother.

This particular scenario—in which the child is so young and the occasion is a social one—does seem to call for the child-free friend to be understanding, in Rawlins’s opinion. An all-or-nothing mind-set can lead to the erosion of a friendship. But he also sees how a request to leave kids at home could actually be a (potentially misguided) sign of investment in the friendship. “There’s a bit of a compliment to someone saying, ‘When I’m with you, I want to experience just you—I don’t want to dilute it, I don’t want you distracted.’”

[Read: Why friendship is like art]

Perhaps it is because I used to relish such uninterrupted time with friends that I now find our meetups frustrating. At one recent lunch with a friend, I found myself endeavoring to sympathize with her family struggles while simultaneously thwarting my daughter’s attempts to escape from her high chair. My perennially divided attention is, for the most part, a reality that I both bewail and accept.

One way of potentially preventing these feelings of social estrangement after the birth of a child is to construct friendships with other parents who are going through similar experiences. After all, “similarity breeds friendship by forming a basis for conversation and joint activities,” argue the Dutch researchers in their study. The new mothers in Barnett’s study reported that some of their most understanding relationships were with other mothers of young babies. And there is, I have found, something immensely comforting about being part of a friendship group with other new parents, and experiencing their unflagging sympathy.

That doesn’t mean abandoning relationships with childless friends. Friendships that speak to our differences, Rawlins says, have value, too. “People who have known us before and after children can kind of curate the person we’re becoming,” he says. Drawing on their knowledge of our pre-parent selves, they can encourage us to keep pursuing our past hobbies or ambitions. In doing so, “they keep us from getting complacent.” Retaining such friendships might be more difficult than defaulting to socializing with other parents, but it’s worth striving for all the same.

As my daughter turns 1 year old, my old feelings of frustration have ebbed with time. Things are getting easier: Breastfeeding is no longer all-consuming; the clouds of early sleep deprivation have cleared, making socializing enjoyable; and my daughter now meets her babysitter with laughter instead of hysteria. I’ve been able to savor the rare dinner with friends where we talk about anything and everything that isn’t child related. But I’ve also reconciled myself to the fact that the structure of my life has indelibly changed—I can momentarily step outside of my parental identity, but I can never entirely cast it off. I have to work harder than I did pre-kids to make my old friendships work. For now, my benchmark for social fulfillment isn’t a state of pre-child “normalcy,” but a constant negotiation: I do my best to make room for the friendships that matter to me while accepting that I—at least occasionally—might have to comply with my child’s dubious taste in playmates.

27 Jan 23:49

Guys, Please Stop Dipping Your Ballsacks In Soy Sauce

by Jelisa Castrodale
Liz

The internet is an amazing place.

Oh Regan, what have you done?

Earlier this week, a Wisconsin teen posted a TikTok, insisting that men can taste food with their testicles, staring intensely into the camera as she cited a study that had been covered by The Daily Mail, of all places. "Did y'all know that if a dude puts his balls in something, he can taste it? He can taste it?" she said. "If you have testicles, please dip your balls in something. It's for science, and I must know."

She followed that up with a second clip, reminding everyone that she is a minor and to please stop sending her videos of "your hairy grown man balls"—but she also specified that any potential sack-dips should involve "soy sauce or something sweet like sugar water," because apparently testicles are basically hummingbirds with skin? We have no idea, but we also have no idea why so many men are doing this, and why anyone, male or female, would accept second-hand anatomical knowledge from a random internet teenager, or from a British tabloid, in that order.

Regardless, men are doing this. They are pouring soy sauce into open containers that they can lower their testicles into, and posting the results online. "I just went and got some soy sauce and we're going to do this little science experiment together," Alx James said. "I'ma let my little boys try it out."

James, who opted to wipe the sauce on his junk with his finger, immediately reacted, claiming that he could "taste the salt." But according to an actual doctor, that's probably less about what he'd rubbed on his genitalia and more about opening a plastic cup of soy sauce inside a closed car.

But let's go back to that Daily Mail article that Regan referenced, which was an almost seven-year-old summary of a 2013 research study called "Genetic loss or pharmacological blockade of testes-expressed taste genes causes male sterility." In that paper, which the Daily Mail got slightly wrong, scientists from the Monell Chemical Senses Center in Philadelphia discovered that two proteins involved in taste detection were also expressed inside the testes of mice.

"This paper highlights a connection between the taste system and male reproduction," lead author Dr. Bedrich Mosinger said at the time. "It is one more demonstration that components of the taste system also play important roles in other organ systems."

But it is very, very important to note that Mosinger and his team discovered two proteins that are a component of taste receptors, and they were found inside the testes. They did not find a set of taste buds embedded in the skin of a mouse's miniature ballsack.

"The study (or any that have followed) hasn’t shown that any animal can actually ’taste’ via these receptors like we’d taste something from the mouth. There is no scientific or medical evidence to back up any claims that men can actually taste things through their junk," Dr. Kieran Kennedy told Australian Men's Health. "So in theory, even if we could detect some form of flavor from the testicles (balls), the soy sauce would have to diffuse through the scrotum (sac) and into the testicles, which is largely not possible."

Kennedy also suggested that anyone who believed that they were "tasting" the soy sauce through their scrotum was probably just anticipating the salty flavor of the sauce, or they were having an involuntary reaction to its smell. (Sorry, Alx. You've become the face of soy-basted ballsacks for nothing.)

Popular Science further debunks any "But I can taste it!" claims by providing a refresher course on how the complicated sense of taste works. "Taste buds are little pores containing many, many sensory cells with little hairlike projections on them that increase their surface area," the website explains. "Those 'hairs' have many thousands of receptors embedded in their cellular membranes, and it's these receptors that enable you to perceive taste."

The taste receptors then relay that information to the brain, which determines which flavor, or combination of flavors, is being tasted at that moment. (This is the Cliff's Notes explanation, but let's be honest: if you're currently soaking your scrotum in a liquid condiment, we probably don't need to get more advanced than that.)

So, although taste receptors have been discovered elsewhere in the human body, including inside the intestines, in the bladder, and, yes, in the testes, they aren't connected to the brain in the way that the receptors inside the tastebuds are, and they have nothing to do with the sense of taste. Still not convinced? Even Dr. Oz doesn't believe this shit. "What comes out of your testicles may have taste but your testicles themselves don't have tastebuds," he told TMZ.

Now please pour the rest of that soy sauce into the trash. Nobody wants to use it now.

27 Jan 20:14

If You Quit Meat, What Fills the Spiritual Space of the Boneless Skinless Chicken Breast?

by Rochelle Bilow
Liz

Thoughts? I feel like this is always my problem with living vegetarian -- most meals have to be *involved* as a vegetarian ... sure, sometimes it'll just be eggs and toast or a baked sweet potato, but those feel like *giving up* dinners in a way that a simple chicken breast and some broccoli does not.

I would say tofu, chickpeas, lentils, and black beans all do a lot of heavy lifting in my cooking. There's a great Healthy-Ish recipe for chickpeas that can be done in 45 min in an instant pot, no soaking, and those can go into nearly anything or taste great alone with some greens and feta!

All of the simple, nutritious, and back-to-basics healthy vibes. None of the meat. READ MORE...
21 Jan 00:19

We Should Have Bought the DVDs

by Veronica Walsingham
Liz

I really hate this future we're about to live in. We should be better about choosing to support individual media we enjoy, both music and TV/movies, because our current streaming system will increasingly only support large corporations and not the content creators.

It’s 2022. I don’t know if I’ll ever own a house, but I can own my favorite television shows in their entirety.
10 Jan 20:04

These Disturbing, Hyperrealistic Pet Masks Are Truly the Stuff of Nightmares

by River Donaghey
Liz

cursed image

Are you a proud pet owner who secretly dreams of melding your corporeal form with your cat or dog in order to become some kind of half-human, half-beast monstrosity that violates all laws of God and man? If so, look no further!

On Thursday, AV Club uncovered a brand new service in Japan that will craft custom, hyperrealistic masks for pet owners based on their beloved animals—and the prototypes are deeply, deeply fucking terrifying.

Watch this if you dare:

A Japanese press release announcing the launch of this horrific project says that the whole thing was dreamed up by the companies Shindo Rinka and 91, the latter of which appears to specialize in life-like animal masks—so you'll have them to blame when these things catch on.

Images via PR Times

If, for some reason, you long to look like some kind of faux-Ancient Egyptian god or a knockoff "King" from Tekken and money is no object, just head over to the Shindo Rinka website and sign up.

All you have to do is send in some photos of your pet, and the mask maestros will pick up their fake fur and translucent whiskers and lifeless animal eyes and work their magic. You, uh, also have to shell out upwards of $3,000, according to Grape, which first reported the story—but hey, putting together a disturbingly realistic pet mask probably isn't cheap.

Of course, the true metamorphosis won't be finished until the company starts making tiny human masks for the pets to wear, completing the cursèd union between Man and Beast and inextricably bonding you both forever and always, in this world and the next, but I guess we'll have to wait for that one.

No promises your dog or cat won't fucking hate you after you show up one day wearing its own face, either. That's a risk you'll just have to take.


22 Nov 21:51

An Interview With the Woman Who Strongly Needed Her Coffee During a Live Impeachment Hearing

by Heather Schwedel
Liz

I love this woman.

12 Nov 20:34

9 Photographers Flipping the Script on Trans and Non-Binary Representation

by Laurence Philomène
Liz

Content warning: nudity!

I really enjoyed this, such a wide array of expression and beauty. I especially appreciated seeing the photos of "transfat" people (their term, lol) ... I think that there's a lot of pressure on trans ppl to have "ideal" passing bodies, and being softer/fatter is another way of breaking that societal expectation and I'm *here*for*it*

When I was in high school, every Thursday after class, I would take the metro to the big public library in downtown Montreal. There, I’d go straight to the photography section and grab every book I could find. I was filled with a voracious hunger for images, although unsure of what I was looking for in them. The first time I saw queer bodies in a photograph, I got my answer. The photo was in Wolfgang Tillmans’ book “Truth Study Center.” It was a sweaty picture of two white men kissing. I became obsessed with the book. I’d borrow it over and over and I studied it religiously. In it, I found an intimacy.

The first time I saw trans bodies in that library was in Nan Goldin's work, the classic "Ballad of Sexual Dependency," an autobiographical slideshow depicting intimate and mundane scenes in the artist's life in 1980s New York, including snapshots of post-Stonewall queer subculture. Soon after, I encountered Bettina Rheims's monograph "Modern Lovers," which offered minimalist black-and-white portraits of early 1990s gender-non-conforming youth. My life exists in two parts: before I saw these images, and after. It was like seeing myself in the mirror for the first time.

The Nan Goldin, Bettina Rheims, and Wolfgang Tillmans books made me, at 18, feel like I was a part of something bigger than myself. But as time went by, and I began to document myself, my friends, and my own transition, I started to question what it meant that all the images of trans bodies I was exposed to growing up were shot through a cisgender lens. What is left unsaid?

In order to try to answer that question, I reached out to nine gender non-conforming and trans photographers—some close friends of mine, and some artists whose work I've admired from afar—and asked each of them to send me a photo or project of their own that focuses on trans and/or non-binary selfhood or community. The idea was to explore the intimate dynamic that manifests when trans individuals witness each other (or themselves). I wanted to know: Why do we document ourselves? What does it mean to be seen? What happens when we see each other? Below are their answers. — Laurence Philomène

B. G-Osborne: "waiting for my new skin to bloom"

"These towels have existed in my grandmother’s Union Street house for decades—probably since the house was built in the 1950s. She washes them every week even though they are no longer used; I have no idea how they remain lively after that many washes. Rough like pumice stones, I remember the ritualistic rubbing and scratching myself raw after every bath with my tiny hands, the door closed and locked quietly, waiting for my new skin to bloom.” — B. G-Osborne

"Union"

Elle Pérez: "alternative possibilities of sex"

“The testosterone vial (bottom) and the platano palm (top) are linked by light, and through their juxtaposition form a different kind of portrait—a version that attempts to show what the experience of testosterone hormone therapy is like outside of the physical changes traceable on the body. The platano as a cultural touchstone is a hallmark of Puerto Rican cultural production, however, almost always the fruit and never the leaf. I am interested in the alternative possibilities of sex this vision of the platano offers when re-imagined as a body.” — Elle Pérez

EP-19-PH-035, Elle Pérez, gabriel, 2019. Digital silver gelatin print, 55 × 38 ⅜ inches (139.70 × 97.47 cm); 56 ½ × 39 ⅞ inches (143.51 × 101.28 cm) framed. Edition of 5 plus II AP.
EP-19-PH-034, Elle Pérez, t, 2019. Digital silver gelatin print, 55 × 38 ⅜ inches (139.70 × 97.47 cm); 56 ½ × 39 ⅞ inches (143.51 × 101.28 cm) framed. Edition of 5 plus II AP.

Hobbes Ginsberg: still alive

“These photos are from my recent book/show titled still alive, a series of self portraits that explore what it means to grow up and build a life for yourself. Still alive is a celebration of making it through another year without killing myself and learning to navigate my struggle with mental illness. Meandering through the story of an ever-changing self, these selfies question ideas of grandeur, of being an icon, and our relationship to our constructed environments. Vulnerable and hyper-saturated DIY tableaus explore what it means to find stability and self sufficiency, to become an 'adult' and what it looks like to survive as a queer person. It was really important for me as a trans artist, and especially someone who works a lot with their body, to make work that wasn’t specifically about being trans, and demand that my work be viewed that way. So often it feels like the things we make are only allowed to be about our trans-ness, and more often than not about that as a struggle, and it feels so reductive.” — Hobbes Ginsberg

Self Portrait at 25 (after Dorothea Tanning), 2018

Jess T. Dugan: Every Breath We Drew

“These self-portraits are from my ongoing series Every Breath We Drew, which explores the power of identity, desire, and connection through portraits of myself and others. I have always been driven to make work about my own life and experiences; I believe deeply in the importance of representation and hope that my work can be used as a catalyst to begin larger conversations about gender, identity, and sexuality. When I was coming out as a young queer person, I didn’t see myself represented in the broader culture. I first discovered images of queer and gender nonconforming people in fine art photography books, and this discovery was profoundly influential to me. One of my primary aims is to create, exhibit, and publish photographs depicting queer experience to fill society’s gap in representations of these lived experiences and embodiments.” —Jess T. Dugan

Self Portrait (Bath), 2013
Self Portrait (Muscle shirt), 2012

June T. Sanders: "photography as an act of love"

“This work initially came from a desire to make portraits of people I admire or care for or look up to in some way. But over time it’s sort of developed into a more complex representation and a posturing towards a radical, queer exchange. Now it’s also a way to make images that might approach a fantasy realm or a framework for past, present, and future embodiments. The working title for this body of work is Some Place Not Yet Here. A lot of the time, I’m thinking about how we as queer and trans people move through space, move through our bodies, and move through the landscapes surrounding us. And how we channel our own social and personal histories within those movements. I’m interested now in how an image might reflect these qualities and the questions, potentials, and emotional weight an image can hold for ourselves and our larger communities. This work is important to me, really, because it’s important to others. And because it’s allowed me to see photography as an act of love. I sometimes feel the most within my body and the most within a queer community when I’m photographing—and I think that comes from the immense amount of care and vulnerability that is offered to me from the people I make photographs with. The cathartic experience of making photographs, and the feedback I’ve gotten from people who see themselves reflected in the work are I think what motivates me the most now.” — June T. Sanders

Harpo & Ike, June Sanders
Fox, June Sanders

Lia Clay: "beauty and dimension beyond subversion"

“I’ve been photographing friends for the past year or so. Honestly, there isn’t any intention behind representation or identity … with these images. It’s a place where I want to leave that behind, just for once. I think the catalyst of me being where I am today is more to do with my identity as a trans woman, than that of a photographer. We barely get the chance to actually control how we are represented … most of it is leaning in on mass media and hoping they get it right. Sometimes they do … usually when they take a backseat. As a photographer trying to push further and further into the sphere of what is considered 'successful,' it becomes immediately clear that identity is a dangerous word. We have to work so hard to outgrow it… and push the lens elsewhere. We exist in a place in media where identity from those who identify outside the realm of cis-normativity has become demanded of us.

I don’t think people understand the weight that’s put on you when you’re a creative artist working for money. I feel so protective of the images I make with my friends. There’s this hyper-critical focus that’s come about because I am so scared someone will only see them as representations of identity. I don’t want them to rest on that … this isn’t about subverting the normative. This is about creating a realm of beauty and dimension beyond subversion. It’s about closeness and feeling a sense of safety in working together. It’s about being in control … for fucking once. A friend of mine told me a long time ago that I didn’t photograph her like a ‘trans woman,’ rather a woman. That’s the problem when cis photographers approach us. They aim their lens on what makes us different or piques the interest of the viewer. The reality is that it’s not for our benefit, it’s for theirs. They get praised for their 'brave' and 'challenging' viewpoint. We just get the Diane Arbus subtext.” — Lia Clay

Dylan, Fort Tilden, 2019

Marina Labarthe: Enby Spoken Histories

“Enby Spoken Histories is an archival storytelling project documenting the rich and colorful histories of non-binary and transgender individuals. We record, preserve, and disseminate stories told by the community to raise awareness, educate, and normalize our humanity. Although our identities are ancient and our stories have been passed down for generations, they remained undocumented and inaccessible to policy makers and the public at large.

Carter (co-founder of ENBY) and myself feel that our identities have been misunderstood by society and the general public throughout time. We are documenting our own histories, in the ways that we want to tell them, because no one else can do it for us. We are going to be visible and understood for who we truly are—human beings—no matter what it takes.

Being misunderstood by society often means facing violence, especially if you are a trans person of color. Enby Spoken Histories was born out of emotional need—a need for our community to feel seen, heard, reflected. ENBY is not only an archival storytelling project, but a movement striving to disrupt the systems in place that affect queer lives daily.

This piece was titled after a poem written by Bobby Sanchez before going into an interview at the StoryCorps booth in NY. It is one of many examples of ways in which people tell their stories and we work together as a community to lift them up.” — Marina Labarthe

Ancient Identities, ENBY Spoken Histories. (Pictured—left: Shai; right: Anonymous).

Shoog McDaniel: "star-filled moments of joy and wonder"

“Being a trans fat person means being twice as magical, but it also means attracting a lot of negative attention. Oftentimes when we walk through the streets and people look at us weird, it's hard to know if it's about our fatness or our transness. This is compounded if you are a person of color, with disabilities, or any other marginalized identities. It is very important for me to shine light on the magic of Transfats because we often exist so far out of the norm that we struggle daily to meet our basic needs in this world. Healthcare? Jobs? Love? Safety from systemic discrimination? A lot of things people take for granted. However, with that struggle comes beautiful, star-filled moments of joy and wonder, and I aim to capture those. I want to highlight the fact that when we come together to share space and time, our intersecting marginalizations actually create universes around us, taking us to a place free from judging eyes—even if just for a moment. When we come together, it is a powerful thing. When we love ourselves, it is writing a new story about how we will live our lives, not dictated by cis white men sleepwalkers. We are the dreamers, because we have to be. We have to imagine something better than this. That's what I aim to do through my work.” — Shoog McDaniel

Burr White, Shoog McDaniel
Matias Herrera, Shoog McDaniel
Self-Portrait, Shoog McDaniel

Texas Isaiah: "what has always existed"

“This image is of Tashan Lovemore from Black Trans TV. It serves as the genesis for a project I am working on, which explores, honors, and nurtures the contemporary history and presence of Black people who exist underneath a trans masc umbrella. The heart of my ideas, thoughts, and visions are rooted in what already exists and what has always existed. I am interested in contributing images to a visual culture that has not served many Black and Brown individuals. I am interested in inspiring others to document themselves and their communities because we deserve to tell our own stories. This image carries a dream I have been holding for quite some time. I haven't witnessed a ton of pictures of Black trans men taken by other Black trans men. I don't often see photographs of us smiling and engaging in healing remedies and conversations that can contribute to the overall discussions surrounding masculinity and manhood.” — Texas Isaiah

Tashan Lovemore, 2019


21 Oct 16:44

Add ‘Pseudo Thumb’ to the Aye-Aye Lemur’s Bizarre Anatomy

by JoAnna Klein
Liz

Extra Aye Aye content! Aye Ayes were forced to develop a thumb because their feeding finger is too specialized to use as a regular finger. Scientists were so interested in the feeding finger they totally didn't notice the extra digit until now.

"The aye-aye’s hands became so specialized at this tap foraging “that they lost the ability to grip,” Dr. Hartstone-Rose said. They compensated, he thinks, with pseudo thumbs.

If the researchers confirm their hypothesis by studying aye-aye lemurs living at the Duke Lemur Center in Durham, N.C., the aye-aye will be the first animal described to have evolved an extra digit for dexterity because of a hyper-specialized trait."

Hiding in the weird creature’s palm was something that scientists had missed.
13 Oct 16:28

Lizzo performs NPR Tiny Desk Concert: Watch

by Lake Schatz
Liz

EVERYONE WATCH THIS. What did we do to deserve Lizzo.

Last week, Lizzo treated fans to her video for “Tempo” featuring Missy Elliott. She’s back today with another exciting visual in the form of an NPR Tiny Desk Concert.

We've seen the Detroit-born rapper deliver massive, fully choreographed live performances on the international stage, which raises the question: Does her energy translate to an intimate setting, or as Lizzo called it, "a tiny ass desk"? The answer is a resounding yes.

(Read: 10 Female Rappers You Should Definitely Know About)

Although confined by the space of four walls, she brought just the right amount of power, flair, and Sasha Flute (!) to her Cuz I Love You set. Specifically, Lizzo threw down the title track, “Truth Hurts”, and “Juice”, one of our top songs of 2019. Replay it down below.

In addition to making her feature film debut in Hustlers, Lizzo will be on tour for the next couple of months. Tickets to all her upcoming shows can be purchased here.


09 Oct 18:18

Stuffed With Sockeye Salmon, 'Holly' Wins 'Fat Bear Week' Heavyweight Title

by Tom Goldman
Liz

fat bear share!

also guten morgen from Heidelberg!

"Holly" is the winner of this year

The annual competition is put on by Alaska's Katmai National Park & Preserve to choose the fattest bear. This year, Holly beat out Lefty in the championship round, winning 80 percent of the votes.

(Image credit: Katmai National Park & Preserve)

24 Sep 19:49

Why MIT Media Lab thought it was doing right by secretly accepting Jeffrey Epstein’s money

by Kelsey Piper
Liz

A thoughtful piece on the arguments for and against MIT Media Lab taking Epstein money.

Truly a shame, but I don't believe Ito is an irreplaceable leader, and maybe the next leader will be on the cutting edge not only in technology and research but also in actually ~paying attention~ when creeps show you who they are.

"[Epstein] even brought young women in tow when visiting the Media Lab to meet with senior researchers in person as a VIP. (“All of us women made it a point to be super nice to them,” Signe Swenson, a former employee at the lab, told the New Yorker. “We literally had a conversation about how, on the off chance that they’re not there by choice, we could maybe help them.”)"

A protester holds up a sign of Jeffrey Epstein's face in front of the federal courthouse in New York City after his arrest. | Stephanie Keith/Getty Images

Sometimes very smart people have a very bad idea.

On Friday, a New Yorker piece outlined in vivid, horrific detail how the MIT Media Lab took money from convicted sex offender Jeffrey Epstein — and how hard it worked to keep his contributions anonymous. Over the weekend, director Joi Ito resigned, and employees past and present came forward with stories of a deeply unhealthy internal culture — one that was obviously a product, in part, of its close, secret ties with Epstein.

The obvious question: What on earth were they thinking? The MIT Media Lab — an interdisciplinary research center affiliated with the Massachusetts Institute of Technology — was well regarded, well funded, had great publicity, and was attached to one of the world’s best universities. Why would they risk it all to attract donations from someone like Epstein? And how could people write emails like the ones revealed in the New Yorker piece — “jeffrey money, needs to be anonymous” — without realizing they were on the path to disaster?

On Sunday, we got a partial answer via an essay by Larry Lessig, a professor of law at Harvard Law School and the former director of the Edmond J. Safra Center for Ethics at Harvard University. He knew all along that the MIT Media Lab was taking Epstein’s money, he said. He thought it was the right thing to do. So, he says, did the team at the Media Lab.

Their justification is simple: If someone is a bad person, taking their anonymous donations is actually the best thing you can do. The money gets put to a better use, and they don’t get to accumulate prestige or connections from the donation because the public wouldn’t know about it.

This argument isn’t that eccentric. Within philanthropy, it has been seriously raised as a reasonable answer to the challenging question of how organizations should deal with donations from bad actors.

Now, the mess at the MIT Media Lab is forcing a reckoning.

The hope was that anonymity would make it harder for bad people to benefit reputationally from their giving. What happened instead was that anonymity became a shield to dodge accountability, transparency, and common sense. The secret was corrosive to the internal culture at the Media Lab, and in the end, it was a ticking time bomb guaranteed to eventually, disastrously explode.

In trying to think about doing philanthropy right, we’re back to the drawing board: How should organizations treat donations from bad actors?

The argument that anonymous donations from bad people are good, explained

Who would you rather have $5 million: Jeffrey Epstein, or a scientist who wants to use it for research? Presumably the scientist, right?

In a nutshell, that’s the argument against shaming people and organizations that take money from multimillionaires who arrived at their fortunes unethically — whether they’re involved in a company found to have used enslaved labor, accused of hiding evidence of the opioid crisis, or, in Epstein’s case, convicted of procuring minors for prostitution and suspected of having used their hedge fund to enable and cover for the abuse of girls and young women.

Of course, this argument overlooks some important considerations. Yes, it’s better for Jeffrey Epstein’s money to go to someone else — but what if Epstein is able to leverage the donations in order to accrue power, connections, and influence that lets him dodge additional charges? (He seems to have routinely used his power and connections to do exactly that.)

That’s where anonymous donations seemed like the perfect solution. If a donation is anonymous, the theory goes — that is, anonymous to the public — the giver cannot accrue any prestige or social capital from it. They can’t build connections. The gift doesn’t benefit them. And the money is now in better hands. What’s not to like?

“I think that universities should not be the launderers of reputation,” Lessig wrote in his essay on Sunday, defending the Epstein gifts to the MIT Media Lab. “I think that they should not accept blood money. Or more precisely, I believe that if they are going to accept blood money ... or the money from people convicted of a crime ..., they should only ever accept that money anonymously. Anonymity — or as my colleague Chris Robertson would put it, blinding — is the least a university should do to avoid becoming the mechanism through which great wrong is forgiven.”

Such an argument about anonymity might seem like a stretch. But there’s been some serious thinking about how anonymity might enable donations while reducing the influence of the donors. Take Yale Law professors Bruce Ackerman and Ian Ayres, who proposed that political donations be required to be anonymous — so you could support preferred candidates, but not purchase influence.

When the Met renounced further donations from the Sackler fortune associated with the opioid crisis, the decision spurred renewed interest in the idea that anonymity could separate the donor and the donation. “If museums, universities, opera houses and symphony halls stipulated that all donations had to be anonymous, the morality of the donor — or the need to assess whether ill-gotten gains lurked beneath a specific donation — would be a moot point,” one letter in the New York Times argued.

“Everyone seems to treat it as if the anonymity and secrecy around Epstein’s gift are a measure of some kind of moral failing,” Lessig writes. “I see it as exactly the opposite. ... Secrecy is the only saving virtue of accepting money like this.”

The MIT Media Lab fiasco proves that anonymity to stop donor influence doesn’t work ... at least, not the way they did it

Everyone involved may have gone into their partnership with Epstein with the best of intentions — or, at least, with some justifications in mind — but it’s hard to dispute that disaster resulted.

The New Yorker reports that Epstein was aggressively courted by the Media Lab, which consulted him about the use of funds. He even brought young women in tow when visiting the Media Lab to meet with senior researchers in person as a VIP. (“All of us women made it a point to be super nice to them,” Signe Swenson, a former employee at the lab, told the New Yorker. “We literally had a conversation about how, on the off chance that they’re not there by choice, we could maybe help them.”)

Researchers with concerns say they were ignored. Epstein was credited with securing contributions from other donors — up to $7.5 million in total. Whatever the Media Lab’s intentions, it seems that Epstein leveraged his connections with the institute to increase his personal standing. “Epstein used the status and prestige afforded him by his relationships with élite institutions to shield himself from accountability and continue his alleged predation,” the New Yorker argues.

Clearly, lots of things went wrong here. The first was that while the donations were anonymous in the sense of being secret from the public, they were known to the staff at the MIT Media Lab — meaning Epstein still got much of the influence that the anonymity was supposed to cut him off from. He got a say in how the Media Lab spent the money and got special opportunities for face-to-face time with influential people. The anonymity was meant to separate the man from the money. It did not do that.

“Donations are never fully anonymous,” a paper in Nature Human Behavior looking at the dynamics of anonymous donation points out. “These donations are often revealed to the recipient, the inner circle of friends or fellow do-gooders.”

Another problem was that Epstein’s anonymity was always fragile. On a few occasions detailed in the New Yorker piece, information identifying him as the Media Lab’s secret VIP donor was spread more widely than intended; eventually, of course, the whole situation became explosively public. The case for anonymous donations assumes they’ll remain anonymous forever. In reality, it only takes one disgruntled ex-staffer, strategic leak, or mistaken email forward to lift the thin veil of anonymity.

Trying to keep secrets like this is often bad for an institution's culture, too. In general, it's good for nonprofits to be transparent with their staff and with their other donors. The Epstein situation forced MIT Media Lab to evolve the sort of institutional culture that could keep Epstein secret — which meant, as the New Yorker piece describes, dismissing and ignoring the concerns of researchers who objected to the situation, keeping the scientists who objected to Epstein away when he visited, and floating trial balloons during the hiring process to screen out anyone who'd object to working with Epstein.

Others have pointed out that Epstein’s presence meant a hostile environment for the women who worked in the lab.

These examples make one thing clear: Keeping a big secret limits who can work for you. It has to be people who will keep the secret, either because they agree or because they’re intimidated into staying quiet. It requires you to pressure anyone who doesn’t agree with keeping the secret. This is bad for the internal culture of any nonprofit.

Finally, even if the intent of keeping a donation anonymous is shutting the donor off from influence, it’s easy to suspect there’s another motive at play: dodging controversy. If the MIT Media Lab had been required to disclose the donations from Epstein, they would also have been required to explain their justifications for taking money from him, their assessment of whether he posed a risk to his young “assistants,” and their assessment of whether they were giving him more contacts to leverage.

Now that the whole situation has come to light, those justifications look confused, ill-conceived, and incomplete. If they’d been subject to public scrutiny from the start, maybe that would have been apparent much sooner. The lack of scrutiny meant less need for the lab to justify itself — but more thought about the justification for taking the money might actually have been healthy and badly needed.

So how should we take money from bad actors?

For all those reasons, I’m no longer on board with the argument that it’s fine to take money from bad actors as long as it’s anonymous. It seems to have been incredibly destructive to the MIT Media Lab, for reasons that would apply elsewhere, too.

But what do we do instead?

I’d still prefer Epstein’s money be in any hands other than his. One thing worth noting is that a genuinely, truly anonymous donation would have been fine. If Epstein had sent a series of monthly small-dollar donations to the Media Lab, in varying amounts, and no one at the lab had ever learned that all these donations were from the same person let alone who that person was, then there’d have been no potential for institutional misconduct, undue influence, and connections to leverage. Of course, it’s hard to move huge amounts of money that way, and Epstein would almost certainly never have done this, precisely because he’d get nothing in return. But this kind of genuine anonymity seems fine to me.

What about when a bad actor approaches an organization about a donation they want to make?

There are downsides to any approach here. Declining the money has real costs, especially in the case of charities that desperately need it to continue providing essential services. Accepting it risks legitimizing the donor and empowering them to evade justice as Epstein did for so long.

On the whole, I think this situation makes the case for letting the donations happen openly in the light of day — and face appropriate public scrutiny and criticism. If the organization isn’t willing to weather the public criticism, that suggests that perhaps the trade-off is the wrong one.

This, of course, is a system that only works if there’s a robust public dialogue critiquing philanthropy while recognizing the good it has the potential to do in the right circumstances. Luckily, there’s been a lot more conversation about the role and effects of philanthropy in the last few years, and such a robust dialogue seems possible. And openness is protective against the things that went wrong at MIT — a culture of secrecy that enabled people to intellectualize the indefensible and end up giving cover to a sexual predator.


20 Sep 21:15

Male Dolphins Make Friends Based on Mutual Interests, Just Like Humans

by Sarah Emerson
Liz

SPONGERS.

Dolphins are socially complex creatures. They form cliques, shun rivals, and even dole out nicknames. They also create friendships over common interests, much like humans do, according to new research.

An investigation of Indo-Pacific bottlenose dolphins in Western Australia’s Shark Bay revealed that males prefer each other’s company while foraging for food, and forge friendships based on their preferred tool for the job.

Shark Bay’s dolphins are known for their tool use—specifically the use of sea sponges to cushion their beaks as they root around the ocean floor for fish. The behavior of these “spongers” was observed by scientists in the new study.

A bottlenose dolphin using a sponge to forage for prey in Shark Bay, Western Australia. Image: Stephanie King

Data collected from 124 male dolphins over nine years showed that spongers associated with one another based on their mutual interest in foraging techniques. Those who fed this way spent more time together than those who did not, according to findings published on Wednesday in the Proceedings of the Royal Society B.

“Foraging with a sponge is a time-consuming and largely solitary activity so it was long thought incompatible with the needs of male dolphins in Shark Bay—to invest time in forming close alliances with other males,” Simon Allen, a senior research associate at the University of Bristol’s School of Biological Sciences and co-author of the study, said in a statement.

The team looked at a subset of 13 spongers and 24 non-spongers between 2007 and 2015. Their analysis showed that while spongers gravitated toward one another, non-spongers also formed alliances, and both groups socialized for equal proportions of time.

This type of bonding has been previously observed in Shark Bay’s female dolphins. Janet Mann, a behavioral ecologist at Georgetown University who has studied this population, suggests that sponges are mostly used by females, due to “selective pressures they face while raising a calf as long as they do.” Foraging around rocks could yield prey that other dolphins are unable to find, Mann said.

The new research offers fresh insight into the lives of male dolphins.

“These strong bonds between males can last for decades and are critical to each male’s mating success,” Manuela Bizzozzero, an evolutionary geneticist at the University of Zurich and lead author of the study, said in a statement.

“We were very excited to discover alliances of spongers, dolphins forming close friendships with others with similar traits,” Bizzozzero said.

20 Sep 11:08

The Great Socialist Bake Off

by Josh Freedman
Liz

I want to curl up in the bosom of the great British baking show and never exit. Also a great way to deal with climate change anxiety. Warning: unwelcome season 7 spoilers!

The Great British Bake Off isn't just wonderful entertainment. By prizing cooperation over cutthroat competition and solidarity over selfishness, it's also quietly radical.


Great British Bake Off hosts Mel Giedroyc (left) and Sue Perkins (right) with judges Paul Hollywood (center left) and Mary Berry (center right). (PBS)

By any standard logic of entertainment, nobody should enjoy The Great British Bake Off. There is no fighting, no cursing, no backstabbing. The contestants seem to genuinely like each other. The B-roll footage between segments features extended shots of assorted flora and fauna. In short, the show is exactly what the title says it is: a bunch of ordinary Brits standing around their ovens trying to bake. Yet, somehow, this is a recipe for extraordinary television, even for the American appetite.

Plenty of critics have tried to parse the surprising allure of The Great British Bake Off — or, as it’s known in the United States, The Great British Baking Show, because the company Pillsbury somehow trademarked the word “bake-off.” (In America, even the cutest of doughboys are ruthless capitalists.) Many have suggested the show is popular because it provides a refuge from the mess of contemporary life: it offers “delight, distraction, and a dash of happiness” amid a nightmarish political and economic landscape. It is, they argue, “a panacea for wounded British souls in recent years — a reminder that, no matter how bad things get, the fabric of the nation is built upon cups of tea and feather-light sponges.”

I think the opposite is true: the Bake Off is quietly radical. I know this might sound ridiculous, but hear me out. Underneath the heaps of flour and steady stream of baking puns is a challenge to the assumptions we often make about competition, incentives, and power in the contemporary world order. The Bake Off is not a smooth buttercream frosting lubricating the ravages of modern capitalism, but a reproach to its very premises. It offers a vision of creativity, ambition, and hard work that holds up the beauty of individual flourishing without extolling ruthless, interpersonal competition.

What goes on inside the giant white tent — among the flora and fauna, somewhere in the British countryside — might better be called "The Great Socialist Bake Off."


Torte a Better Future

From the outside, The Great British Bake Off seems like any garden-variety cooking competition. Each season begins with twelve amateur bakers from around the UK competing in a series of weekly baking challenges. In each episode, the contestants are asked to produce three different baked items, all of which must be completed — baked, decorated, plated — within a given time limit. Two of these challenges are revealed ahead of time to allow bakers to practice at home; one, judged blind, is revealed on the set. At the end of an episode, the two judges anoint one person “star baker” of the week, while also eliminating one person from the competition. The last contestant standing is crowned the winner. He or she is, for all intents and purposes, the best amateur baker in Britain that year.

The structure of the Bake Off is clearly one of competition. There are judges, time limits, elimination, and, of course, winners. Yet it conspicuously lacks many of the elements we normally associate with competition. The title of "star baker" carries no practical value. Star bakers do not get immunity from elimination in the next round, or money, or brownie points from the judges. If anything, winning star baker makes a contestant more anxious, as he or she now expects the judges' standards to rise like a well-proofed sourdough.

Nor does the overall competition provide obvious incentives. The contestants pour their blood, sweat, and tears (often literally) into completing the tasks. One baker commented that the time in the tent was far and away the most stressful ten weeks of her entire life.

For their hard work, winners receive no money. They don’t get a guaranteed book deal or television show. All they win is a decorative cake stand — and a pretty underwhelming one at that. Yet when contestants reflect on their experience on the show, emotion overwhelms them. They talk about baking the way runners talk about finishing a marathon or setting a new personal best. They are satisfied not because they have pushed others aside and risen to the top, but simply because they are proud of what they have baked. They are humans doing what they enjoy, to the best of their ability — not solipsistic maximizers who will only pursue their goals if they have the right monetary incentives to do so.

Capitalism’s ideologues often extol the extraordinary individual. They insist that only by offering unlimited incentives to the most economically productive (the job creators, the entrepreneurs, the prodigies) can we maximize growth and thus have the most prosperous society. The outsize focus on the extraordinary individual is the backbone of reality television, too. Contestants are often selected for their uncommon traits, which are then exaggerated and weaponized. To artificially inflate the level of drama, shows play up the extreme differences between individuals. What could be more dramatic, or foster more creativity, than a tiny bagpipe player competing for scarce resources against an eight-foot-tall ninja?

Bake Off contestants are almost impossibly ordinary. All of the contestants are amateur bakers. Their hobbies run the gamut from spending time with family to walking their dogs. None of the bakers were raised by wolves or suffer from a rare form of cancer whose only cure is baking a Victoria sponge. As Alex Clark writes in the Guardian, “Bake Off’s charm has always lain in its refusal to conform to the rampant back-story prurience of other reality television shows, the high-definition equivalent of curtain-twitching combined with a vapid belief in personal journeys.”

There are no ninjas, only nurses or teachers or college students — people who struggle to find time to practice their baking because they are grading papers or working a second job to make ends meet. (Even an appearance on television cannot excuse them from day-to-day economic exploitation.)

Yet these amateur contestants are not just baking run-of-the-mill cakes — they are creating impressive, unique, edible works of art. Bake Off is a celebration of the average worker, of the man or woman on the street. Contra the trickle-down ideology of neoliberal capitalism, Bake Off reminds us that ordinary individuals are capable of extraordinary creativity. Excessive, pernicious competition not only fails to incentivize regular individuals, it diminishes their spirit. And while the white tent is a far cry from a worker’s paradise, the attitude the bakers express is reminiscent of Orwell’s description of his 1936 visit to leftist-run Barcelona: “Human beings were trying to behave as human beings and not as cogs in the capitalist machine.”


There’s No Pie in Team

In The Great British Bake Off, compassion and sympathy take center stage. Rather than suppress those human impulses, as capitalist competition often encourages, the Bake Off allows them to flourish. A failure in the baking tent is not greeted with schadenfreude but with understanding and support.

Take the third episode of the seventh season. Candice, the eventual season winner, struggles to get her brioche loaf out of the oven and onto the plate. She isn’t sure if it is cooked, and time is running out. She has no choice but to take it out and pray that it is baked all the way through. More immediately, she needs to find a way to get the piping hot brioche out of the pan before the clock hits zero.

The other contestants could have watched with glee as they saw one of the most talented bakers struggle, knowing their own chances of success would increase. Without hesitation, however, three fellow bakers rush over to assist Candice with her brioche. They manage to extract the cake from the pan and present it to the judges. (It was still undercooked, but it would have been worse had it been stuck in the pan.) Contestants — ostensibly competing for the same prize — assist each other so often that BuzzFeed even made a list of the times that bakers stepped in to offer a helping hand.

Selasi, a semifinalist in the same season, explains: “Everyone is very, very friendly and helping each other, and it doesn’t actually feel like a competition. It ends up feeling like a group of friends, baking together in the same kitchen and just having fun.”

It turns out this is not accidental — breaking the vise grip of vindictive competition requires active, determined effort. The initial impetus came from the show’s original hosts, the comedians Mel Giedroyc and Sue Perkins, whose job is to introduce challenges, make baking puns, and offer sympathy to the bakers by reminding them that the hosts themselves know nothing about baking.

Mel and Sue apparently walked off the set in the first season “because of the attempts to manufacture drama, X Factor-style, which had left contestants in tears.” Perkins elaborated: “What we were saying was ‘Let’s try and do this a different way’ — and no one ever cried again. Maybe they cry because their soufflé collapsed, but nobody’s crying because someone’s going, ‘Does this mean a lot about your grandmother?’ Many of the bakers have sad stories, but guess what? We never touch on any of them.”

We normally assume that the drama of reality television comes from interpersonal competition, as only one contestant can attain the coveted prize. For contestants, the competitors are their peers: some people have to win, so other people have to lose. Once contestants identify the target of competition, the sabotage and subterfuge begins: if you deviate in any way from the ruthless path of maximal efficiency, someone else will swoop in and take your place, leaving you with nothing but a soggy bottom.

So how does the Bake Off manage to achieve riveting television without cutthroat competition? Mel and Sue’s supportive environment does not eliminate the thrill of competition, but redirects it away from petty feuds between different contestants. The competition is between contestants and themselves (to do their best) and between contestants and the judges: namely, Paul Hollywood — officious cook, bread aficionado, and bearer of the fakest-sounding real name of all time.

Mel and Sue are constantly on the prowl for material to lampoon, but their most prominent target is Hollywood: it is as if their job is not only to introduce challenges and make puns but to cut Paul Hollywood’s ego down to size. Paul Hollywood has power, and arrogance oozes out of his pores. Mel and Sue are there to give the rest of us a sort of winking acknowledgment that nobody likes people like that. There’s no need to make fun of contestants when you can make fun of the egotistical man-boy in the front of the room.

All of the contestants feel they are together in battling the menace of Paul Hollywood. Nobody wants anybody to be subject to a withering criticism from such an unsympathetic man. When Candice’s brioche turns out to be raw inside, the other contestants aren’t elated that their foe has been knocked down; rather, they shoot her sympathetic looks and rush over to console her, knowing full well that a similar unsparing cake knife may slice through their own creations soon enough.

Val, another contestant on season seven, suggests that the bakers were not only teammates but fellow soldiers. Referring to Selasi, she says, “I remember I was doing my gingerbread, and it collapsed. I’m going ‘Ahhh!’ and I think it was you who said, ‘I’ve got some glitter.’ You threw some glitter over to me and put icing sugar and glitter all over it and we just kept it going. And that’s what bakers do. You shout out that you need a bowl or a whisk, and one of your baking friends will get it for you. You know those trenches in the war? It’s kind of like that.”

Although war may be a bit of an extreme comparison — despite the scores of gingerbread men that have fallen in the Bake Off line of duty — the solidarity that Val is referring to is so palpable you can taste it. The bakers are more like teammates than competitors, more partners than rivals. Being part of a team shifts the pressure off of any individual, giving each person the space to succeed. Creativity (and puns) abound.


Choux Me the Money

Unfortunately, even the giant white tent of socialism cannot completely escape the onslaught of neoliberal capitalism. In 2016, the production company decided to move the show to a new channel in pursuit of higher profits (BBC, Britain’s public broadcaster and the original channel, offered to double the price it had been paying, but the production company refused to accept anything lower than quadruple). Mel and Sue quit, saying they were “not going with the dough.” So did Mary Berry, the judge with the second-fakest-sounding real name of all time. Only Paul Hollywood stayed on.

The new version of the show features another Mary Berry (Prue Leith) and two new hosts. I hesitated to watch it, feeling almost like an anti-union scab betraying the radical vision of Mel and Sue. Thankfully, even if the new version can never fully shake off the stench of corporate greed enshrouding its rebirth, it still maintains the fundamental spirit of the original show.

The hosts are more outlandish with their antics but still promote cooperation and solidarity. The contestants still view Paul Hollywood with the same mix of fear, respect, and defiance. And the bakers are still mind-bogglingly regular people whose unabashed ordinariness would disqualify them from any other show on television.

They are still just human beings trying to behave as human beings, not cogs in the capitalist machine.


10 Sep 00:42

In Which My Mom Ranks My Best and Worst Tattoos

by Crystal Anderson
Liz

this is endearing and also an interesting look at the complete history and motivations of another person's tattoos.


I got my first tattoo on February 19, 2000, a week after my 18th birthday. Tattooing was illegal in South Carolina at the time, where I lived, so I knew I’d need to drive 90 miles north to North Carolina to get it. My dad was on board, but I had to beg and plead with my mom—it never occurred to me to go against her wishes. When she finally agreed, my dad drove me to a trailer right on the North Carolina/South Carolina border with metal siding and a neon sign that said, “Closest tats to SC!”

I arrived with my design in hand: a crown that looked dangerously similar to the Hallmark logo (in that it was the Hallmark logo) with “Princess” written in a bold cursive font found on Microsoft Word underneath it. A biker put me in a chair and told me if I twitched or moved, he’d leave me with ½ a tattoo and make me pay for the whole thing. Paul, my dad, chuckled the entire time. I left feeling like a grown-ass woman—a Queen if you will—even though I was sporting a tattoo that said otherwise.

In hindsight, my parents are the fucking coolest. A lot of my friends hid their first tattoos until they flew the coop, and there I was, crossing state lines with my dad to get mine done. And yet, 20 tattoos later, my mom is still shocked every time I get a new one. Curious to see where she stands on all my body art today, I asked for her honest appraisals of some of my best and worst tattoos. Below, meet the inimitable Lydia Delores Anderson, first of her name, thrower of precision shade, holder of the hottest of takes. If you know my mom, you know she’s never short on absurd anecdotes or quotes or general advice that is either life-changing or makes zero sense.

1. The aforementioned PRINCESS tramp stamp, age 18


Mom and I are having a text convo about my tats. I remind her that she told me when I was 18 that I ran the risk of being paralyzed if the tattoo needle went into my spine. (1. Lydia is not a doctor and 2. that risk is literally not a possibility!) When I ask her what she thinks about it she says she’s fine with it now, because you can’t see it with clothes on and that it represented who I was at that time. Can’t tell if that’s shade or not, but here we are….

2. The crucifix tattoo on my ankle, age 22

Obviously me and every other elder millennial has this tattoo after seeing early-aughts haute gal Nicole Richie with her version. I switched mine up a bit and got my family’s names all tattooed around my ankle like links on a chain. Lyds of course likes this one the most because “it has my name on it.” My mom is vain and ridiculous.

3. “The World is Mine” Scarface tattoo between my shoulder blades, age 23

I really hate this tattoo. It’s of the “The World is Mine” sculpture in the film Scarface and it’s my least favorite of the whole lot. I still don’t know why I got it. I think I had this convoluted idea that it was to celebrate my Italian heritage, but spoiler alert, Tony was Cuban and the movie was violent and I’m an idiot. Also, the hand on this tattoo looks like it belongs to the Crypt Keeper and the globe is not the least bit geographically accurate. Like not even a little bit. Not a spoiler alert, because everyone saw this one coming: Mama HATES this tattoo. Also she has jokes! “Not a fan, and the world isn’t yours, it’s ours.”

4. “Stay Woke” on the top of my left wrist, age 34

I mean, this phrase really resonated with me when I got it, but then got run into the ground, so I’m now thinking about getting it covered up. I’ve never asked my mom about this one, so I’m intrigued to see what she has to say! She is nothing if not a political junkie and believer in creating a better world for black folks, so I think she might be down for this one. (Three minutes pass, enough time for her to craft a sweet and shady response.) Yup, as I thought: She loves this one because of the social justice significance, but says she’d love it more if I had none of the rest.

5. My weird tribal-ish hand tat, age 35

I dunno, I can take or leave this guy. I got it when I was on vacation in Utah. All of my friends went to Colorado for the day to check out the legal weed and, as a non-smoker, I had to find something to occupy my time. An hour later I ended up with this. I don’t dislike it, per se, I just don’t care about it. Lydia does not mince words: “No, definitely not. You didn’t tell me you were getting it and I thought it would hinder you from getting a job.” My mom thinks I’m an investment banker or a hand model, you guys. Also, she double-replies to let me know that she’s just waiting patiently through this exercise so that I can show her my knuckle tattoos and she can then unleash the fury of 1000 tongues on me.

6. Alfred E. Neuman of Mad magazine, age 33

I got this as an ode to my Dad. He loves Mad magazine and always wanted this tattoo but Mama Anderson forbade him. So I decided to get it for two reasons: 1. To honor him by getting something inked that meant so much to him, and also have him hand write this phrase that he said every morning before I left for school: “Don’t take any wooden nickels” and 2. To get under my Mom’s skin, lol. It didn’t work because she really, really loves this one. She loves how much it means to my dad and how much pain I went through to honor him in that way. That said, I know she’s a little annoyed that she loves it so much.

7. “High Life” tattooed across my knuckles, age 37

I joke a lot about my mom being a shade queen, but she has always given me the freedom to express myself both in how I speak and how I present myself visually. My parents let me wear whatever I wanted growing up—no matter how kooky—and let me be my truest self (including speaking exclusively in an English accent for weeks when I was seven). So while the tattoos annoy her, she never gives me too much grief about them.

That said, while I was sitting in the tattoo chair for this one, I knew she was going to be pissed. We always have a tattoo agreement that stretches and shifts after every new bit of ink. First it was, “Okay, get ‘em as long as they are on your back and can be covered with clothes,” then it was, “It’s fine as long as your sleeves can cover the ones on your arms,” and so on and so forth until we reached: “For the love of god, Crystal, please, please, please no face, neck, or knuckle tattoos.”

Cut to my hand tats. I got my High Life tattoo in New Orleans and was actually scared to tell her, so I did what any self-respecting kid would do: I posted it on Instagram and waited for the call…..but alas, no call came. What did come was a video that my sister took the moment mom found out: You can see the utter confusion and dread overcome my mom’s face and hear my sister cackling in the background while my mom shrieks, “Is this real? This isn’t real! Why the hell would she do that?!” Then you see her reach for her phone to give me more than a piece of her mind. Sounds awful, but is truly hilarious.

Which is to say: Lydia says she hates this one, not because of the words but because I look like I just got out of the clink (her words): “I mean, Christina, you need to grow up and do better. Gal, why would you do that? You know what…I know what the issue is! You have too much money. That’s the problem.” (My mom calls me Christina sometimes and that is NOT my name. Never has been.)

I know she means well and isn’t being mean—she’s just old school and southern and thinks the tattoo is unladylike, but surely she knows I’m not a “lady” by now.


I wanna thank Lydia a.k.a. Mama a.k.a. Lyds for being so sweet and amazing and down to walk down memory lane with me on this. I shall repay her by never getting a face tattoo. But also, like, never say never, ammirite?


05 Sep 21:40

Not a Human, but a Dancer

by Ed Yong
Liz

in which the Killers are invoked to talk about why animals (including us) might develop dancing as a behavior.

Before he became an internet sensation, before he made scientists reconsider the nature of dancing, before the children’s book and the Taco Bell commercial, Snowball was just a young parrot, looking for a home.

His owner had realized that he couldn’t care for the sulfur-crested cockatoo any longer. So in August 2007, he dropped Snowball off at the Bird Lovers Only rescue center in Dyer, Indiana—along with a Backstreet Boys CD, and a tip that the bird loved to dance. Sure enough, when the center’s director, Irena Schulz, played “Everybody,” Snowball “immediately broke out into his headbanging, bad-boy dance,” she recalls. She took a grainy video, uploaded it to YouTube, and sent a link to some bird-enthusiast friends. Within a month, Snowball became a celebrity. When a Tonight Show producer called to arrange an interview, Schulz thought it was a prank.

Among the video’s 6.2 million viewers was Aniruddh Patel, and he was blown away. Patel, a neuroscientist, had recently published a paper asking why dancing—a near-universal trait among human cultures—was seemingly absent in other animals. Some species jump excitedly to music, but not in time. Some can be trained to perform dancelike actions, as in canine freestyle, but don’t do so naturally. Some birds make fancy courtship “dances,” but “they’re not listening to another bird laying down a complex beat,” says Patel, who is now at Tufts University. True dancing is spontaneous rhythmic movement to external music. Our closest companions, dogs and cats, don’t do that. Neither do our closest relatives, monkeys and other primates.

Patel reasoned that dancing requires strong connections between brain regions involved in hearing and movement, and that such mental hardware would only exist in vocal learners—animals that can imitate the sounds they hear. That elite club excludes dogs, cats, and other primates, but includes elephants, dolphins, songbirds, and parrots. “When someone sent me a video of Snowball, I was primed to jump on it,” Patel says.

In 2008, he tested Snowball’s ability to keep time with versions of “Everybody” that had been slowed down or sped up. In almost every case, the parrot successfully banged his head and lifted his feet in time. Much like human children, he often went offbeat, but his performance was consistent enough to satisfy Patel. Another team, led by Adena Schachner, came to the same conclusion after similar experiments with Snowball and another celebrity parrot—the late Alex. Both studies, published in 2009, reshaped our understanding of animal dance.

[Read: Can science teach us how to dance sexier?]

Meanwhile, Snowball was going through his own dance dance revolution. Schulz kept exposing him to new music, and learned that he likes Pink, Lady Gaga, Queen, and Bruno Mars. He favored songs with a strong 4/4 beat, but could also cope with the unorthodox 5/4 time signature of Dave Brubeck’s “Take Five.” “For the first half, Snowball struggled to find a dance that would fit,” Schulz says, “but about halfway through, he found moves that would work. The more that he was exposed to different music, the more creative he became.”

Snowball wasn’t copying Schulz. When she danced with him, she’d only ever sway or wave her arms. He, meanwhile, kept innovating. In 2008, Patel’s undergraduate student R. Joanne Jao Keehn filmed these moves, while Snowball danced to “Another One Bites the Dust” and “Girls Just Want to Have Fun.” And recently, after a long delay caused by various life events, she combed through the muted footage and cataloged 14 individual moves (plus two combinations). Snowball strikes poses. He body rolls, and swings his head through half circles, and headbangs with a raised foot. To the extent that a parrot can, he vogues.

Compare these two videos. First up, classic Snowball:

And now a medley of new-and-improved Snowball:

“Coding his movements was more challenging than I thought,” says Keehn, now a professor at San Diego State University, and a classically trained dancer herself. “I’m used to thinking about my body, but I had to solve the correspondence problem and work out what he’s doing with his. Headbangs were easy: I have a head. But sometimes, he’d use his crest. Unfortunately, I don’t have one.”

These newly published observations cement the human-ness of Snowball’s dancing. His initial headbangs and foot-lifts are movements that parrots naturally make while walking or courting. But his newer set aren’t based on any standard, innate behaviors. He came up with them himself, and he uses them for different kinds of music. “This is what we would genuinely refer to as dance, both in the scientific community and in the dance profession,” says Nicola Clayton of the University of Cambridge, who studies bird cognition. “It’s amazing.”

“Snowball’s style is like any human who would go out regularly to a nightclub,” adds Erich Jarvis, a neuroscientist at Rockefeller University. “We rarely repeat the same moves on the same parts of the same song. We are more flexible than that.” (Both Jarvis and Clayton are dancers themselves, and both danced with Snowball at a 2009 science festival.)

The Snowball studies are “a rare type that we should do more of,” Jarvis adds. “Someone with a pet animal that performs interesting behaviors is approached by a scientist to study that behavior. If we did more of these, we’d gain a much better appreciation of nonhuman species.”

[Read: A journey into the animal mind]

Snowball’s abilities are all the more impressive because they’re so rare. Ronan the sea lion, for example, was recently filmed bobbing her head to music (including, again, the Backstreet Boys), but she was trained. And when Schachner combed through thousands of YouTube videos in search of animals that could be charitably described as dancing, she found only 15 species that fit the bill. One was the Asian elephant, which sometimes sways and swings its trunk to music. The other 14 species were parrots.

“Parrots are more closely related to dinosaurs than to us,” Patel says, and yet they are the only other animals known to show both spontaneous and diverse dancing to music. “This suggests to me that dancing in human cultures isn’t a purely arbitrary invention,” Patel says. Instead, he suggests that it arises when animals have a particular quintet of mental skills and predilections:

  1. They must be complex vocal learners, with the accompanying ability to connect sound and movement.
  2. They must be able to imitate movements.
  3. They must be able to learn complex sequences of actions.
  4. They must be attentive to the movements of others.
  5. They must form long-term social bonds.

A brain that checks off all five traits is “the kind of brain that has the impulse to move to music,” Patel says. “In our own evolution, when these five things came together, we were primed to become dancers.” If he’s right, that settles the eternal question posed by The Killers. Are we human, or are we dancer? We’re both.

Parrots also tick off all five traits, as do elephants and dolphins. But outside of trained performances, “do you ever see a dolphin do anything to music spontaneously, creatively, and diversely?” Patel asks. “I don’t know if it’s been studied.” He wonders whether animals need not only five traits that create an impulse to dance, but also a lot of exposure to humans and our music. Captive dolphins don’t get much musical experience, and even though they interact with trainers, their main social bonds are still with other dolphins. But Snowball, from an early age, lived with humans. He seemingly dances for attention, rather than for food or other rewards. And he appears to dance more continuously when Schulz dances with him—something that Patel will formally analyze in a future study.

Fortunately, he has plenty of time. Snowball is in his 20s, and in captivity, his species has an average life span of 65 years. “They have a personality of a 3-year-old, but they live for 50 years,” Patel says. For that reason, “Irena keeps on telling people to be cautious about getting a parrot because they want to see if it dances.”

03 Sep 18:56

The magical thinking of guys who love logic

Liz

I think this is hitting on so many things I'm trying to wrap my head around right now.

There is an ongoing bastardization of "logic" and "rational thinking" in the alt-right movement. I am trying to understand this whole thing a bit more because I mentor a student who is very alt-right leaning (I refer to him as debate boy privately) and he wants to engage on a lot of this stuff and the things he finds "rational" and "logical" just seem so off to me.

Why so many men online love to use “logic” to win an argument, and then disappear before they can find out they're wrong.
03 Sep 18:18

White Claw Is What Happens When Being Cool Becomes Exhausting

by Amanda Mull
Liz

My previous feelings on the White Claw Summer were that it was a weird product of health-obsessed insta culture, but this article made me feel a lot more warm to it. I especially like the insight about this alcohol trend being accessible, as opposed to the last however many years of chasing rare beer and expensive cocktails.

I've definitely found myself just looking for tasty, lo-cal, low-ABV beer options more and more. Unsure I will make the switch to WC, but I think I'm more inclined to try it now.

Last week, the local police department in Portland, Maine, delivered a reminder to its community via Twitter: There are, in fact, laws when you’re drinking Claws. A few days later, cops in Kenosha, Wisconsin, did the same on Facebook. Authorities in Bath Township, Michigan, then took the warning one step further, eliciting more than 1,000 Facebook comments on a post that reminded people there are indeed consequences to getting “white girl wasted” on the popular brand of boozy seltzer.

In each instance, police were referring to a viral joke about the hard-seltzer brand White Claw: “Ain’t no laws when you’re drinking Claws.” The phrase, often accompanied by a doctored version of the brand’s wave logo, has been emblazoned on shirts, koozies, and flags since it emerged from YouTube a couple of months ago. And it’s only the most popular in a litany of White Claw–centric memes that have popped up all over the internet, as Claws themselves have made their way into the hands of beachgoers and cookout attendees across America. Even summer itself has become a White Claw meme. Instead of Megan Thee Stallion’s “Hot Girl Summer,” seltzer acolytes have renamed the season “White Claw Summer.”

If all this enthusiasm for getting absolutely twisted on a lightly flavored, low-alcohol grocery-store beverage sounds sort of lame to you, you’re not wrong. But you’ve missed the point. Tweeting or Instagramming about spiking your already spiked seltzer with a little (gasp) vodka or the rapturous joy of a buy-one-get-one-free sale on Claws is about as basic as lining up at your local Starbucks on the first day of pumpkin-spice-latte season. That’s also why it’s fun.

Americans under 40 are wary of the calories and carbs associated with beers and sugary concoctions. These concerns have contributed to a decline of nearly 3 percent in the American beer market since 2015 and a general stagnation in the country’s alcohol sales. But ready-to-drink canned beverages have become a beacon of hope for American booze brands. The sales of distilled-liquor cocktails, flavored malt beverages, and hard seltzers have skyrocketed in the past year. White Claw, which has more than half of hard seltzer’s U.S. market share, has been especially buoyant. Through May, sales of the brand’s assorted-flavor multipacks alone surged 320 percent over the same period last year.

A major factor in hard seltzer’s current popularity is what it’s not: difficult or aspirational. Being a cool young drinker has had a lot of arbitrary rules in the past decade. For much of the 2010s, booze trends have centered around limited-edition, high-alcohol craft beers and booze-heavy, professionally assembled cocktails. These trends have demanded that young people learn the ins and outs of booze culture; have a willingness to pursue the stores, bars, and breweries that meet their very particular tastes; and have the ability to spend some money to try new things. To get the full experience, those drinks also have to be aesthetically pleasing—all the better to document on Instagram, to show off your generationally and socioeconomically appropriate good taste.

White Claw’s appeal, meanwhile, is that it rejects standards. Hard seltzer is exactly what it sounds like: fizzy water in a can with a pinch of sugar, a dash of fruit flavor, and roughly the same amount of alcohol as light beer. It’s cold, drinkable, and doesn’t taste like much. It neatly satisfies young consumers’ desires for affordable, convenient, portable, low-calorie, healthy-seeming alcohol options. In other words, it’s the perfect drink for people exhausted by rules. Maybe “Ain’t no rules when you’re drinking Claws” would be a more accurate meme, but that doesn’t rhyme.

Trends are born when groups of people grow bored with the things they’re supposed to like. When it comes to fashionable foods and beverages, interest in laborious traditional methods usually lasts 10 to 15 years before people revive their curiosity in the quick-and-easy virtues of culinary technology, as Ken Albala, a food historian at the University of the Pacific, previously told me. White Claw, introduced in 2016, came along at just the right time for people in their 20s and 30s to want something new. (White Claw did not respond to a request for comment on its summer of memes.)

Hard seltzer was sometimes dismissed as “girly” when it debuted, but the Claw has managed to transcend the drink’s gendered beginnings. The memes themselves both embrace and poke fun at bro-y stereotypes, and the simple black-and-white cans are seemingly as beloved by too-cool coastal creatives as they are by aging frat boys and young parents. All these groups have lived much of their adult life under the aesthetic tyranny of Instagram-determined good taste. After craft cocktails, funky IPAs, and attempts to acquire an affinity for whiskey neat, maybe nothing tastes better than giving up.

White Claw cans aren’t ugly, but they also aren’t particularly cute, and there’s a layer of ironic remove in Instagramming one the way the 2014 version of you might have photographed a frosty, copper-mugged Moscow mule. This time you’re in on the joke that got pulled on your former self, who was somewhat embarrassingly trying to cosplay as an influencer without getting paid for it. As long as you can avoid committing any crimes, maybe your White Claw Summer never has to end.

27 Aug 16:34

Matt Groening confirms Apu won’t be written off The Simpsons

by Lake Schatz
Liz

I'm mostly sharing this so I can instead share this far more interesting look into the Quebec dub of the Simpsons. Read it!
https://montreal.ctvnews.ca/how-do-you-say-d-oh-in-joual-tweeter-breaks-down-quebec-s-simpsons-translation-1.4540015

Also welcome your thoughts on what to do about Apu though.

Since last fall, the future of The Simpsons character Apu Nahasapeemapetilon has been up in the air due to mounting controversy over the show’s racial stereotyping. Now, creator Matt Groening has officially confirmed that our favorite Kwik-E-Mart employee won’t be written out of the series.

Groening made the announcement during Disney’s D23 Expo in Anaheim over the weekend. According to Variety, when asked about Apu’s status going forward, he made clear that the character was here to stay. “Yes. We love Apu,” he reportedly said. “We’re proud of Apu.”

The racial debate surrounding Apu reached a boiling point following Hari Kondabolu’s 2017 documentary The Problem with Apu, while the rumors regarding Apu’s exit came last October via YouTube personality and Castlevania producer Adi Shankar. In an interview for IndieWire, he said The Simpsons planned to “drop the Apu character” altogether “just to avoid the controversy.”

(Read: The Top 30 Episodes of The Simpsons)

Al Jean, the long-running TV show’s executive producer, responded at the time by saying Shankar did “not speak for our show.” However, he offered no definitive clarification regarding Apu’s status on the series.

Hank Azaria, who voices Apu as well as other Simpsons characters, chimed into the conversation by saying he’d be “perfectly happy and willing to step aside,” adding, “listening to voices means inclusion in the writer’s room. I really want to see Indians, South Asian writers in the room. Not in a token way, but genuinely informing whatever new direction this character may take. Including how it is voiced, or not voiced.”

The 31st season of The Simpsons premieres September 29th. An early look at the season was revealed last week and featured a parody of Donald Trump.


22 Aug 08:47

Imagine Turning Your Motorcycle Accident Into a Photo Shoot

by Amanda Arnold
Liz

also gloriously ridiculous

With every passing day, a new influencer finds themselves embroiled in incredibly absurd drama — nude vegan bloggers are forced to respond to backlash that they’re faking their off-the-grid life... More »
21 Aug 16:23

I Think About This a Lot: The Time Robert Pattinson Blatantly Lied on Today Show

by Dana Schwartz
Liz

Holy shit this is hilarious and makes me like Rob Pattinson a lot more.

I Think About This a Lot is a series dedicated to private memes: images, videos, and other random trivia we are doomed to play forever on loop in our minds. More »
16 Aug 00:12

The Anthropocene Is a Joke

by Peter Brannen
Liz

A healthy dose of humility for the consequences of our actions.

"If, in the final 7,000 years of their reign, dinosaurs became hyperintelligent, built a civilization, started asteroid mining, and did so for centuries before forgetting to carry the one on an orbital calculation, thereby sending that famous valedictory six-mile space rock hurtling senselessly toward the Earth themselves—it would be virtually impossible to tell. All we do know is that an asteroid did hit, and that the fossils in the millions of years afterward look very different than in the millions of years prior.


So that’s what 180 million years of complete dominance buys you in the fossil record."

Humans are now living in a new geological epoch of our own making: the Anthropocene. Or so we’re told. Whereas some epochs in Earth history stretch more than 40 million years, this new chapter started maybe 400 years ago, when carbon dioxide dipped by a few parts per million in the atmosphere. Or perhaps, as a panel of scientists voted earlier this year, the epoch started as recently as 75 years ago, when atomic weapons began to dust the planet with an evanescence of strange radioisotopes.

These are unusual claims about geology, a field that typically deals with mile-thick packages of rock stacked up over tens of millions of years, wherein entire mountain ranges are born and weather away to nothing within a single unit of time, in which extremely precise rock dates—single-frame snapshots from deep time—can come with 50,000-year error bars, a span almost 10 times as long as all of recorded human history. If having an epoch shorter than an error bar seems strange, well, so is the Anthropocene.

So what to make of this new “epoch” of geological time? Do we deserve it? Sure, humans move around an unbelievable amount of rock every year, profoundly reshaping the world in our own image. And, yes, we’re currently warping the chemistry of the atmosphere and oceans violently, and in ways that have analogues in only a few terrifying chapters buried deep in Earth’s history. Each year we spew more than 100 times as much CO2 into the air as volcanoes do, and we’re currently overseeing the biggest disruption to the planet’s nitrogen cycle in 2.5 billion years. But despite this incredible effort, all is vanity. Very little of our handiwork will survive the obliteration of the ages. If 100 million years can easily wear the Himalayas flat, what chance will San Francisco or New York have?

Read: The cataclysmic break that (maybe) occurred in 1950

The idea of the Anthropocene is an interesting thought experiment. For those invested in the stratigraphic arcana of this infinitesimal moment in time, it serves as a useful catalog of our junk. But it can also serve to inflate humanity’s legacy on an ever-churning planet that will quickly destroy—or conceal forever—even our most awesome creations.

What paltry smudge of artifacts we do leave behind, in those rare corners of the continents where sediment accumulates and is quickly buried—safe from erosion’s continuous defacing—will be extremely unlikely to be exposed at the surface, at any given time, at any given place, tens of millions or hundreds of millions of years in the geological future. Sic transit gloria mundi.

Perhaps, someday, our signal in the rocks will be found, but only if eagle-eyed stratigraphers, from God knows where on the tree of life, crisscross their own rearranged Earth, assiduously trying to find us. But they would be unlikely to be rewarded for their effort. At the end of all their travels—after cataloging all the bedrock of the entire planet—they might finally be led to an odd, razor-thin stratum hiding halfway up some eroding, far-flung desert canyon. If they then somehow found an accompanying plaque left behind by humanity that purports to assign this unusual layer its own epoch—sandwiched in these cliffs, and embarrassed above and below by gigantic edifices of limestone, siltstone, and shale—this claim would amount to evidence of little more than our own species’ astounding anthropocentrism. Unless we fast learn how to endure on this planet, and on a scale far beyond anything we’ve yet proved ourselves capable of, the detritus of civilization will be quickly devoured by the maw of deep time.

Geological time is deep beyond all comprehension. If you were to run a 26.2-mile marathon covering the entire retrospective sweep of Earth’s history, the first five-foot stride would land you two Ice Ages ago and more than 150,000 years before the whole history of human civilization. In other words, geologically and to a first approximation, all of recorded human history is irrelevant: a subliminally fast 5,000-year span that is over almost as soon as you first lift up your heel, crammed entirely into the very end of an otherwise humdrum Pleistocene Ice Age interglacial. (NB: That this otherwise typical and temporary warm spell of the Pleistocene has also been strangely given its own epoch, the so-called Holocene—quite unlike the dozens of similar interglacials that came before it—is the original sin of anthropocentric geology.)
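
(If you want to check the scaling behind that marathon image, here is a minimal back-of-the-envelope sketch in Python. The inputs are my assumptions rather than figures stated outright in the essay: an Earth age of roughly 4.54 billion years, a 26.2-mile course, and a first stride of five feet.)

    # Rough check of the "marathon through Earth history" analogy.
    # Assumed inputs: Earth age ~4.54 billion years, marathon = 26.2 miles,
    # first stride = 5 feet.
    EARTH_AGE_YEARS = 4.54e9
    MARATHON_FEET = 26.2 * 5280   # about 138,336 feet of course
    STRIDE_FEET = 5

    years_per_foot = EARTH_AGE_YEARS / MARATHON_FEET
    first_stride_years = STRIDE_FEET * years_per_foot

    print(f"Each foot of the course spans about {years_per_foot:,.0f} years")
    print(f"The first stride covers about {first_stride_years:,.0f} years")
    # Prints roughly 32,800 years per foot and about 164,000 years for the
    # first stride, consistent with landing well before recorded history.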

If instead your marathon was forward in time, after one stride the oceans and atmosphere would have just about recovered from our wild chemistry experiment on the planet, and no surface record of human civilization would yet remain. Another stride would plunge you into another true Pleistocene-style Ice Age, with seas 400 feet lower than they are today. That missing water would instead be locked up in massive ice sheets that now bear down on the continents, plowing moraines into future islands, obliterating everything in their path, and spewing glaciers at their crumbling margins. These plots of land were once, in some forgotten time, called New York City, or Illinois. All of future time would stretch out before you. After a mile and a half, the continents would reunite in one of their many iterations of the supercontinent cycle, its shores and mountain valleys hosting creatures beyond imagining. Not only will humanity not be a part of this picture, but virtually no geological record will remain of us whatsoever. Not plastic birthday balloons, not piles of denuded chicken bones, not Charlton Heston shaking his fist at some littoral colossus. It will all be worn away, destroyed, or hidden forever.

Read: Geology’s timekeepers are feuding

For context, let’s compare the eventual geological legacy of humanity (somewhat unfairly) to that of the dinosaurs, whose reign spanned many epochs and lasted a functionally eternal 180 million years—36,000 times as long as recorded human history so far. But you would never know this near-endless age was so thoroughly dominated by the terrible reptiles by looking to the rock record of the entire eastern half of North America. Here, dinosaurs scarcely left behind a record at all. And not because they weren’t here the entire time—with millions of generations of untold dinosaurs living, hunting, mating, dying, foraging, migrating, evolving, and enduring throughout, up and down the continent, in great herds and in solitary ambushes. But the number of sites within that entire yawning span, and over these thousands of square miles, where they could have been preserved—or that weren’t destroyed by later erosion, or that happen to be exposed at the surface today—was vanishingly small.

Yes, billions of dinosaur bodies died and fell to the Earth here in this span, and trillions more dinosaur footsteps pressed into the Earth, but hardly a trace remains today. A cryptic smattering of lakeside footprints represents their entire contribution to the Triassic period. A few bones and footsteps miraculously preserved in New England and Nova Scotia are all that remains from the entire 27-million-year Early Jurassic epoch. No trace of dinosaurs remains whatsoever from the 18-million-year Late Jurassic. A handful of bones from one layer in Maryland represents the entire 45-million-year Early Cretaceous; the Late Cretaceous gives up a Hadrosaurus in New Jersey, and part of a tyrannosaur in Alabama, but mostly comprises unimpressive fragments of bone and teeth that cover the remaining 34 million years of the Earth’s most storied age, until doomsday. If one wanted to know what a particular 10-, 100-, or 1,000-year span was like, buried in this vastness of time (or, even worse, in some particular region of the continent), good luck.

This astounding paucity can be explained by the fact that there just aren’t that many rocks that survived these extreme gulfs of time, over this vast province. And even among those rocks that did survive, and which are exposed today, the conditions for fossil preservation were rare beyond measure. Each fossil was its own miracle, sampled randomly from almost 200 million years of history—a few stray, windblown pages of a library.

If, in the final 7,000 years of their reign, dinosaurs became hyperintelligent, built a civilization, started asteroid mining, and did so for centuries before forgetting to carry the one on an orbital calculation, thereby sending that famous valedictory six-mile space rock hurtling senselessly toward the Earth themselves—it would be virtually impossible to tell. All we do know is that an asteroid did hit, and that the fossils in the millions of years afterward look very different than in the millions of years prior.

Read: What caused the dinosaur extinction?

So that’s what 180 million years of complete dominance buys you in the fossil record. What, then, will a few decades of industrial civilization get us? This is the central question of the Anthropocene—an epoch that supposedly started, not tens of millions of years ago, but perhaps during the Truman administration. Will our influence on the rock record really be so profound to geologists 100 million years from now, whoever they are, that they would look back and be tempted to declare the past few decades or centuries a bona fide epoch of its own?

An important thing to keep in mind about paleontology is that most fossil-bearing rock outcrops are marine—that is, they’re from the bottom of the sea. As a result, we have a much higher resolution of the history of life in the oceans than on land. That’s because the sea, for the most part, is where sediment goes to accumulate. Things fall apart on land and in general get destroyed by weathering and erosion, and get carried to the sea as sand grains and silt and in solution. If it weren’t for the ceaseless creation of new mountain ranges, the surface of the Earth would quickly be rendered flat. Yes, some cities, such as New Orleans, Dhaka, and Beijing, sit in subsiding sedimentary basins and, at first pass, seem promising candidates for preservation. But as the example of the dinosaurs shows, the chance that any city-swallowing delta deposit from a window of time only a few centuries wide would be lucky enough to be not only buried and preserved for safekeeping, but then subsequently not destroyed—in the ravenous maw of a subduction zone, or sinking too close to the cleansing metamorphic forge of Earth’s mantle, or mutilated in some mountain-making continental collision—and then, after all that, find itself, at a given point in the far future, fantastically lucky enough to have been serendipitously pushed up just enough so as to be exposed at the surface, but not too high as to have been quickly destroyed by erosion … is virtually nil. In the Grand Canyon, and over much of the Southwest U.S. (and even across the entire world), there’s a billion-year gap between rock formations. That history—that former forever—as marvelous as it may have been in that region of the world, will never be recovered.

Even worse for our long-term preservation—long after humanity’s brief, artificial greenhouse fever—we’re very likely to return to our regularly scheduled programming and dive back into a punishing Ice Age in the next half-million years. This means that sea level—after shooting up in the coming millennia by our own hand, and potentially burying coastal settlements in sediment (good for fossilization)—will eventually fall hundreds of feet below where it is today, and subject the shallow continental shelves, along with our once submerged cities and magnificent seams of garbage, to the cold winds of erosion (bad for fossilization), where they’ll be mostly reduced to nothing. Meanwhile, the top half of our continent will be scoured clean by ice sheets. The lone and level sands stretch far away.

But what would we leave on the seafloor, where most sedimentary rock is made, where most of the fossils are, and where we have a slightly better chance of recording our decades-long “epoch” in the rocks? Well, many marine sediments in the fossil record accumulated, over untold eons, from the diaphanous snowfall of plankton and silt, at a rate of little more than a centimeter per thousand years. Given this loose metric (and our current maturity as a species), a dozen centimeters of muck seems an optimistic goal for civilization.
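
(A minimal sketch of the arithmetic implied by that "dozen centimeters," assuming the essay's loose rate of about one centimeter of deep-sea sediment per thousand years; the helper function and figures below are illustrative, not taken from the essay itself.)

    # Assumed rate, per the essay's loose metric: ~1 cm of deep-sea sediment
    # accumulates every 1,000 years.
    CM_PER_1000_YEARS = 1.0

    def sediment_cm(duration_years):
        """Centimeters of sediment laid down over a span of years."""
        return CM_PER_1000_YEARS * duration_years / 1000.0

    target_cm = 12
    years_needed = target_cm / CM_PER_1000_YEARS * 1000

    print(f"Recorded history so far (~5,000 years): {sediment_cm(5_000):.0f} cm")
    print(f"A dozen centimeters implies lasting about {years_needed:,.0f} years")
    # At this rate, 12 cm corresponds to roughly 12,000 years of civilization,
    # which is why the essay calls it an optimistic goal.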

A dozen centimeters is a pathetic epoch, but epoch or not, it would be an extremely interesting layer. It’s tempting to think a whisper of atomic-weapons testing would remain. The Promethean fire unleashed by the Manhattan Project was an earth-changing invention, its strange fallout destined to endure in some form as an unmistakable geological marker of the Anthropocene. But the longest-lived radioisotope from radioactive fallout, iodine-129, has a half-life of less than 16 million years. If there were a nuclear holocaust in the Triassic, among warring prosauropods, we wouldn’t know about it.

What else of us could be sampled from this sliver of deep-sea-muck-turned-rock—these Anthropocene clays and shale layers? Pass it through a mass spectrometer and you would see, encoded in its elements, the story of the entire planet in this strange interval, the Great Derangement of the Earth’s systems by civilization. You would see our lightning-fast injection of hundreds of gigatons of light carbon into the atmosphere written in the strange skew of carbon isotopes in this rock—as you do in rocks from the many previous carbon-cycle disasters of Earth history. The massive global-warming pulse created by this carbon disaster would be written in oxygen isotopes. The sulfur, nitrogen, thallium, and uranium isotopes in these rocks (to mention just a few) would whisper to you—again, in squiggles on a graph—that the global ocean lost much of its oxygen during this brief but enigmatic interval. Strontium isotopes would tell you that rock weathering dramatically accelerated worldwide for a few tens of thousands of years as sweltering, violent storms attacked the rocks and wore down the continents during a brief, CO2-driven fever.

These trace isotopes may be the most enduring signals of humanity, together telling much of the story of our strange centuries, in only a few centimeters of ocean rock. They will speak, to those who know how to listen, of life-supporting geochemical cycles going haywire in an eyeblink of geological time, hinted at in small samples from our seam of strange strata that interrupts mile-thick formations of otherwise normal rock. Plastic, that ubiquitous pollutant of the oceans, might be detectable by analyzing small samples of this sediment—appearing, like many organic biomarkers in the fossil record, as a rumor of strangely heavy hydrocarbons. Unassuming peaks on a chromatograph would stand in for all of modernity. Perhaps, perhaps, if one was extremely lucky in surveying this strange layer, across miles of desert-canyon walls, a lone, carbonized, and unrecognizable piece of fishing equipment may sit perplexingly embedded in this dark line in the cliffs. Some “epoch” this.

The most enduring geological legacy, instead, will be the extinctions we cause. The first wave of human-driven extinctions, and the largest hit to terrestrial megafauna since the extinction of the dinosaurs, began tens of thousands of years ago, as people began to spread out into new continents and islands, wiping out everything we tend to think of as “Ice Age” fauna—mammoths, mastodons, giant wombats, giant ground sloths, giant armadillos, woolly rhinoceroses, giant beavers, etc. This early, staggered, human-driven extinction event is as reasonable a starting date as any for the Anthropocene and one that has, in fact, been proposed. However, a few thousand years—or even a few tens of thousands of years—will be virtually indistinguishable in the rocks a hundred million years hence. That is, it would not be obvious to the geologists of the far future that these prehistoric human-caused extinctions were not simultaneous with our own modern-day depredations on the environment. The clear-cutting of the rain forest to build roads and palm-oil plantations, the plowing of the seabed on a continental scale, the rapid changes to the ocean and atmosphere’s chemistry, and all the rest would appear simultaneous with the extinction of the woolly mammoth. To future geologists, the modern debate about whether the Anthropocene started 10 minutes ago or 10,000 years ago will be a bit like arguing with your spouse on your 50th wedding anniversary about which nanosecond you got married.

Read: In a few centuries, cows could be the largest land animals left

What humans are doing on the planet, then, unless we endure for millions to tens of millions of years, is extremely transient. In fact, there exists a better word in geology than epoch to describe our moment in the sun thus far: event. Indeed, there have been many similarly disruptive, rapid, and unusual episodes scattered throughout Earth history—wild climate fluctuations, dramatic sea-level rises and falls, global ocean-chemistry disasters, and biodiversity catastrophes. They appear as strange lines in the rock, but no one calls them epochs. Some reach the arbitrary threshold of “mass extinction,” but many have no name. Moreover, lasting only a few tens of thousands to hundreds of thousands of years in duration, they’re all considered events. In our marathon of Earth history, the epochs would occasionally pass by on the side of the road like towns, while these point-like “events” would present themselves to us only fleetingly, like pebbles underfoot.

Fifty-six million years ago, the Earth belched 5,000 gigatons of carbon (the equivalent of burning all our fossil-fuel reserves) over roughly 5,000 years into the oceans and atmosphere, and the planet warmed 5 to 8 degrees Celsius. The warming set off megafloods and storms, and wiped out coral reefs globally. It took the planet more than 150,000 years to cool off. But this “Paleocene-Eocene Thermal Maximum” is considered an event.

Thirty-eight million years before that, buried in the backwaters of the late Cretaceous, CO2 jumped as many as 2,400 parts per million, the planet warmed perhaps 8 degrees Celsius, the ocean lost half its oxygen (in our own time, the ocean has lost a—still alarming—2 percent of its oxygen), and seawater reached 36 degrees Celsius (97 degrees Fahrenheit) over much of the globe. Extinction swept through the seas. In all, it took more than half a million years. This was Cretaceous Oceanic Anoxic Event 2. Though it was no epoch, if you had been born 200,000 years into this event, you’d die roughly 300,000 years before it was over.

A similar catastrophe struck 28 million years before, in the early Cretaceous, and again 60 million years earlier still in the Jurassic. And, again, 201 million years ago. And halfway through the Triassic, 234 million years ago. And 250 million, 252 million, and 262 million years ago. The first major mass extinction, 445 million years ago, took place in multiple pulses across a million years. An event. The second major mass extinction, 70 million years later, took place over 600,000 years—400,000 years longer than the evolutionary history of Homo sapiens. These are transformative, planet-changing paroxysms that last on the order of hundreds of thousands of years, reroute the trajectory of life, and leave little more than strange black lines in the rocks, buried within giant stacks of rocks that make up the broader epochs. But none of them constitute epochs in and of themselves. All were events, and all—at only a few tens of thousands, to hundreds of thousands of years—were blisteringly short.

The idea that we’re in a new epoch is a profoundly optimistic one, for it implies that we’ll persist into the future as an industrial technological civilization on something like a geological timescale. It implies that we are at the dawning of the astrobiologist David Grinspoon’s “Sapiezoic Eon”—that expansive, creative, open-ended future in which human technology represents a new and enduring feature of the planet on par with the biological innovations of the Cambrian Explosion—rather than heading for the impending, terminal consummation of a major mass extinction, ending with all the conclusive destruction of apocalypses past.

Until we prove ourselves capable of an Anthropocene worthy of the name, perhaps we should more humbly refer to this provisional moment of Earth history that we’re living through as we do the many other disruptive spasms in the geological record. Though dreadfully less catchy, perhaps we could call it the “Mid-Pleistocene Thermal Maximum.” After all, though the mammoths are gone, their Ice Age is only on hold, delayed as it is for a few tens of thousands of years by the coming greenhouse fever. Or perhaps we’re living through the “Pleistocene Carbon Isotope Excursion,” as we call many of the mysterious global paroxysms from the earliest era of animal life, the Paleozoic. Or maybe we’re even at the dawning of the “Quaternary Anoxic Event” or, God forbid, the “End-Pleistocene Mass Extinction” if shit really hits the fan in the next few centuries. But please, not the Anthropocene. You wouldn’t stand next to a T. rex being vaporized 66 million years ago and be tempted to announce the dawning of the hour-long Asteroidocene. You would at least wait for the dust to settle before declaring the dawn of the age of mammals.

The idea of the Anthropocene inflates our own importance by promising eternal geological life to our creations. It is of a piece with our species’ peculiar, self-styled exceptionalism—from the animal kingdom, from nature, from the systems that govern it, and from time itself. This illusion may, in the long run, get us all killed. We haven’t earned an Anthropocene epoch yet. If someday in the distant future we have, it will be an astounding testament to a species that, after a colicky, globe-threatening infancy, learned that it was not separate from Earth history, but a contiguous part of the systems that have kept this miraculous marble world habitable for billions of years.

30 Jul 00:01

Papa Roach roasts Trump

by Alex Young
Liz

This is the most I've ever liked Papa Roach.

26 Jun 21:52

Your Professional Decline Is Coming (Much) Sooner Than You Think

by Arthur C. Brooks
Liz

Long but interesting read. Reinforces something I've been thinking a lot about lately, which is to diversify your identity as a person.

"It’s not true that no one needs you anymore.”

These words came from an elderly woman sitting behind me on a late-night flight from Los Angeles to Washington, D.C. The plane was dark and quiet. A man I assumed to be her husband murmured almost inaudibly in response, something to the effect of “I wish I was dead.”

Again, the woman: “Oh, stop saying that.”

I didn’t mean to eavesdrop, but couldn’t help it. I listened with morbid fascination, forming an image of the man in my head as they talked. I imagined someone who had worked hard all his life in relative obscurity, someone with unfulfilled dreams—perhaps of the degree he never attained, the career he never pursued, the company he never started.

At the end of the flight, as the lights switched on, I finally got a look at the desolate man. I was shocked. I recognized him—he was, and still is, world-famous. Then in his mid‑80s, he was beloved as a hero for his courage, patriotism, and accomplishments many decades ago.

As he walked up the aisle of the plane behind me, other passengers greeted him with veneration. Standing at the door of the cockpit, the pilot stopped him and said, “Sir, I have admired you since I was a little boy.” The older man—apparently wishing for death just a few minutes earlier—beamed with pride at the recognition of his past glories.

For selfish reasons, I couldn’t get the cognitive dissonance of that scene out of my mind. It was the summer of 2015, shortly after my 51st birthday. I was not world-famous like the man on the plane, but my professional life was going very well. I was the president of a flourishing Washington think tank, the American Enterprise Institute. I had written some best-selling books. People came to my speeches. My columns were published in The New York Times.

But I had started to wonder: Can I really keep this going? I work like a maniac. But even if I stayed at it 12 hours a day, seven days a week, at some point my career would slow and stop. And when it did, what then? Would I one day be looking back wistfully and wishing I were dead? Was there anything I could do, starting now, to give myself a shot at avoiding misery—and maybe even achieve happiness—when the music inevitably stops?

Though these questions were personal, I decided to approach them as the social scientist I am, treating them as a research project. It felt unnatural—like a surgeon taking out his own appendix. But I plunged ahead, and for the past four years, I have been on a quest to figure out how to turn my eventual professional decline from a matter of dread into an opportunity for progress.

Here’s what I’ve found.

The field of “happiness studies” has boomed over the past two decades, and a consensus has developed about well-being as we advance through life. In The Happiness Curve: Why Life Gets Better After 50, Jonathan Rauch, a Brookings Institution scholar and an Atlantic contributing editor, reviews the strong evidence suggesting that the happiness of most adults declines through their 30s and 40s, then bottoms out in their early 50s. Nothing about this pattern is set in stone, of course. But the data seem eerily consistent with my experience: My 40s and early 50s were not an especially happy period of my life, notwithstanding my professional fortunes.

So what can people expect after that, based on the data? The news is mixed. Almost all studies of happiness over the life span show that, in wealthier countries, most people’s contentment starts to increase again in their 50s, until age 70 or so. That is where things get less predictable, however. After 70, some people stay steady in happiness; others get happier until death. Others—men in particular—see their happiness plummet. Indeed, depression and suicide rates for men increase after age 75.

This last group would seem to include the hero on the plane. A few researchers have looked at this cohort to understand what drives their unhappiness. It is, in a word, irrelevance. In 2007, a team of academic researchers at UCLA and Princeton analyzed data on more than 1,000 older adults. Their findings, published in the Journal of Gerontology, showed that senior citizens who rarely or never “felt useful” were nearly three times as likely as those who frequently felt useful to develop a mild disability, and were more than three times as likely to have died during the course of the study.

One might think that gifted and accomplished people, such as the man on the plane, would be less susceptible than others to this sense of irrelevance; after all, accomplishment is a well-documented source of happiness. If current accomplishment brings happiness, then shouldn’t the memory of that accomplishment provide some happiness as well?

Maybe not. Though the literature on this question is sparse, giftedness and achievements early in life do not appear to provide an insurance policy against suffering later on. In 1999, Carole Holahan and Charles Holahan, psychologists at the University of Texas, published an influential paper in The International Journal of Aging and Human Development that looked at hundreds of older adults who early in life had been identified as highly gifted. The Holahans’ conclusion: “Learning at a younger age of membership in a study of intellectual giftedness was related to … less favorable psychological well-being at age eighty.”

This study may simply be showing that it’s hard to live up to high expectations, and that telling your kid she is a genius is not necessarily good parenting. (The Holahans surmise that the children identified as gifted might have made intellectual ability more central to their self-appraisal, creating “unrealistic expectations for success” and causing them to fail to “take into account the many other life influences on success and recognition.”) However, abundant evidence suggests that the waning of ability in people of high accomplishment is especially brutal psychologically. Consider professional athletes, many of whom struggle profoundly after their sports career ends. Tragic examples abound, involving depression, addiction, or suicide; unhappiness in retired athletes may even be the norm, at least temporarily. A study published in the Journal of Applied Sport Psychology in 2003, which charted the life satisfaction of former Olympic athletes, found that they generally struggled with a low sense of personal control when they first stopped competing.

Recently, I asked Dominique Dawes, a former Olympic gold-medal gymnast, how normal life felt after competing and winning at the highest levels. She told me that she is happy, but that the adjustment wasn’t easy—and still isn’t, even though she won her last Olympic medal in 2000. “My Olympic self would ruin my marriage and leave my kids feeling inadequate,” she told me, because it is so demanding and hard-driving. “Living life as if every day is an Olympics only makes those around me miserable.”

Why might former elite performers have such a hard time? No academic research has yet proved this, but I strongly suspect that the memory of remarkable ability, if that is the source of one’s self-worth, might, for some, provide an invidious contrast to a later, less remarkable life. “Unhappy is he who depends on success to be happy,” Alex Dias Ribeiro, a former Formula 1 race-car driver, once wrote. “For such a person, the end of a successful career is the end of the line. His destiny is to die of bitterness or to search for more success in other careers and to go on living from success to success until he falls dead. In this case, there will not be life after success.”

Call it the Principle of Psychoprofessional Gravitation: the idea that the agony of professional oblivion is directly related to the height of professional prestige previously achieved, and to one’s emotional attachment to that prestige. Problems related to achieving professional success might appear to be a pretty good species of problem to have; even raising this issue risks seeming precious. But if you reach professional heights and are deeply invested in being high up, you can suffer mightily when you inevitably fall. That’s the man on the plane. Maybe that will be you, too. And, without significant intervention, I suspect it will be me.

The Principle of Psychoprofessional Gravitation can help explain the many cases of people who have done work of world-historical significance yet wind up feeling like failures. Take Charles Darwin, who was just 22 when he set out on his five-year voyage aboard the Beagle in 1831. Returning at 27, he was celebrated throughout Europe for his discoveries in botany and zoology, and for his early theories of evolution. Over the next 30 years, Darwin took enormous pride in sitting atop the celebrity-scientist pecking order, developing his theories and publishing them as books and essays—the most famous being On the Origin of Species, in 1859.

But as Darwin progressed into his 50s, he stagnated; he hit a wall in his research. At the same time an Austrian monk by the name of Gregor Mendel discovered what Darwin needed to continue his work: the theory of genetic inheritance. Unfortunately, Mendel’s work was published in an obscure academic journal and Darwin never saw it—and in any case, Darwin did not have the mathematical ability to understand it. From then on he made little progress. Depressed in his later years, he wrote to a close friend, “I have not the heart or strength at my age to begin any investigation lasting years, which is the only thing which I enjoy.”

Presumably, Darwin would be pleasantly surprised to learn how his fame grew after his death, in 1882. From what he could see when he was old, however, the world had passed him by, and he had become irrelevant. That could have been Darwin on the plane behind me that night.

It also could have been a younger version of me, because I have had precocious experience with professional decline.

As a child, I had just one goal: to be the world’s greatest French-horn player. I worked at it slavishly, practicing hours a day, seeking out the best teachers, and playing in any ensemble I could find. I had pictures of famous horn players on my bedroom wall for inspiration. And for a while, I thought my dream might come true. At 19, I left college to take a job playing professionally in a touring chamber-music ensemble. My plan was to keep rising through the classical-music ranks, joining a top symphony orchestra in a few years or maybe even becoming a soloist—the most exalted job a classical musician can hold.

But then, in my early 20s, a strange thing happened: I started getting worse. To this day, I have no idea why. My technique began to suffer, and I had no explanation for it. Nothing helped. I visited great teachers and practiced more, but I couldn’t get back to where I had been. Pieces that had been easy to play became hard; pieces that had been hard became impossible.

Perhaps the worst moment in my young but flailing career came at age 22, when I was performing at Carnegie Hall. While delivering a short speech about the music I was about to play, I stepped forward, lost my footing, and fell off the stage into the audience. On the way home from the concert, I mused darkly that the experience was surely a message from God.

But I sputtered along for nine more years. I took a position in the City Orchestra of Barcelona, where I increased my practicing but my playing gradually deteriorated. Eventually I found a job teaching at a small music conservatory in Florida, hoping for a magical turnaround that never materialized. Realizing that maybe I ought to hedge my bets, I went back to college via distance learning, and earned my bachelor’s degree shortly before my 30th birthday. I secretly continued my studies at night, earning a master’s degree in economics a year later. Finally I had to admit defeat: I was never going to turn around my faltering musical career. So at 31 I gave up, abandoning my musical aspirations entirely, to pursue a doctorate in public policy.

Life goes on, right? Sort of. After finishing my studies, I became a university professor, a job I enjoyed. But I still thought every day about my beloved first vocation. Even now, I regularly dream that I am onstage, and wake to remember that my childhood aspirations are now only phantasms.

I am lucky to have accepted my decline at a young enough age that I could redirect my life into a new line of work. Still, to this day, the sting of that early decline makes these words difficult to write. I vowed to myself that it wouldn’t ever happen again.

Will it happen again? In some professions, early decline is inescapable. No one expects an Olympic athlete to remain competitive until age 60. But in many physically nondemanding occupations, we implicitly reject the inevitability of decline before very old age. Sure, our quads and hamstrings may weaken a little as we age. But as long as we retain our marbles, our quality of work as a writer, lawyer, executive, or entrepreneur should remain high up to the very end, right? Many people think so. I recently met a man a bit older than I am who told me he planned to “push it until the wheels came off.” In effect, he planned to stay at the very top of his game by any means necessary, and then keel over.

But the odds are he won’t be able to. The data are shockingly clear that for most people, in most fields, decline starts earlier than almost anyone thinks.

According to research by Dean Keith Simonton, a professor emeritus of psychology at UC Davis and one of the world’s leading experts on the trajectories of creative careers, success and productivity increase for the first 20 years after the inception of a career, on average. So if you start a career in earnest at 30, expect to do your best work around 50 and go into decline soon after that.

The specific timing of peak and decline varies somewhat depending on the field. Benjamin Jones, a professor of strategy and entrepreneurship at Northwestern University’s Kellogg School of Management, has spent years studying when people are most likely to make prizewinning scientific discoveries and develop key inventions. His findings can be summarized by this little ditty:

Age is, of course, a fever chill
that every physicist must fear.
He’s better dead than living still
when once he’s past his thirtieth year.

The author of those gloomy lines? Paul Dirac, a winner of the 1933 Nobel Prize in Physics.

Dirac overstates the point, but only a little. Looking at major inventors and Nobel winners going back more than a century, Jones has found that the most common age for producing a magnum opus is the late 30s. He has shown that the likelihood of a major discovery increases steadily through one’s 20s and 30s and then declines through one’s 40s, 50s, and 60s. Are there outliers? Of course. But the likelihood of producing a major innovation at age 70 is approximately what it was at age 20—almost nonexistent.

Much of literary achievement follows a similar pattern. Simonton has shown that poets peak in their early 40s. Novelists generally take a little longer. When Martin Hill Ortiz, a poet and novelist, collected data on New York Times fiction best sellers from 1960 to 2015, he found that authors were likeliest to reach the No. 1 spot in their 40s and 50s. Despite the famous productivity of a few novelists well into old age, Ortiz shows a steep drop-off in the chance of writing a best seller after the age of 70. (Some nonfiction writers—especially historians—peak later, as we shall see in a minute.)

Entrepreneurs peak and decline earlier, on average. After earning fame and fortune in their 20s, many tech entrepreneurs are in creative decline by age 30. In 2014, the Harvard Business Review reported that founders of enterprises valued at $1 billion or more by venture capitalists tend to cluster in the 20-to-34 age range. Subsequent research has found that the clustering might be slightly later, but all studies in this area have found that the majority of successful start-ups have founders under age 50.

This research concerns people at the very top of professions that are atypical. But the basic finding appears to apply more broadly. Scholars at Boston College’s Center for Retirement Research studied a wide variety of jobs and found considerable susceptibility to age-related decline in fields ranging from policing to nursing. Other research has found that the best-performing home-plate umpires in Major League Baseball have 18 years less experience and are 23 years younger than the worst-performing umpires (who are 56.1 years old, on average). Among air traffic controllers, the age-related decline is so sharp—and the potential consequences of decline-related errors so dire—that the mandatory retirement age is 56.

In sum, if your profession requires mental processing speed or significant analytic capabilities—the kind of profession most college graduates occupy—noticeable decline is probably going to set in earlier than you imagine.

Sorry.

If decline not only is inevitable but also happens earlier than most of us expect, what should we do when it comes for us?

Whole sections of bookstores are dedicated to becoming successful. The shelves are packed with titles like The Science of Getting Rich and The 7 Habits of Highly Effective People. There is no section marked “Managing Your Professional Decline.”

But some people have managed their declines well. Consider the case of Johann Sebastian Bach. Born in 1685 to a long line of prominent musicians in central Germany, Bach quickly distinguished himself as a musical genius. In his 65 years, he composed more than 1,000 works for all the available instrumentations of his day.

Early in his career, Bach was considered an astoundingly gifted organist and improviser. Commissions rolled in; royalty sought him out; young composers emulated his style. He enjoyed real prestige.

But it didn’t last—in no small part because his career was overtaken by musical trends ushered in by, among others, his own son, Carl Philipp Emanuel, known as C.P.E. to the generations that followed. The fifth of Bach’s 20 children, C.P.E. exhibited the musical gifts his father had. He mastered the baroque idiom, but he was more fascinated with a new “classical” style of music, which was taking Europe by storm. As classical music displaced baroque, C.P.E.’s prestige boomed while his father’s music became passé.

Bach easily could have become embittered, like Darwin. Instead, he chose to redesign his life, moving from innovator to instructor. He spent a good deal of his last 10 years writing The Art of Fugue, not a famous or popular work in his time, but one intended to teach the techniques of the baroque to his children and students—and, as unlikely as it seemed at the time, to any future generations that might be interested. In his later years, he lived a quieter life as a teacher and a family man.

What’s the difference between Bach and Darwin? Both were preternaturally gifted and widely known early in life. Both attained permanent fame posthumously. Where they differed was in their approach to the midlife fade. When Darwin fell behind as an innovator, he became despondent and depressed; his life ended in sad inactivity. When Bach fell behind, he reinvented himself as a master instructor. He died beloved, fulfilled, and—though less famous than he once had been—respected.

The lesson for you and me, especially after 50: Be Johann Sebastian Bach, not Charles Darwin.

How does one do that?

A potential answer lies in the work of the British psychologist Raymond Cattell, who in the early 1940s introduced the concepts of fluid and crystallized intelligence. Cattell defined fluid intelligence as the ability to reason, analyze, and solve novel problems—what we commonly think of as raw intellectual horsepower. Innovators typically have an abundance of fluid intelligence. It is highest relatively early in adulthood and diminishes starting in one’s 30s and 40s. This is why tech entrepreneurs, for instance, do so well so early, and why older people have a much harder time innovating.

Crystallized intelligence, in contrast, is the ability to use knowledge gained in the past. Think of it as possessing a vast library and understanding how to use it. It is the essence of wisdom. Because crystallized intelligence relies on an accumulating stock of knowledge, it tends to increase through one’s 40s, and does not diminish until very late in life.

Careers that rely primarily on fluid intelligence tend to peak early, while those that use more crystallized intelligence peak later. For example, Dean Keith Simonton has found that poets—highly fluid in their creativity—tend to have produced half their lifetime creative output by age 40 or so. Historians—who rely on a crystallized stock of knowledge—don’t reach this milestone until about 60.

Here’s a practical lesson we can extract from all this: No matter what mix of intelligence your field requires, you can always endeavor to weight your career away from innovation and toward the strengths that persist, or even increase, later in life.

Like what? As Bach demonstrated, teaching is an ability that decays very late in life, a principal exception to the general pattern of professional decline over time. A study in The Journal of Higher Education showed that the oldest college professors in disciplines requiring a large store of fixed knowledge, specifically the humanities, tended to get evaluated most positively by students. This probably explains the professional longevity of college professors, three-quarters of whom plan to retire after age 65—more than half of them after 70, and some 15 percent of them after 80. (The average American retires at 61.) One day, during my first year as a professor, I asked a colleague in his late 60s whether he’d ever considered retiring. He laughed, and told me he was more likely to leave his office horizontally than vertically.

Our dean might have chuckled ruefully at this—college administrators complain that research productivity among tenured faculty drops off significantly in the last decades of their career. Older professors take up budget slots that could otherwise be used to hire young scholars hungry to do cutting-edge research. But perhaps therein lies an opportunity: If older faculty members can shift the balance of their work from research to teaching without loss of professional prestige, younger faculty members can take on more research.

Patterns like this match what I’ve seen as the head of a think tank full of scholars of all ages. There are many exceptions, but the most profound insights tend to come from those in their 30s and early 40s. The best synthesizers and explainers of complicated ideas—that is, the best teachers—tend to be in their mid-60s or older, some of them well into their 80s.

That older people, with their stores of wisdom, should be the most successful teachers seems almost cosmically right. No matter what our profession, as we age we can dedicate ourselves to sharing knowledge in some meaningful way.

A few years ago, I saw a cartoon of a man on his deathbed saying, “I wish I’d bought more crap.” It has always amazed me that many wealthy people keep working to increase their wealth, amassing far more money than they could possibly spend or even usefully bequeath. One day I asked a wealthy friend why this is so. Many people who have gotten rich know how to measure their self-worth only in pecuniary terms, he explained, so they stay on the hamster wheel, year after year. They believe that at some point, they will finally accumulate enough to feel truly successful, happy, and therefore ready to die.

This is a mistake, and not a benign one. Most Eastern philosophy warns that focusing on acquisition leads to attachment and vanity, which derail the search for happiness by obscuring one’s essential nature. As we grow older, we shouldn’t acquire more, but rather strip things away to find our true selves—and thus, peace.

At some point, writing one more book will not add to my life satisfaction; it will merely stave off the end of my book-writing career. The canvas of my life will have another brushstroke that, if I am being forthright, others will barely notice, and will certainly not appreciate very much. The same will be true for most other markers of my success.

What I need to do, in effect, is stop seeing my life as a canvas to fill, and start seeing it more as a block of marble to chip away at and shape something out of. I need a reverse bucket list. My goal for each year of the rest of my life should be to throw out things, obligations, and relationships until I can clearly see my refined self in its best form.

And that self is … who, exactly?

Last year, the search for an answer to this question took me deep into the South Indian countryside, to a town called Palakkad, near the border between the states of Kerala and Tamil Nadu. I was there to meet the guru Sri Nochur Venkataraman, known as Acharya (“Teacher”) to his disciples. Acharya is a quiet, humble man dedicated to helping people attain enlightenment; he has no interest in Western techies looking for fresh start-up ideas or burnouts trying to escape the religious traditions they were raised in. Satisfied that I was neither of those things, he agreed to talk with me.

I told him my conundrum: Many people of achievement suffer as they age, because they lose their abilities, gained over many years of hard work. Is this suffering inescapable, like a cosmic joke on the proud? Or is there a loophole somewhere—a way around the suffering?

Acharya answered elliptically, explaining an ancient Hindu teaching about the stages of life, or ashramas. The first is Brahmacharya, the period of youth and young adulthood dedicated to learning. The second is Grihastha, when a person builds a career, accumulates wealth, and creates a family. In this second stage, the philosophers find one of life’s most common traps: People become attached to earthly rewards—money, power, sex, prestige—and thus try to make this stage last a lifetime.

The antidote to these worldly temptations is Vanaprastha, the third ashrama, whose name comes from two Sanskrit words meaning “retiring” and “into the forest.” This is the stage, usually starting around age 50, in which we purposefully focus less on professional ambition, and become more and more devoted to spirituality, service, and wisdom. This doesn’t mean that you need to stop working when you turn 50—something few people can afford to do—only that your life goals should adjust.

Vanaprastha is a time for study and training for the last stage of life, Sannyasa, which should be totally dedicated to the fruits of enlightenment. In times past, some Hindu men would leave their family in old age, take holy vows, and spend the rest of their life at the feet of masters, praying and studying. Even if sitting in a cave at age 75 isn’t your ambition, the point should still be clear: As we age, we should resist the conventional lures of success in order to focus on more transcendentally important things.

I told Acharya the story about the man on the plane. He listened carefully, and thought for a minute. “He failed to leave Grihastha,” he told me. “He was addicted to the rewards of the world.” He explained that the man’s self-worth was probably still anchored in the memories of professional successes many years earlier, his ongoing recognition purely derivative of long-lost skills. Any glory today was a mere shadow of past glories. Meanwhile, he’d completely skipped the spiritual development of Vanaprastha, and was now missing out on the bliss of Sannyasa.

There is a message in this for those of us suffering from the Principle of Psychoprofessional Gravitation. Say you are a hard-charging, type-A lawyer, executive, entrepreneur, or—hypothetically, of course—president of a think tank. From early adulthood to middle age, your foot is on the gas, professionally. Living by your wits—by your fluid intelligence—you seek the material rewards of success, you attain a lot of them, and you are deeply attached to them. But the wisdom of Hindu philosophy—and indeed the wisdom of many philosophical traditions—suggests that you should be prepared to walk away from these rewards before you feel ready. Even if you’re at the height of your professional prestige, you probably need to scale back your career ambitions in order to scale up your metaphysical ones.

When the New York Times columnist David Brooks talks about the difference between “résumé virtues” and “eulogy virtues,” he’s effectively putting the ashramas in a practical context. Résumé virtues are professional and oriented toward earthly success. They require comparison with others. Eulogy virtues are ethical and spiritual, and require no comparison. Your eulogy virtues are what you would want people to talk about at your funeral. As in He was kind and deeply spiritual, not He made senior vice president at an astonishingly young age and had a lot of frequent-flier miles.

You won’t be around to hear the eulogy, but the point Brooks makes is that we live the most fulfilling life—especially once we reach midlife—by pursuing the virtues that are most meaningful to us.

I suspect that my own terror of professional decline is rooted in a fear of death—a fear that, even if it is not conscious, motivates me to act as if death will never come by denying any degradation in my résumé virtues. This denial is destructive, because it leads me to ignore the eulogy virtues that bring me the greatest joy.

How can I overcome this tendency? The Buddha recommends, of all things, corpse meditation: Many Theravada Buddhist monasteries in Thailand and Sri Lanka display photos of corpses in various states of decomposition for the monks to contemplate. “This body, too,” students are taught to say about their own body, “such is its nature, such is its future, such is its unavoidable fate.” At first this seems morbid. But its logic is grounded in psychological principles—and it’s not an exclusively Eastern idea. “To begin depriving death of its greatest advantage over us,” Michel de Montaigne wrote in the 16th century, “let us deprive death of its strangeness, let us frequent it, let us get used to it; let us have nothing more often in mind than death.”

Psychologists call this desensitization, in which repeated exposure to something repellent or frightening makes it seem ordinary, prosaic, not scary. And for death, it works. In 2017, a team of researchers at several American universities recruited volunteers to imagine they were terminally ill or on death row, and then to write blog posts about either their imagined feelings or their would-be final words. The researchers then compared these expressions with the writings and last words of people who were actually dying or facing capital punishment. The results, published in Psychological Science, were stark: The words of the people merely imagining their imminent death were three times as negative as those of the people actually facing death—suggesting that, counterintuitively, death is scarier when it is theoretical and remote than when it is a concrete reality closing in.

For most people, actively contemplating our demise so that it is present and real (rather than avoiding the thought of it via the mindless pursuit of worldly success) can make death less frightening; embracing death reminds us that everything is temporary, and can make each day of life more meaningful. “Death destroys a man,” E. M. Forster wrote, but “the idea of Death saves him.”

Decline is inevitable, and it occurs earlier than almost any of us wants to believe. But misery is not inevitable. Accepting the natural cadence of our abilities sets up the possibility of transcendence, because it allows the shifting of attention to higher spiritual and life priorities.

But such a shift demands more than mere platitudes. I embarked on my research with the goal of producing a tangible road map to guide me during the remaining years of my life. This has yielded four specific commitments.

JUMP

The biggest mistake professionally successful people make is attempting to sustain peak accomplishment indefinitely, trying to make use of the kind of fluid intelligence that begins fading relatively early in life. This is impossible. The key is to enjoy accomplishments for what they are in the moment, and to walk away perhaps before I am completely ready—but on my own terms.

So: I’ve resigned my job as president of the American Enterprise Institute, effective right about the time this essay is published. While I have not detected deterioration in my performance, it was only a matter of time. Like many executive positions, the job is heavily reliant on fluid intelligence. Also, I wanted freedom from the consuming responsibilities of that job, to have time for more spiritual pursuits. In truth, this decision wasn’t entirely about me. I love my institution and have seen many others like it suffer when a chief executive lingered too long.

Leaving something you love can feel a bit like a part of you is dying. In Tibetan Buddhism, there is a concept called bardo, which is a state of existence between death and rebirth—“like a moment when you step toward the edge of a precipice,” as a famous Buddhist teacher puts it. I am letting go of a professional life that answers the question Who am I?

I am extremely fortunate to have the means and opportunity to be able to walk away from a job. Many people cannot afford to do that. But you don’t necessarily have to quit your job; what’s important is striving to detach progressively from the most obvious earthly rewards—power, fame and status, money—even if you continue to work or advance a career. The real trick is walking into the next stage of life, Vanaprastha, to conduct the study and training that prepare us for fulfillment in life’s final stage.

SERVE

Time is limited, and professional ambition crowds out things that ultimately matter more. To move from résumé virtues to eulogy virtues is to move from activities focused on the self to activities focused on others. This is not easy for me; I am a naturally egotistical person. But I have to face the fact that the costs of catering to selfishness are ruinous—and I now work every day to fight this tendency.

Fortunately, an effort to serve others can play to our strengths as we age. Remember, people whose work focuses on teaching or mentorship, broadly defined, peak later in life. I am thus moving to a phase in my career in which I can dedicate myself fully to sharing ideas in service of others, primarily by teaching at a university. My hope is that my most fruitful years lie ahead.

WORSHIP

Because I’ve talked a lot about various religious and spiritual traditions—and emphasized the pitfalls of overinvestment in career success—readers might naturally conclude that I am making a Manichaean separation between the worlds of worship and work, and suggesting that the emphasis be on worship. That is not my intention. I do strongly recommend that each person explore his or her spiritual self—I plan to dedicate a good part of the rest of my life to the practice of my own faith, Roman Catholicism. But this is not incompatible with work; on the contrary, if we can detach ourselves from worldly attachments and redirect our efforts toward the enrichment and teaching of others, work itself can become a transcendental pursuit.

“The aim and final end of all music,” Bach once said, “should be none other than the glory of God and the refreshment of the soul.” Whatever your metaphysical convictions, refreshment of the soul can be the aim of your work, like Bach’s.

Bach finished each of his manuscripts with the words Soli Deo gloria—“Glory to God alone.” He failed, however, to write these words on his last manuscript, “Contrapunctus 14,” from The Art of Fugue, which abruptly stops mid-measure. His son C.P.E. added these words to the score: “Über dieser Fuge … ist der Verfasser gestorben” (“At this point in the fugue … the composer died”). Bach’s life and work merged with his prayers as he breathed his last breath. This is my aspiration.

CONNECT

Throughout this essay, I have focused on the effect that the waning of my work prowess will have on my happiness. But an abundance of research strongly suggests that happiness—not just in later years but across the life span—is tied directly to the health and plentifulness of one’s relationships. Pushing work out of its position of preeminence—sooner rather than later—to make space for deeper relationships can provide a bulwark against the angst of professional decline.

Dedicating more time to relationships, and less to work, is not inconsistent with continued achievement. “He is like a tree planted by streams of water,” the Book of Psalms says of the righteous person, “yielding its fruit in season, whose leaf does not wither, and who prospers in all he does.” Think of an aspen tree. To live a life of extraordinary accomplishment is—like the tree—to grow alone, reach majestic heights alone, and die alone. Right?

Wrong. The aspen tree is an excellent metaphor for a successful person—but not, it turns out, for its solitary majesty. Above the ground, it may appear solitary. Yet each individual tree is part of an enormous root system, which is together one plant. In fact, an aspen is one of the largest living organisms in the world; a single grove in Utah, called Pando, spans 106 acres and weighs an estimated 13 million pounds.

The secret to bearing my decline—to enjoying it—is to become more conscious of the roots linking me to others. If I have properly developed the bonds of love among my family and friends, my own withering will be more than offset by blooming in others.

When I talk about this personal research project I’ve been pursuing, people usually ask: Whatever happened to the hero on the plane?

I think about him a lot. He’s still famous, popping up in the news from time to time. Early on, when I saw a story about him, I would feel a flash of something like pity—which I now realize was really only a refracted sense of terror about my own future. Poor guy really meant I’m screwed.

But as my grasp of the principles laid out in this essay has deepened, my fear has declined proportionately. My feeling toward the man on the plane is now one of gratitude for what he taught me. I hope that he can find the peace and joy he is inadvertently helping me attain.

20 Jun 17:42

Craft Beers Without The Buzz: Brewing New Options For The 'Sober Curious'

by Allison Aubrey
Liz

Hop on the NA train! We've tried Wellbeing and Athletic -- would definitely order again from Athletic, I think it felt the closest to a normal beer experience.

Athletic Brewing Co. co-founders Bill Shufelt (right) and John Walker.

More people are choosing to drink less, driven by growing concerns about health and wellness. But there haven't been many high-quality nonalcoholic beers available. Booming demand has forced a change.

12 Jun 15:27

Instagram influencers are flocking to Chernobyl

by Ben Kaye
Liz

Oh noooooo.

HBO’s Chernobyl has been successful at a lot of things. It’s a fairly gripping “examination of fatal bureaucracy,” a lesson for burgeoning nuclear powers, and kindling for the Russian propaganda machine. It also appears to be the inspiration for social media users to go out and do it for the gram.

The Chernobyl site in Ukraine has been an open tourist destination since 2011, with tours taking visitors around the power plant and the nearby abandoned town of Pripyat. As often happens when a specific location becomes the subject of a media event, tourism has spiked sharply since HBO aired Chernobyl. In fact, CNN reports that tour bookings have jumped some 35% from the same time last year.

And of course, many of those tourists are posting about their trips on social media. The unfathomable horror of a manmade disaster that cost the lives and damaged the health of untold thousands is not immune to the spectacle of Instagram.

It’s hard to flatly fault someone for taking a picture at a tourist destination — even a tragic one like Chernobyl. But in the age of social media, where influencers and wannabe influencers seem to be using the disaster site as an “aesthetic,” it’s kind of two-thousand-and-gross. Especially when some are taking pictures of their underwear-clad butts in front of the ruins of buildings… because nothing says “sexy” like the gruesome death of innocent people.

Even Craig Mazin, the creator of the Chernobyl series, isn’t thrilled with the social media influx. In response to some of the images, he tweeted, “If you visit, please remember that a terrible tragedy occurred there. Comport yourselves with respect for all who suffered and sacrificed.”

20 Apr 15:08

List: Radiohead or Mueller Report?

by Kimberly Harrington
Liz

hahaha this is great

1. Thing-of-Value
2. Knives Out
3. Talk Show Host
4. How I Made My Millions
5. Harm to Ongoing Matter
6. No Surprises
7. House of Cards
8. Witch Hunt
9. Dirt
10. These are My Twisted Words
11. Information Warfare
12. High and Dry
13. Electioneering
14. Down is the New Up
15. Social Discord
16. The National Anthem
17. High Confidence
18. Closed Matters
19. I Want None of This
20. How Can You Be Sure?
21. Encouragement of Contacts
22. Ill Wind
23. Ridiculous and Petty
24. Moscow Project
25. Million Dollar Question
26. I Can’t
27. I’m Fucked
28. Thefts and Disclosures
29. Open the Floodgates
30. A Wolf at the Door
31. Redacted
32. Videotape
33. Personal Privacy
34. Give Up the Ghost
35. Let Down

- - -

Radiohead song: 2, 4, 6, 7, 10, 12, 14, 16, 19, 20, 22, 25, 29, 30, 32, 34
Mueller Report: 1, 5, 8, 9, 11, 15, 17, 18, 21, 23, 24, 27, 28, 31, 33
Both: 3, 13, 26, 35

20 Apr 14:03

So What If Lincoln Was Gay?

by Louis Bayard
Liz

Distantly related to The Favourite and also quite sweet and sad:

"As with Queen Anne and King James, the best place to find Lincoln’s bared heart is in letters—the letters, specifically, that he wrote to Joshua Speed in 1842. Read them yourself and you will find two men who are frankly terrified by the prospect of marriage—in particular, the wedding night—and who are coaching and coaxing each other into normative heterosexual lifestyles. You will also find a tenderness rare in Lincoln’s correspondence: “I do not feel my own sorrows more keenly than I do yours … You know my desire to befriend you is everlasting—that I will never cease, while I know how to do anything.”"

“Why do you need him to be gay?”

This is how a friend (urban, liberal, male) responded when I told him I was working on a historical novel about Abraham Lincoln’s relationship with Joshua Speed. The implication of his question was clear. If I was going to go there, if I was going to plant my rainbow flag on the Great Emancipator’s grave, I would have to account for my private agenda.

Now that I type it out, that phrase sounds an awful lot like “gay agenda” and peels away to reveal the same fear at its base—that our received notions about historical figures might crumble under too close an inspection. And yet, in many cases, the evidence is hiding in plain sight. Queen Anne, as the recent movie The Favourite underscored, wrote passionate letters to the Duchess of Marlborough. Michelangelo composed love poems for his male models. King James addressed his beloved Duke of Buckingham as “my sweet child and wife,” and Shakespeare publicly directed his first 126 sonnets to a “Fair Youth,” theorized by some scholars to be Henry Wriothesley, the 3rd Earl of Southampton.

Lincoln may look like he played things closer to the vest, but even his contemporaries, pondering his youthful aversion to girls, his lack of female conquests, and his relatively late marriage, struggled to come up with face-saving explanations. Judge David Davis, a friend of Lincoln’s from his circuit-riding days, insisted it was only the great man’s conscience that “kept him from seduction” and “saved many—many a woman.” William Herndon, Lincoln’s biographer and law partner, spread rumors (almost certainly unfounded) that Lincoln had caught syphilis from a girl in Beardstown, and went so far as to resurrect a long-dead New Salem maiden named Ann Rutledge, who emerged under Herndon’s burnishing as the love of Abe’s life.

This was news to many of Lincoln’s friends and family—to Mary Todd Lincoln, most of all—but the Rutledge story persisted. So, too, did Joshua Speed, the handsome storekeeper who shared a small bed with Lincoln for three and a half years; who once declared that “no two men were ever more intimate”; who confessed that, if he himself hadn’t gotten married, Lincoln wouldn’t have; who promised Lincoln that he would write back the morning after his wedding night to report how it had gone; and who, for reasons unclear, never had children. Even conservative biographers had to acknowledge Speed’s primacy in Lincoln’s heart, and Carl Sandburg, tiptoeing as far out on the limb as a hagiographer could in the twenties, suggested that the Lincoln-Speed relationship had “a streak of lavender and spots soft as May violets.”

It took the late C.A. Tripp, a Kinsey Institute sex researcher, to confront the question in 2005 with The Intimate World of Abraham Lincoln. The book was roundly savaged at the time and would probably have been more coherent had he lived to revise it, but its collection of raw evidence was, if not dispositive, deeply revealing. Here in one place was the full gamut of Lincoln’s bedmates, from Billy Greene in New Salem (he and Lincoln, according to one neighbor, “had an awful hankerin’, one for t’other”) to, late in life, David Derickson, the bodyguard who supposedly shared Lincoln’s bed while the first lady was away. Here, too, was the humorous doggerel that a twenty-year-old Lincoln wrote about two boys who, having tried “the girlies,” decide to wed each other. It is, as far as we know, the first suggestion of same-sex marriage in U.S. history, and maybe we shouldn’t be surprised that, having made it into the first edition of Herndon’s biography, it was dropped from subsequent editions.

“Dropped” is a good way of describing how the historical establishment initially dealt with Tripp’s contentions. We were told in stern tones that lots of bachelors shared beds in those days (though rarely for so long) and that Lincoln couldn’t have been gay because he fathered children (so did Oscar Wilde). It’s inevitable, I guess, that a unitary establishment should struggle with the binary, and in the case of the white, male, heterosexual historians who have predominantly shaped our narratives, the notion that a man can be both a father of children and a lover of men has been as hard to accept as the idea that Thomas Jefferson could be an apostle of liberty and an impregnator of slaves.

Maybe the strangest counterargument made to Tripp’s claims was that it was pointless even to raise the subject because, in the end, Lincoln’s sex life doesn’t matter. And if that were true—Lordy, if that were true—then my book could have been written 150 years ago, and we could avoid all discussion of Lincoln’s heterosexual life, right up to and including his wife and children.

In recent years, a younger cohort of historians have begun actively grappling with the question of Lincoln’s sexuality. That development is welcome, but as a novelist, I found myself caring less about individual sex acts than the deeper mystery of where Lincoln’s heart lay. The nominal claimant would, of course, be Mary Todd, who, when she first met Lincoln, was an attractive, vivacious, and intelligent young woman, unusually well-educated for her time, with an abiding love for politics and enough prescience to guess that Lincoln, an uneducated, debt-ridden lawyer on nobody’s short list to become president, would one day reward her efforts.

Yet, if Lincoln was enamored of Mary, it didn’t stop him from terminating their engagement or from being, over the course of their marriage, a distant husband—often literally distant, traveling the Eighth Judicial Circuit for ten weeks at a time, communing with fellow lawyers in cramped inns.

As with Queen Anne and King James, the best place to find Lincoln’s bared heart is in letters—the letters, specifically, that he wrote to Joshua Speed in 1842. Read them yourself and you will find two men who are frankly terrified by the prospect of marriage—in particular, the wedding night—and who are coaching and coaxing each other into normative heterosexual lifestyles. You will also find a tenderness rare in Lincoln’s correspondence: “I do not feel my own sorrows more keenly than I do yours … You know my desire to befriend you is everlasting—that I will never cease, while I know how to do anything.”

And how does he close his letters? With “Yours forever,” a salutation he bestowed on no other mortal, least of all his wife.

The book I ended up writing, Courting Mr. Lincoln, takes no definitive stand on its subject’s sexuality, but neither does it shy away from the question. It lives in the land of the spoken and unspoken, which is the realm where Lincoln himself almost certainly dwelt. When all is said and done, do I need Abraham Lincoln to be gay? No. I just need him to be something more complicated than he’s been allowed to be. I would argue we all need that.

Louis Bayard is a novelist and reviewer who lives in Washington, D.C. His most recent book is Courting Mr. Lincoln.