Why You Should Stop Caring What Other People Think (Taming the Mammoth) – Wait But Why

We all care way too much what other people think of us. Here’s why.

Source: Why You Should Stop Caring What Other People Think (Taming the Mammoth) – Wait But Why

By Tim Urban

Part 1: Meet Your Mammoth

The first day I was in second grade, I came to school and noticed that there was a new, very pretty girl in the class—someone who hadn’t been there the previous two years. Her name was Alana and within an hour, she was everything to me.

When you’re seven, there aren’t really any actionable steps you can take when you’re in love with someone. You’re not even sure what you want from the situation. There’s just this amorphous yearning that’s a part of your life, and that’s that.

But for me, it became suddenly relevant a few months later, when during recess one day, one of the girls in the class started asking each of the boys, “Who do youuu want to marry?” When she asked me, it was a no-brainer. “Alana.”

Disaster.

I was still new to being a human and didn’t realize that the only socially acceptable answer was, “No one.”

The second I answered, the heinous girl ran toward other students, telling each one, “Tim said he wants to marry Alana!” Each person she told covered their mouth with uncontrollable laughter. I was finished. Life was over.

The news quickly got back to Alana herself, who stayed as far away from me as possible for days after. If she had known what a restraining order was, she’d have taken one out.

This horrifying experience taught me a critical life lesson—it can be mortally dangerous to be yourself, and you should exercise extreme social caution at all times.

Now this sounds like something only a traumatized second grader would think, but the weird thing, and the topic of this post, is that this lesson isn’t just limited to me and my debacle of a childhood—it’s a defining paranoia of the human species. We share a collective insanity that pervades human cultures throughout the world:

An irrational and unproductive obsession with what other people think of us.

Evolution does everything for a reason, and to understand the origin of this particular insanity, let’s back up for a minute to 50,000 BC in Ethiopia, where your Great^2,000 Grandfather lived as part of a small tribe.

Back then, being part of a tribe was critical to survival. A tribe meant food and protection in a time when neither was easy to come by. So for your Great^2,000 Grandfather, almost nothing in the world was more important than being accepted by his fellow tribe members, especially those in positions of authority. Fitting in with those around him and pleasing those above him meant he could stay in the tribe, and about the worst nightmare he could imagine would be people in the tribe starting to whisper about how annoying or unproductive or weird he was—because if enough people disapproved of him, his ranking within the tribe would drop, and if it got really bad, he’d be kicked out altogether and left for dead. He also knew that if he ever embarrassed himself by pursuing a girl in the tribe and being rejected, she’d tell the other girls about it—not only would he have blown his chance with that girl, but he might never have a mate at all now because every girl that would ever be in his life knew about his lame, failed attempt. Being socially accepted was everything.

Because of this, humans evolved an over-the-top obsession with what others thought of them—a craving for social approval and admiration, and a paralyzing fear of being disliked. Let’s call that obsession a human’s Social Survival Mammoth. It looks something like this:

[Image: Mammoth]

Your Great^2,000 Grandfather’s Social Survival Mammoth was central to his ability to endure and thrive. It was simple—keep the mammoth well fed with social approval and pay close attention to its overwhelming fears of nonacceptance, and you’ll be fine.

And that was all well and fine in 50,000 BC. And 30,000 BC. And 10,000 BC. But something funny has happened for humans in the last 10,000 years—their civilization has changed dramatically. Sudden, quick change is something civilization can do, and the reason that can be awkward is that our evolutionary biology can’t move nearly as fast. So while for most of history, both our social structure and our biology evolved and adjusted at a snail’s pace together, civilization has recently developed the speed capabilities of a hare while our biology has continued snailing along.

Our bodies and minds are built for life in a 50,000 BC tribe, which leaves modern humans with a number of unfortunate traits, one of which is a fixation on tribal-style social survival in a world where social survival is no longer a real concept. We’re all here in 2014, accompanied by a large, hungry, and easily freaked-out woolly mammoth who still thinks it’s 50,000 BC.

Why else would you try on four outfits and still not be sure what to wear before going out?

[Image: Trying on Shirts]

The mammoth’s nightmares about romantic rejection made your ancestors cautious and savvy, but in today’s world, it just makes you a coward:

[Image: Pursuing a Girl]

And don’t even get the mammoth started on the terror of artistic risks:

[Image: Karaoke]

The mammoth’s hurricane of fear of social disapproval plays a role in most parts of most people’s lives. It’s what makes you feel weird about going to a restaurant or a movie alone; it’s what makes parents care a little too much about where their child goes to college; it’s what makes you pass up a career you’d love in favor of a more lucrative career you’re lukewarm about; it’s what makes you get married before you’re ready to a person you’re not in love with.

And while keeping your highly insecure Social Survival Mammoth feeling calm and safe takes a lot of work, that’s only one half of your responsibilities. The mammoth also needs to be fed regularly and robustly—with praise, approval, and the feeling of being on the right side of any social or moral dichotomy.

Why else would you be such an image-crafting douchebag on Facebook?

Or brag when you’re out with friends even though you always regret it later?

[Image: Brag]

Society has evolved to accommodate this mammoth-feeding frenzy, inventing things like accolades and titles and the concept of prestige in order to keep our mammoths satisfied—and often to incentivize people to do meaningless jobs and live unfulfilling lives they wouldn’t otherwise consider taking part in.

Above all, mammoths want to fit in—that’s what tribespeople always needed to do, so that’s how they’re programmed. Mammoths look around at society to figure out what they’re supposed to do, and when it becomes clear, they jump right in. Just look at any two college fraternity pictures taken ten years apart:

[Image: frat]

Or all those subcultures where every single person has one of the same three socially-acceptable advanced degrees:

[Image: Diploma]

Sometimes, a mammoth’s focus isn’t on wider society as much as it’s on winning the approval of a Puppet Master in your life. A Puppet Master is a person or group of people whose opinion matters so much to you that they’re essentially running your life. A Puppet Master is often a parent, or maybe your significant other, or sometimes an alpha member of your group of friends. A Puppet Master can be a person you look up to who you don’t know very well—maybe even a celebrity you’ve never met—or a group of people you hold in especially high regard.

We crave the Puppet Master’s approval more than anyone’s, and we’re so horrified at the thought of upsetting the Puppet Master or feeling their nonacceptance or ridicule that we’ll do anything to avoid it. When we get to this toxic state in our relationship with a Puppet Master, that person’s presence hangs over our entire decision-making process and pulls the strings of our opinions and our moral voice.

[Image: puppet master]

With so much thought and energy dedicated to the mammoth’s needs, you often end up neglecting someone else in your brain, someone all the way at the center—your Authentic Voice.

[Image: AV]

Your Authentic Voice, somewhere in there, knows all about you. In contrast to the black-and-white simplicity of the Social Survival Mammoth, your Authentic Voice is complex, sometimes hazy, constantly evolving, and unafraid. Your AV has its own, nuanced moral code, formed by experience, reflection, and its own personal take on compassion and integrity. It knows how you feel deep down about things like money and family and marriage, and it knows which kinds of people, topics of interest, and types of activities you truly enjoy, and which you don’t. Your AV knows that it doesn’t know how your life will or should play out, but it tends to have a strong hunch about the right step to take next.

And while the mammoth looks only to the outside world in its decision-making process, your Authentic Voice uses the outside world to learn and gather information, but when it’s time for a decision, it has all the tools it needs right there in the core of your brain.

Your AV is also someone the mammoth tends to ignore entirely. A strong opinion from a confident person in the outside world? The mammoth is all ears. But a passionate plea from your AV is largely dismissed until someone else validates it.

And since our 50,000-year-old brains are wired to give the mammoth a whole lot of sway in things, your Authentic Voice starts to feel like it’s irrelevant. Which makes it shrink and fade and lose motivation.

[Image: AV]

Eventually, a mammoth-run person can lose touch with their AV entirely.

In tribal times, AVs often spent their lives in quiet obscurity, and this was largely okay. Life was simple, and conformity was the goal—and the mammoth had conformity covered just fine.

But in today’s large, complex world of varying cultures and personalities and opportunities and options, losing touch with your AV is dangerous. When you don’t know who you are, the only decision-making mechanism you’re left with is the crude and outdated needs and emotions of your mammoth. When it comes to the most personal questions, instead of digging deep into the foggy center of what you really believe in to find clarity, you’ll look to others for the answers. Who you are becomes some blend of the strongest opinions around you.

Losing touch with your AV also makes you fragile, because when your identity is built on the approval of others, being criticized or rejected by others really hurts. A bad break-up is painful for everyone, but it stings in a much deeper place for a mammoth-run person than for a person with a strong AV. A strong AV makes a stable core, and after a break-up, that core is still holding firm—but since the acceptance of others is all a mammoth-run person has, being dumped by a person who knows you well is a far more shattering experience.

Likewise, you know those people who react to being criticized by coming back with a nasty low-blow? Those tend to be severely mammoth-run people, and criticism makes them so mad because mammoths cannot handle criticism.

[Image: Low Blow]

At this point, the mission should be clear—we need to figure out a way to override the wiring of our brain and tame the mammoth. That’s the only way to take our lives back.

 

Part 2: Taming the Mammoth

Some people are born with a reasonably tame mammoth or raised with parenting that helps keep the mammoth in check. Others die without ever reining their mammoth in at all, spending their whole lives at its whim. Most of us are somewhere in the middle—we’ve got control of our mammoth in certain areas of our lives while it wreaks havoc in others. Being run by your mammoth doesn’t make you a bad or weak person—it just means you haven’t yet figured out how to get a grip on it. You might not even be aware that you have a mammoth at all or of the extent to which your Authentic Voice has been silenced.

Whatever your situation, there are three steps to getting your mammoth under your control:

Step 1: Examine Yourself

The first step to improving things is a clear and honest assessment of what’s going on in your head, and there are three parts to this:

1) Get to know your Authentic Voice

[Image: meet AV]

This doesn’t sound that hard, but it is. It takes some serious reflection to sift through the webs of other people’s thoughts and opinions and figure out who the real you actually is. You spend time with a lot of people—which of them do you actually like the most? How do you spend your leisure time, and do you truly enjoy all parts of it? Is there anything you regularly spend money on that you don’t feel that comfortable with? How does your gut really feel about your job and relationship status? What’s your true political opinion? Do you even care? Do you pretend to care about things you don’t just to have an opinion? Do you secretly have an opinion on a political or moral issue you don’t ever voice because people you know will be outraged?

There are cliché phrases for this process—“soul-searching” or “finding yourself”—but that’s exactly what needs to happen. Maybe you can reflect on this from whatever chair you’re sitting in right now or from some other part of your normal life—or maybe you need to go somewhere far away, by yourself, and step out of your life in order to effectively examine it. Either way, you’ve got to figure out what actually matters to you and start being proud of whoever your Authentic Voice is.

2) Figure out where the mammoth is hiding

[Image: mammoth hiding]

Most of the time a mammoth is in control of a person, the person’s not really aware of it. But you can’t make progress if you’re not crystal clear about where the biggest problem areas are.

The most obvious way to find the mammoth is to figure out where your fear is—where are you most susceptible to shame or embarrassment? Which parts of your life do you think about only to have a dreadful, sinking feeling wash over you? Where does the prospect of failure seem like a nightmare? What are you too timid to publicly try even though you know you’re good at it? If you were giving advice to yourself, which parts of your life would clearly need a change that you’re avoiding acting on right now?

The second place a mammoth hides is in the way-too-good feelings you get from feeling accepted or on a pedestal over other people. Are you a serious pleaser at work or in your relationship? Are you terrified of disappointing your parents and do you choose making them proud over aiming to gratify yourself? Do you get too excited about being associated with prestigious things or care too much about status? Do you brag more than you should?

A third area the mammoth is present is anywhere you don’t feel comfortable making a decision without “permission” or approval from others. Do you have opinions you’re regurgitating from someone else’s mouth, which you’re comfortable having now that you know that person has them? When you introduce your new girlfriend or boyfriend to your friends or family for the first time, can those people’s reaction to your new person fundamentally change your feelings for him/her? Is there a Puppet Master in your life? If so, who, and why?

3) Decide where the mammoth needs to be ousted

[Image: ousted]

It’s not realistic to kick the mammoth entirely out of your head—you’re a human and humans have mammoths in their head, period. The thing we all need to do is carve out certain sacred areas of our lives that must be in the hands of the AV and free of mammoth influence. There are obvious areas that need to be made part of the AV’s domain like your choice of life partner, your career path, and the way you raise your kids. Others are personal—it comes down to the question, “In which parts of your life must you be entirely true to yourself?”

 

Step 2: Gather Courage by Internalizing That the Mammoth Has a Low IQ

Real Woolly Mammoths were unimpressive enough to go extinct, and Social Survival Mammoths aren’t any better. Despite the fact that they haunt us so, our mammoths are dumb, primitive creatures who have no understanding of the modern world. Deeply understanding this—and internalizing it—is a key step to taming yours. There are two major reasons not to take your mammoth seriously:

1) The mammoth’s fears are totally irrational.

5 things the Mammoth is incorrect about:

Everyone is talking about me and my life and just think how much everyone will be talking about it if I do this risky or weird thing.

Here’s how the mammoth thinks things are:

[Image: circles]

Here’s how things actually are:

[Image: Circle]

No one really cares that much about what you’re doing. People are highly self-absorbed.

If I try really hard, I can please everyone.

Yes, maybe in a 40-person tribe with a unified culture. But in today’s world, no matter who you are, a bunch of people will like you and a bunch of other people won’t. Being approved of by one type of person means turning another off. So obsessing over fitting in with any one group is illogical, especially if that group isn’t really who you are. You’ll do all that work, and meanwhile, your actual favorite people are off being friends with each other somewhere else.

Being disapproved of or looked down upon or shit-talked about has real consequences in my life.

Anyone who disapproves of who you’re being or what you’re doing isn’t even in the same room with you 99.7% of the time. It’s a classic mammoth mistake to fabricate a vision of future social consequences that is way worse than what actually ends up happening—which is usually nothing at all.

Really judgy people matter.

Here’s how judgy people function: They’re highly mammoth-controlled and become good friends with and date other judgy people who are also highly mammoth-controlled. One of the primary activities they do together is talk shit about whoever’s not with them—maybe they feel some jealousy, and eye-rolling disapproval helps them flip the script and feel less jealous, or maybe they’re not jealous and use someone as a vehicle for bathing in schadenfreude—but whatever the underlying feeling, the judging serves to feed their hungry mammoth.

[Image: eating words]

When people shit-talk, they set up a category division of which they’re always on the right side. They do this to prop themselves up on a pedestal that their mammoth can chomp away on.

Being the material a judgy person uses to feel good about themselves is a fairly infuriating thought—but it has no actual consequences and it’s clearly all much more about the judgy person and their mammoth problem than it is about you. If you find yourself making decisions partially based on not being talked badly about by a judgy person, think hard about what’s actually going on and stop.

I’m a bad person if I disappoint or offend the person/people who love me and have invested so much in me.

No. You’re not a bad person for being whoever your Authentic Voice is in your one life. This is one of those simple things—if they truly selflessly love you, they will for sure come around and accept everything once they see that you’re happy. If you’re happy and they still don’t come around, here’s what’s happening: their strong feelings about who you should be or what you should do are their mammoth talking, and their main motivation is worrying about how it’ll “look” to other people who know them. They’re allowing their mammoth to override their love for you, and they should be adamantly ignored.

Two other reasons why the mammoth’s fearful obsession with social approval makes no sense:

A) You live here:

[Image: Earth]

So who gives a fuck about anything?

B) You and everyone you know are going to die. Kind of soon.

[Image: die]

So like…yeah.

The mammoth’s fears being irrational is one reason the mammoth has a low IQ. Here’s the second:

2) The mammoth’s efforts are counterproductive.

The irony of the whole thing is that the obsessive lumbering mammoth isn’t even good at his job. His methods of winning approval may have been effective in simpler times, but today, they’re transparent and off-putting. The modern world is an AV’s world, and if the mammoth wants to thrive socially, he should do the thing that scares him most—let the AV take over. Here’s why:

AVs are interesting. Mammoths are boring. Every AV is unique and complex, which is inherently interesting. Mammoths are all the same—they copy and conform, and their motives aren’t based on anything authentic or real, just on doing what they think they’re supposed to do. That’s supremely boring.

AVs lead. Mammoths follow. Leadership is natural for most AVs, because they draw their thoughts and opinions from an original place, which gives them an original angle. And if they’re smart and innovative enough, they can change things in the world and invent things that disrupt the status quo. If you give someone a paintbrush and an empty canvas, they might not paint something good—but they’ll change the canvas in one way or another.

Mammoths, on the other hand, follow—by definition. That’s what they were built to do—blend in and follow the leader. The last thing a mammoth is going to do is change the status quo because it’s trying so hard to be the status quo. When you give someone a paintbrush and canvas, but the paint is the same exact color as the canvas, they can paint all they want, but they won’t change anything.

People gravitate toward AVs, not mammoths. The only time a mammoth-crazed person is appealing on a first date is when they’re on the date with another mammoth-crazed person. People with a strong AV see through mammoth-controlled people and aren’t attracted to them. A friend of mine was dating a great on-paper guy a while back but broke things off because she couldn’t quite fall for him. She tried to articulate why, saying he wasn’t weird or special enough—he seemed like “just one of the guys.” In other words, he was being run too much by a mammoth.

This also holds among friends or colleagues, where AV-run people are more respected and more magnetic—not because there’s necessarily anything extraordinary about them, but because people respect someone with the strength of character to have tamed their mammoth.

Step 3: Start Being Yourself

This post was all fun and games until “start being yourself” came into the picture. Up to now, this has been an interesting reflection on why humans care so much what other people think, why that’s bad, how it’s a problem in your life, and why there’s no good reason it should continue to plague you. But actually doing something after you finish reading this article is a whole different thing. That takes more than reflection—it takes some courage.

[Image: toe in water]

But courage against what, exactly? As we’ve discussed, there’s no actual danger involved in being yourself—more than anything, it just takes an Emperor Has No Clothes epiphany, which is as simple as this:

Almost nothing you’re socially scared of is actually scary.

Absorbing this thought will diminish the fear that you feel, and without fear, the mammoth loses some power.

[Image: medium mammoth]

With a weakened mammoth, it becomes possible to begin standing up for who you are and even making some bold changes—and when you watch those changes turn out well for you with few negative consequences and no regrets, it reinforces the epiphany and an empowered AV becomes a habit. Your mammoth has now lost its ability to pull the strings, and it’s tamed.

[Image: small mammoth]

The mammoth is still with you—it’ll always be with you—but you’ll have an easier time ignoring or overruling it when it speaks up or acts out, because the AV is the alpha dog now. You can start to relish the feeling of being viewed as weird or inappropriate or confusing to people, and society becomes your playground and blank canvas, not something to grovel before and hope for acceptance from.

Making this shift isn’t easy for anyone, but it’s worth obsessing over. Your Authentic Voice has been given one life—and it’s your job to make sure it gets the opportunity to live it.

 


The Science of Choice in Addiction

Research has shown that beating addiction is ultimately about regarding addicts as people who can rationally choose.

Source: The Science of Choice in Addiction

By Sally Satel

In December 1966, Leroy Powell of Austin, Texas, was convicted of public intoxication and fined $20 in a municipal court. Powell appealed his conviction to Travis County court, where his lawyer argued that he suffered from “the disease of chronic alcoholism.” Powell’s public display of inebriation therefore was “not of his own volition,” his lawyer argued, making the fine a form of cruel and unusual punishment. A psychiatrist concurred, testifying that Powell was “powerless not to drink.”

Then Powell took the stand. On the morning of his trial, his lawyer handed him a drink, presumably to stave off morning tremors. The prosecutor asked him about that drink:

Q: You took that one [drink] at eight o’clock [a.m.] because you wanted to drink?…And you knew that if you drank it, you could keep on drinking and get drunk?

A: Well, I was supposed to be here on trial, and I didn’t take but that one drink.

Q: You knew you had to be here this afternoon, but this morning you took one drink and then you knew that you couldn’t afford to drink anymore and come to court; is that right?

A: Yes, sir, that’s right.

The judge let stand Powell’s conviction for public intoxication.

Two years later, the Supreme Court affirmed the constitutionality of punishment for public intoxication, rejecting the idea “that chronic alcoholics … suffer from such an irresistible compulsion to drink and to get drunk in public that they are utterly unable to control their performance.”

Now, fast-forward almost half a century to the laboratory of Carl Hart, a neuroscientist at Columbia University, who has been showing that cocaine and methamphetamine addicts have a lot in common with Powell. When Hart’s subjects are given a good enough reason to refuse drugs—in this case, cash—they do so too.

The basic experiment goes like this. Hart recruits addicts who have no interest in quitting but who are willing to stay in a hospital research ward for two weeks for testing. Each day, Hart offers them a sample dose of either crack cocaine or methamphetamine, depending upon the drug they use regularly. Later in the day, they are given a choice between the same amount of drugs, a voucher for $5 of store merchandise, or $5 cash. They collect their reward when they’re discharged two weeks later.

More often than not, subjects choose the $5 voucher or cash over the drug, though when offered a higher dose, they go for the drug. But when Hart ups the value of the reward to $20, addicts choose the money every time.

In his new book, High Price: A Neuroscientist’s Journey of Self-Discovery That Challenges Everything You Know About Drugs and Society, Hart reports that he was surprised by his findings. Wasn’t addiction, he asks, a dopamine-driven compulsion that “hijacked” the brain and took control of the will? As a graduate student, Hart was taught exactly that. It’s also why recovered addicts are understood to eschew substances for fear that even a small amount could set off an irresistible craving for more.

Indeed, this has been conventional wisdom in research circles for at least the past two decades. Many of Hart’s colleagues who teach this support their claim with brain scans showing the addicts’ reward pathways ablaze with neural activation. But studies going back to the 1960s show that many people addicted to all kinds of drugs—nicotine, alcohol, cocaine, heroin, methamphetamine—can stop or modify their use in response to rewards or sanctions.

This means that the neural changes that occur in the brains of addicts do not necessarily disable their capacity to respond to rewards. Leroy Powell had surely experienced alcohol-induced brain changes over years of drinking, but they did not keep him from making a choice on the morning of his trial. Hart’s subjects loved cocaine, but they loved cash even more.

It is certainly true that when people have an intense urge to use, resisting is very, very hard. Yet there’s room for deliberate action in the form of “self-binding,” a practice by which addicts can erect obstacles between themselves and their drugs. Examples include avoiding people, places, or things associated with drug use; directly depositing paychecks or tearing up ATM cards to keep ready (drug) cash out of one’s pockets; or avoiding boredom, a common source of vulnerability to drug use.

The decision to self-bind is made during calmer moments when addicts are not in withdrawal or experiencing strong desire to use. And addicts have many of these moments; as a rule, they do not spend all their time nodding out or in a frenzy to obtain more drugs.

No one would choose the misery that comes with excessive use. “I’ve never come across a single person that was addicted that wanted to be addicted,” says neuroscientist Nora Volkow, director of the National Institute on Drug Abuse and an enthusiastic booster of the brain-driven model of addiction. It is true that drug users don’t choose to become addicted any more than consumers of high-calorie foods choose to become overweight. But addiction and poundage are not what they are choosing: what they seek is momentary gratification or relief—a decision that is rational in the short term but irrational in the long term.

A typical trajectory goes something like this. In the early phase of addiction, using drugs and alcohol can simply be fun, or it can be a form of self-medication that quells persistent self-loathing, anxiety, alienation, and loneliness. Meanwhile, once-rewarding activities, such as relationships, work, or family, decline in value. The attraction of the drug starts to fade as the troubles accrue—but the drug retains its allure because it blunts mental pain, suppresses withdrawal symptoms, and douses craving.

Eventually, addicts find themselves torn between reasons to use and reasons not to. Sometimes a spasm of self-reproach (“this is not who I am,” “I’m hurting my family,” “my reputation is at risk”) tips the balance toward quitting. Novelist and junkie William S. Burroughs called this the “naked lunch” experience, “a frozen moment when everyone sees what is on the end of every fork.”

In short, every addict has reasons to begin using, reasons to continue, and reasons to quit. To act on a reason is to choose. To make good choices requires the presence of meaningful alternatives. And making a series of good choices leads to achievements—jobs, relationships, reputations. These give a person something meaningful to lose, another reason in itself to steer away from bad choices.

In his book, Hart uses his own story to breathe life into what may sound like a sterile lesson in behavioral economics. He grew up in the 70’s in the benighted Carol City in south Florida, facing poverty, racism, domestic violence, bad schools, guns, and drugs. Hart himself stole and used drugs (though he was never addicted) and peddled marijuana. Yet he ended up thriving due to the many alternatives to drugs in his life. He calls these “competing reinforcers”—high school sports, educational opportunities, and mentors. Hart wants all young people raised in despairing circumstances to have those too.

Combating social ills on such a grand stage may be a pipe dream. But, in the realm of recovery from alcohol and drugs, the principle of competing reinforcers has been scaled down to size and is being replicated across the country. Take HOPE (Hawaii’s Opportunity Probation with Enforcement), a jail-diversion program in which addict-offenders are subject to short periods of detention if they fail drug tests but receive a clean corrections record if they complete the year-long program. One year after enrollment, HOPE participants were 55 percent less likely to be arrested for a new crime and 53 percent less likely to have had their probation revoked than those in a control group.

Hart draws attention to how progressive rehab programs use rewards to encourage completion of job training and attendance at treatment or Alcoholics Anonymous or Narcotics Anonymous meetings, and so on. Consequences rather than rewards (sticks rather than carrots) can work too. When at risk of losing their licenses, addicted physicians show impressive rates of recovery. When they come under the surveillance of their state medical boards and are subject to random urine testing, unannounced workplace visits, and frequent employer evaluations, 70 to 90 percent are employed with their licenses intact five years later.

Hart believes that both carrots and sticks, when necessary, should be used far more frequently and creatively in the management of addiction.

As Hart says in his book, “Severe addiction may narrow people’s focus and reduce their ability to take pleasure in non-drug experiences, but it does not turn them into people who cannot react to a variety of incentives.” Although addictions are hard to break, it is most useful to view the potential for overcoming them through the lens of choice. It’s not a matter of just saying no—recovery requires far more grit and conviction than that—but it is very much a matter of regarding addicts as people who can rationally choose to use opportunities to their advantage, and working to provide those opportunities.

 

SALLY SATEL is a psychiatrist, a resident scholar at the American Enterprise Institute, and the co-author of Brainwashed: The Seductive Appeal of Mindless Neuroscience. She works part-time at a methadone clinic in Washington, D.C.

Is the Perfectionism Plague Taking a Psychological Toll?

The rise of perfectionism among young people has psychological consequences.

Source: Is the Perfectionism Plague Taking a Psychological Toll?

The first study to examine generational differences in perfectionism over the past three decades reports that young people’s desire to be flawless has skyrocketed over the past thirty years. Today’s college-age students are much more prone to perfectionism than prior generations, according to the new report.

This paper, “Perfectionism Is Increasing Over Time: A Meta-Analysis of Birth Cohort Differences From 1989 to 2016,” was recently published in the journal Psychological Bulletin.

For this study, lead author Thomas Curran of the University of Bath and co-author Andrew Hill of York St. John University meta-analyzed data from 41,641 college students in the United States, Canada, and the United Kingdom, using the Multidimensional Perfectionism Scale to gauge generational changes in perfectionism from the late 1980s to 2016. In their meta-analysis, Curran and Hill investigated three types of perfectionism:

  1. Self-oriented perfectionism: Imposing an irrational desire to be perfect on oneself.
  2. Other-oriented perfectionism: Placing unrealistic standards of perfection on others.
  3. Socially-prescribed perfectionism: Perceiving excessive expectations of perfection from others.

The statistics are alarming: Between 1989 and 2016, self-oriented perfectionism scores increased by 10 percent, other-oriented perfectionism increased by 16 percent, and socially-prescribed perfectionism increased by a whopping 33 percent.

In describing the possible link between the rise of perfectionism and psychopathology, the authors write:

“In reflecting on our findings, one issue of special relevance is the harm and psychological difficulties that might accompany an increase in perfectionism. According to the most recent global health estimates from the World Health Organization (2017), serious mental illness afflicts a record number of young people. In the United States, Canada, and the United Kingdom, young people are experiencing higher levels of depression, anxiety, and suicide ideation than they did a decade ago (e.g., Bloch, 2016; Bor, Dean, Najman, & Hayatbakhsh, 2014; Patel, Flisher, Hetrick, & McGorry, 2007). They also report more loneliness and present to clinicians with eating disorders and body dysmorphia at a higher rate than generations previous (e.g., Paik & Sanchagrin, 2013; Smink et al., 2012; Thompson & Durrani, 2007).”

What’s Driving the Increase of Perfectionism Among Young People?

The rise in perfectionism among college students is driven by a variety of factors, according to Curran. The raw data suggests that the growing use of social media could be fueling the pressure young adults feel to perfect themselves in comparison to others. That said, Curran emphasizes that more research is needed to confirm the correlation between an uptick in social media usage and increased perfectionism.

Curran also speculates that college students’ drive to perfect their grade point average represents a rise in meritocracy among millennials. As he explained in a statement: “Meritocracy places a strong need for young people to strive, perform and achieve in modern life. Young people are responding by reporting increasingly unrealistic educational and professional expectations for themselves. As a result, perfectionism is rising among millennials.”

In 1976, only about half of high school seniors expected to earn a college degree. By 2008, that number had risen to 80 percent. Unfortunately, the cut-throat competition among the growing number of degree holders appears to be exacerbating people’s desire to strive for perfection.

“Today’s young people are competing with each other in order to meet societal pressures to succeed and they feel that perfectionism is necessary in order to feel safe, socially connected and of worth,” Curran said.

Andrew Hill sees these findings as a clarion call for colleges and policymakers to increase their efforts to curb unnecessary competition among young people in order to preserve their mental health. Unfortunately, this may be easier said than done.

References

Thomas Curran and Andrew P. Hill, “Perfectionism Is Increasing Over Time: A Meta-Analysis of Birth Cohort Differences From 1989 to 2016,” Psychological Bulletin (published December 28, 2017). DOI: 10.1037/bul0000138

Forging Deep Ties: It’s Complicated

The millennial’s guide to friendship.

Source: Forging Deep Ties: It’s Complicated

There are numerous challenges to making new friends in adulthood. Not only is making friends harder as we get older, but sustaining friendships can be harder as well: After college and beyond, most people don’t get to be part of such a diverse, built-in social network.

This is particularly significant since research shows us that strong social support is linked to healthier aging and positive health outcomes. Yes, we need solitude to nurture our creativity and our spirit, but we need rich, deep, meaningful connections as well — our mental and physical health depend on it. Importantly, research also suggests that elderly people are healthiest when they regularly interact with, and have friends from, all different age groups.

We see the best health outcomes for heterosexual married men who have relied almost solely on their wives to meet their emotional and relational needs. This is also why for women — in this case, straight women — the subject of friendship is so important: Since women tend to live longer than men, they also tend to rely on their friends, usually other women, for companionship for many of the activities they might have previously enjoyed with their male partners. So it is in our best interest to be thinking about how to nurture and cultivate our friendships as young adults.

Social psychologist Sherry Turkle, who studies our intimacy with machines, says that our online presence often means that we are connected to an ever-widening circle of people, more than ever before, and that this may result in a sort of “friendship lite,” with lots of surface connections, but not a lot of face-to-face, meaningful time together. We might have more than a thousand friends on Facebook and hundreds of followers on Twitter, Instagram, and Snapchat, but there are likely fewer friends with whom we truly want to spend our time. We might have deep affection for our best, oldest, and most cherished friends, yet those are the people we may wind up talking with and seeing less often because of all our time at work and our preoccupation with time online with more superficial connections. This paradox is powerful.

Another challenge is that women in their twenties and thirties often resort to meeting new women friends in much the same way that men “do” friendship with other men—over activities like a running or cycling club, a team sport, a yoga class, etc. But this may not lead to depth of emotional intimacy. These spaces are regarded as less threatening, both for finding friends and for finding dating partners, and as easy places from which to say, “Hey, wanna go grab a drink sometime?” The thought is that if two people both enjoy the same activity, they might have other things in common, or at least can pursue more of that original activity and passion together. The drawback is that this can sometimes feel forced and unnatural.

In my own experience, I have found that when I look back on the friendships that are the dearest to me and that have produced the greatest sense of sisterhood or brotherhood, we did not meet by trying to. For example, four years ago, I attended a fashion show at a department store and saw a woman wearing my favorite jacket, but in a way that looked more interesting than the way I usually wore it, so I approached her and told her so. We wound up standing there for an hour talking about her daughter, a first-year student trying to adjust at college, which we both had a lot to say about, since she is a therapist and I am a professor. We also talked about meditation and a bunch of other things, and then we exchanged numbers and got together, and she remains a true sister-friend. The last place I would have expected to find one of my most soulful friends would have been at the mall, and yet there she was, when we both least expected it. Deep friendships depend on some sense of spontaneity, which was present in that first meeting. Her daughters joked that she had quickly developed a “crush” on me, and I couldn’t stop talking about her either, but it’s because there really is such a thing as friendship chemistry. It can be magnetic.

Another issue that poses real challenges to friendships, especially for women in their twenties and thirties, is how they handle and negotiate choices and priorities around marriage and motherhood. Some women will choose to not have children, others will choose to cocoon with their partners and children, and some will want to include their children in all activities without realizing how that will affect the dynamic of conversation and friendship intimacy.

While one might think that young mothers risk social isolation, many report fulfillment in making friends with other new moms through breastfeeding support groups, groups for stay-at-home moms, or through libraries, parks, and day care. Still, others complain that connecting through children is not enough — there need to be more adult reasons that nourish and sustain a friendship.

Also, when people are new to living with a romantic partner or spouse, or become new parents, they are usually much less available for impromptu dinners out, long, meandering phone calls late into the night, weekend get-togethers, trips with friends, etc. Single friends may get impatient with the other person’s lack of availability or feel left behind. And married friends don’t always want to hear about a single person’s last date or the more spontaneous rhythms of their life. The single person may come across as immature, and the married person as boring.

During this period of life, people are making different choices for how to spend their time and resources. Some are using their twenties and thirties to attend graduate school. Others travel extensively, and still others settle down, buy houses, and start families. Inevitably, these decisions can dramatically impact people’s ability to do things together. One person may be earning a robust income and wanting an adventurous travel partner, while another is eating ramen noodles in graduate school. In cases like this, even choosing a restaurant to meet can feel stressful. The sense of power disparity can affect each person’s perception of themselves and of the other person and create a chasm. Each person can feel a certain level of shame or guilt.

Other issues can cause divisions. Politics have recently created a wedge in many people’s relationships, for example, and can also be a determining factor of whether people feel they can, or even want to try to, connect.

Also, with the pressure in one’s twenties and thirties to launch a successful career, time is a precious commodity, and people generally get pickier about who they want to spend it with. They may also be starting the process of liking the skin they’re in and enjoying their own company more — which is a good thing. Sometimes I hear women say that if they are choosing between a person who might bore them, or who drones on and on, or who has different values than they do, they are likely to just watch Netflix and chill on their own.

Interestingly, some women report feeling “maxed out” on friends and unable to find time and space to fit more people into their lives. In this scenario, friendship becomes just one more thing on a seemingly endless to-do list.

Because of career pressures, people in their twenties and thirties are generally more on the move and may literally pick up and move across the country. So staying friends with people can be trickier. Despite all the devices we rely on to stay in touch, sometimes we simply cannot replace the feeling we get when we are in the company of friends and can reach out and hug them, or watch them laugh, etc. Those I have interviewed report using all sorts of apps to stay in touch with long-distance friends, such as Skype, FaceTime, WhatsApp, and Houseparty, yet rely on them far less to find new friends.

And people can get tired of making plans that are not due to materialize for weeks or months; that can simply be unsatisfying. It can also feel overly planned, rigid, and almost transactional, relying on a few hours together just a few times every few months for essentially catching up, but not transcending that. This also explains why research shows that the older we get, the more we can feel drawn back to relationships forged earlier in life with people who know our backstories and with whom we can pick up where we left off without as much surface catch-up.

In a day and age when relationships may look more superficial and fleeting, there tends to be more ghosting: Just as teenagers are more and more frequently backing out of prom dates at the last minute if they get a better offer, adults are making plans and, when the date comes up, reporting relief when they have to cancel or the other person backs out. There’s a sense of wanting to control how we interact and under what specific conditions. But this also limits how we experience friendship, since at the same time, we often yearn for durable and reliable connections.

We might assume that making friends should be simpler than finding dating partners, but the opposite is often true: While sexual intimacy may be a big draw in a dating situation and is often used to forge and deepen emotional intimacy, friendship offers no similar crutch. It has to be interesting, reliable, spontaneous, fun, trustworthy, deep, and rich all on its own.

Deep friendship means grabbing some immediacy together. It also demands that we reveal a certain amount of vulnerability. This is not a quality prized on social media, and people in their twenties and thirties, while just as vulnerable as ever, are understandably reticent to reveal that.

Finally, quite noteworthy is the continually growing phenomenon of only children, many of whom come of age intuiting by necessity that friends are the family we choose; it might be through them that we as a society can better appreciate the powerful role of friendship in our lives.

 


Happiness is overrated — finding deep meaning in life comes down to 4 basic “pillars”

Emily Esfahani Smith, in her TED Talk viewed by almost 3 million people, explains what she learned from thousands of pages of psychology, neuroscience, and philosophy.

Source: Happiness is overrated — finding deep meaning in life comes down to 4 basic “pillars”

Being happy is the goal in life, isn’t it? Isn’t that what we all aim for? For most people it looks something like this: good grades, popularity at school, good education, great job, ideal life partner, beautiful home, money for great vacations.

Yet, many people have achieved exactly this and still feel empty and unfulfilled.

Is there something wrong with expecting happiness to result from success in life? Clearly it’s not working.

The suicide rate is rising around the world, and even though life is getting objectively better by nearly every conceivable standard, more people feel hopeless, depressed and alone.

Is there more to life than trying to be happy?

Writer Emily Esfahani Smith thinks so. In her popular 2017 TED Talk, viewed by almost 3 million people, she explains what she learned from spending five years interviewing hundreds of people and reading through thousands of pages of psychology, neuroscience and philosophy.

https://embed.ted.com/talks/emily_esfahani_smith_there_s_more_to_life_than_being_happy

In her search she found that it’s not a lack of happiness that leads to despair. It’s a lack of meaning in life.

What is the difference between being happy and having meaning in life?

“Many psychologists define happiness as a state of comfort and ease, feeling good in the moment. Meaning, though, is deeper. The renowned psychologist Martin Seligman says meaning comes from belonging to and serving something beyond yourself and from developing the best within you,” says Smith.

“Our culture is obsessed with happiness, but I came to see that seeking meaning is the more fulfilling path. And the studies show that people who have meaning in life, they’re more resilient, they do better in school and at work, and they even live longer,” she adds.

Her five-year study led her to the discovery of four pillars that underpin a meaningful life. The first three I might have guessed, but the last one caught me off guard. And it’s really a crucial aspect of the meaning we give to our lives.

“The first pillar is belonging. Belonging comes from being in relationships where you’re valued for who you are intrinsically and where you value others as well,” says Smith.

But she warns that not all belonging is desired belonging. “Some groups and relationships deliver a cheap form of belonging; you’re valued for what you believe, for who you hate, not for who you are.”  This is not true belonging.

For many people, belonging is the most essential source of meaning. Their bonds with family and friends give real meaning to their lives.

The second pillar or key to meaning is purpose, says Smith, and it’s not the same thing as finding that job that makes you happy.

The key to purpose, says Smith, is using your strengths to serve others. For many people that happens through work, and when they find themselves unemployed, they flounder.

The third pillar of meaning is transcendence. Transcendent states are those rare moments when you lose all sense of time and place and you feel connected to a higher reality.

“For one person I talked to, transcendence came from seeing art. For another person, it was at church. For me, I’m a writer, and it happens through writing. Sometimes I get so in the zone that I lose all sense of time and place. These transcendent experiences can change you.”

So we have belonging, purpose and transcendence.

Now, the fourth pillar of meaning is a surprising one.

The fourth pillar is storytelling, the story you tell yourself about yourself.

“Creating a narrative from the events of your life brings clarity. It helps you understand how you became you.

“But we don’t always realize that we’re the authors of our stories and can change the way we’re telling them. Your life isn’t just a list of events. You can edit, interpret and retell your story, even as you’re constrained by the facts.”

This is so true. It boils down to perspective and that can make all the difference: the difference between a miserable life plagued with misfortune or an inspirational life filled with gratitude and insight.

No matter what has happened in your life to break you, you can heal again and find new purpose in life like so many people who have allowed the bad in their lives to be redeemed by the good.

To learn more, watch Smith’s recounting of such a redemptive story and her touching retelling of a powerful experience she had with her dad when he almost died of a heart attack.

I’m a South Africa-based writer and am passionate about exploring the latest ideas in artificial intelligence, robotics, and nanotechnology. I also focus on the human condition, with a particular interest in human intuition and creativity. To share some feedback about my articles, email me at coert@ideapod.com

30 years after Prozac arrived, we still buy the lie that chemical imbalances cause depression

Prozac nation, 30 years on. (Reuters/Lucy Nicholson)

We don’t know how Prozac works, and we don’t even know for sure if it’s an effective treatment for the majority of people with depression.

Source: 30 years after Prozac arrived, we still buy the lie that chemical imbalances cause depression

By Olivia Goldhill

Some 2,000 years ago, the Ancient Greek scholar Hippocrates argued that all ailments, including mental illnesses such as melancholia, could be explained by imbalances in the four bodily fluids, or “humors.” Today, most of us like to think we know better: Depression—our term for melancholia—is caused by an imbalance, sure, but a chemical imbalance, in the brain.

This explanation, widely cited as empirical truth, is false. It was once a tentatively-posed hypothesis in the sciences, but no evidence for it has been found, and so it has been discarded by physicians and researchers. Yet the idea of chemical imbalances has remained stubbornly embedded in the public understanding of depression.

Prozac, approved by the US Food and Drug Administration 30 years ago today, on Dec. 29, 1987, marked the first in a wave of widely prescribed antidepressants that built on and capitalized off this theory. No wonder: Taking a drug to tweak the biological chemical imbalances in the brain makes intuitive sense. But depression isn’t caused by a chemical imbalance, we don’t know how Prozac works, and we don’t even know for sure if it’s an effective treatment for the majority of people with depression.

One reason the theory of chemical imbalances won’t die is that it fits in with psychiatry’s attempt, over the past half century, to portray depression as a disease of the brain, instead of an illness of the mind. This narrative, which depicts depression as a biological condition that afflicts the material substance of the body, much like cancer, divorces depression from the self. It also casts aside the social factors that contribute to depression, such as isolation, poverty, or tragic events, as secondary concerns. Non-pharmaceutical treatments, such as therapy and exercise, often play second fiddle to drugs.

In the three decades since Prozac went on the market, antidepressants have proliferated, which has further fed into the myths and false narratives we tell about mental illnesses. In that time, these trends have shifted not just our understanding, but our actual experiences of depression.

* * *

In the two millennia since Hippocrates founded medicine, society has embraced then rejected many theories of mental illness. Each hypothesis has struggled to reconcile how the subjective psychological symptoms of depression map onto physical malfunctions in the brain. The intractable relationship between the two has never been satisfactorily addressed.

Hippocrates’ humor-based notion of medicine, much like contemporary psychiatry, portrayed mental illness as rooted in biological malfunctions. But the evolution from Hippocrates to today has been far from smooth: In the centuries between, there was widespread belief in superstition and the supernatural, and symptoms that we would today call “depression” were often attributed to witchcraft, magic, or the devil.

The brain became the primary focus of depression in the 19th century, thanks to phrenologists. The field of phrenology, which took the shape of the skull as a determinant of the underlying brain’s features and psychological tendencies, was used by bigots to justify eugenics and has rightly been dismissed. But, though highly flawed, it did advance ideas about the brain that are still believed today. Whereas other physicians of the time believed organs like the heart and liver were connected to emotional passions, phrenologists held that the brain is the only “organ of the mind.” Phrenologists were also the first to argue that different areas of the brain have distinct, specialized roles and, based on this belief, posited that depression could be linked to a particular brain region.

The attention on the brain faded in the 20th century, when phrenology was supplanted by Freudian psychoanalysts, who argued that the unconscious mind (rather than brain) is the predominant cause of mental illness. Psychoanalysis considered environmental factors such as family and early childhood experiences as the key determinants of the characteristics of the adult mind, and of any mental illness.

“Beginning with Freud’s influence, through the first half of the 20th century, the brain almost disappeared from psychiatry,” says Allan Horwitz, a sociology professor at Rutgers University who has written on the social construction of mental disorders. “When it came back, it came back with a vengeance.”

* * *

A conglomeration of factors, beginning in the 1960s but having the largest effects in the ’70s and ’80s, contributed to psychiatry’s renewed emphasis on the brain. First, in the US, conservative presidents disparaged as liberal causes any political efforts to alleviate the social conditions that contribute to poor mental health, such as poverty, unemployment, and racial discrimination. “Biologically-based approaches became more politically palatable,” says Horwitz, noting that the National Institute of Mental Health largely abandoned its research on the social causes of depression under president Richard Nixon.

There was also growing interest in the role of drugs, for good reason: Newly developed antidepressants showed early success in treating mental illnesses. Though Freudian psychoanalysts did use the drugs alongside their therapy, the medication didn’t neatly fit with their theories. And while individuals had previously paid for mental health care themselves in the US, the 1960s saw private insurance companies and public programs, such as Medicaid and Medicare, increasingly take on those costs. These groups were impatient to see results from their investment, notes Horwitz—and drugs were clearly both faster and cheaper than years of psychoanalysis.

Psychoanalysis also rapidly went out of fashion in that period. Organizations such as the National Alliance on Mental Illness, which advocated for the interests of those affected by mental illness and their families, distrusted psychoanalysis’s habit of blaming parental figures. There was also a growing distaste for psychoanalysis among those on the left of the political spectrum, who believed psychoanalytic theories upheld conservative bourgeois values.

At the time, psychoanalysis was deeply entwined with the field of psychiatry (the medical specialty that treats mental disorders). Until 1992, psychoanalysts were required to have medical degrees to practice in the US—and most had MDs in psychiatry. “Psychiatry has always had a tenuous position in the prestige hierarchy of medicine,” says Horwitz. “They weren’t regarded by doctors and other specialties as being very medical. They were seen more as storytellers as opposed to having a scientific basis.” As Freudian psychoanalysis became increasingly rejected as a pseudoscience, the entire field of psychiatry was tarnished by association—and so it pivoted, creating a new framework for diagnosing and treating mental health, founded on the role of the physical brain.

The theory of chemical imbalances was a neat way of explaining just how brain malfunctions could cause mental illness. It was first hypothesized by scientists in academic papers in the mid-to-late 1960s, after the seeming early success of drugs thought to adjust chemicals in the brain. Though the evidence never materialized, it became a popular theory and was repeated so often it became accepted truth.

It’s not hard to see why the theory caught on: It suited psychiatrists’ newfound attempt to create a system of mental health that mirrored diagnostic models used in other fields of medicine. The focus on a clear biological cause for depression gave practicing physicians an easily understandable theory to tell patients about how their disease was being treated.

“The fact that practicing physicians and leaders of science bought that idea, to me, is so disturbing,” says Steve Hyman, director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard.

The shifting language of the Diagnostic and Statistical Manual of Mental Disorders—widely and deferentially referred to as the Bible of contemporary psychiatry—clearly shows the evolution of the field’s portrayal of mental illness. The second edition (the DSM II), published in 1968, still showed the influence of Freud; conditions were broadly divided into more serious psychoses—with symptoms including delusional thinking, hallucinations, and breaks from reality—and less severe neuroses—such as hysterical, phobic, obsessive compulsive, and depressive neuroses. The neuroses were not clearly differentiated from “normal” behaviors. Importantly, anxiety—which Freud believed was foundational to the human psyche and inextricably linked with societal repression—was portrayed as the underlying condition of all neuroses.

The DSM II also says depressive neurosis could be “due to an internal conflict or to an identifiable event such as the loss of a loved object or cherished possession.” The notion of “internal conflict” is explicitly drawn from Freud’s work, which posited that internal psychological conflicts drive irrational thinking and behaviors.

The third edition of the DSM, published in 1980, uses language far closer to contemporary professional depictions of mental illness. It does not suggest “internal conflicts” cause depression, anxiety is no longer portrayed as the underlying cause of all mental illnesses, and the manual focuses on creating a checklist of symptoms (whereas the DSM II listed none for depressive neurosis).

Today, the DSM-5 lists various kinds of depressive disorders, such as “depressive disorder due to another medical condition,” “substance/medication-induced depressive disorder,” and “major depressive disorder.” Each of these disorders is distinguished by typical duration and its link to various causes, but the listed symptoms are broadly the same. Or, as the DSM-5 says: “The common feature of all of these disorders is the presence of sad, empty, or irritable mood, accompanied by somatic and cognitive changes that significantly affect the individual’s capacity to function. What differs among them are issues of duration, timing, or presumed etiology.”

The problem is that, though various people could be classed as suffering from a distinct depressive disorder according to their life events, there aren’t clearly defined treatments for each disorder. Patients from all groups are treated with the same drugs, though they are unlikely to be experiencing the same underlying biological condition, despite sharing some symptoms. Currently, a hugely heterogeneous group of people is prescribed the same antidepressants, adding to the difficulty of figuring out who responds best to which treatment.

* * *

Before antidepressants became mainstream, drugs that treated various symptoms of depression were depicted as “tonics which could ease people through the ups and downs of normal, everyday existence,” write Jeffrey Lacasse, a Florida State University professor specializing in psychiatric medications, and Jonathan Leo, a professor of anatomy at Lincoln Memorial University, in a 2007 paper on the history of the chemical imbalance theory.

In the 1950s, Bayer marketed Butisol (a barbiturate) as “the ‘daytime sedative’ for everyday emotional stress”; in the 1970s, Roche advertised Valium (diazepam) as a treatment for the “unremitting buildup of everyday emotional stress resulting in disabling tension.”

Both the narrative and the use of drugs to treat symptoms of depression transformed after Prozac—the brand name for fluoxetine—was released. “Prozac was unique when it came out in terms of side effects compared to the antidepressants available at the time (tricyclic antidepressants and monoamine oxidase inhibitors),” Anthony Rothschild, psychiatry professor at the University of Massachusetts Medical School, writes in an email. “It was the first of the newer antidepressants with less side effects.”

Even the minimum therapeutic dose of commonly prescribed tricyclics like amitriptyline (Elavil) could cause intolerable side effects, says Hyman. “Also these drugs were potentially lethal in overdose, which terrified prescribers.” The market for early antidepressants, as a result, was small.

Prozac changed everything. It was the first major success in the selective serotonin reuptake inhibitor (SSRI) class of drugs, designed to target serotonin, a neurotransmitter. It was followed by many more SSRIs, which came to dominate the antidepressant market. The variety affords choice, which means that anyone who experiences a problematic side effect from one drug can simply opt for another. (Each antidepressant causes variable and unpredictable side effects in some patients. Deciding which antidepressant to prescribe to which patient has been described as a “flip of a coin.”)

Rothschild notes that all existing antidepressants have similar efficacy. “No drug today is more efficacious than the very first antidepressants such as the tricyclic imipramine,” agrees Hyman. Three decades after Prozac arrived, there are many more antidepressant options, but no improvement in the efficacy of treatment.

Meanwhile, as Lacasse and Leo note in a 2005 paper, manufacturers typically marketed these drugs with references to chemical imbalances in the brain. For example, a 2001 television ad for sertraline (another SSRI) said, “While the causes are unknown, depression may be related to an imbalance of natural chemicals between nerve cells in the brain. Prescription Zoloft works to correct this imbalance.”

Another advertisement, this one in 2005, for the drug paroxetine, said, “With continued treatment, Paxil can help restore the balance of serotonin,” a neurotransmitter.

“[T]he serotonin hypothesis is typically presented as a collective scientific belief,” write Lacasse and Leo, though, as they note: “There is not a single peer-reviewed article that can be accurately cited to directly support claims of serotonin deficiency in any mental disorder, while there are many articles that present counterevidence.”

Despite the lack of evidence, the theory has saturated society. In their 2007 paper, Lacasse and Leo point to dozens of articles in mainstream publications that refer to chemical imbalances as the unquestioned cause of depression. One New York Times article on Joseph Schildkraut, the psychiatrist who first put forward the theory in 1965, states that his hypothesis “proved to be right.” When Lacasse and Leo asked the reporter for evidence to support this unfounded claim, they did not get a response. A decade on, there are still dozens of articles published every month in which depression is unquestioningly described as the result of a chemical imbalance, and many people explain their own symptoms by referring to the myth.

Meanwhile, 30 years after Prozac was released, rates of depression are higher than ever.

* * *

Hyman responds succinctly when I ask him to discuss the causes of depression: “No one has a clue,” he says.

There’s not “an iota of direct evidence” for the theory that a chemical imbalance causes depression, Hyman adds. Early papers that put forward the chemical imbalance theory did so only tentatively, but, “the world quickly forgot their cautions,” he says.

Depression, according to current studies, has an estimated heritability of around 37%, so genetics and biology certainly play a significant role. Brain activity corresponds with experiences of depression, just as it corresponds with all mental experiences. This, says Horwitz, “has been known for thousands of years.” Beyond that, knowledge is precarious. “Neuroscientists don’t have a good way of separating when brains are functioning normally or abnormally,” says Horwitz.

If depression were a simple matter of adjusting serotonin levels, SSRIs should work immediately, rather than taking weeks to have an effect. Reducing serotonin levels in the brain should also create a state of depression, but research has found that this isn’t the case. One drug, tianeptine (a non-SSRI sold under the brand names Stablon and Coaxil across Europe, South America, and Asia, though not in the UK or US), has the opposite effect of most antidepressants and decreases levels of serotonin.

This doesn’t mean that antidepressants that affect levels of serotonin definitively don’t work—it simply means that we don’t know if they’re affecting the root cause of depression. A drug’s effect on serotonin could be a relatively inconsequential side effect, rather than the crucial treatment.

History is filled with treatments that work despite a fundamental misunderstanding of the causes of the illness. In the 19th century, for example, miasma theory held that infectious diseases such as cholera were caused by noxious smells, or “bad air.” To get rid of these smells, cleaning up waste became a priority—which was ultimately beneficial, but because waste harbors the microorganisms that actually transmit infectious disease, not because of the smells.

* * *

It’s possible that our current medical categorization and inaccurate cultural perception of “depression” are actually causing more and more people to suffer from it. There are plenty of historical examples of mental health symptoms shifting alongside cultural expectations: Hysteria has declined as women’s agency has increased, for example, while symptoms of anorexia in Hong Kong changed as the region became more aware of western notions of the illness.

At its core, severe depression has likely retained the same symptoms over the centuries. “When it’s severe, whether you read the ancient Greeks, Shakespeare, [Robert] Burton on [The Anatomy of] Melancholy, it looks just like today,” says Hyman. “The condition is the same; it’s part of being human.” John Stuart Mill’s 19th century description of his mental breakdown is eminently familiar to a contemporary reader.

But less severe cases may, in the past, have been chalked up to being “justifiably sad,” even by those experiencing them, whereas they would be considered a health condition today. And so psychiatry “reframes ordinary distress as mental illness,” says Horwitz. This framework doesn’t simply label sadness as depression; it could lead people to experience depressive symptoms where previously they would simply have been unhappy. The impact of this shift is impossible to track: Mental illness is now recognized as a legitimate health issue, and so many more people are comfortable admitting to their symptoms than ever before. How many people are truly experiencing depression for the first time, versus acknowledging symptoms they once kept secret? “The prevalence is difficult to determine,” acknowledges Hyman.

* * *

Perhaps unraveling the true causes of depression and exactly how antidepressants treat the symptoms would be a less pressing concern if we knew, with confidence, that antidepressants worked well for the majority of patients. Unfortunately, we don’t.

The work of Irving Kirsch, associate director of the Program in Placebo Studies at Harvard Medical School, including several meta-analyses of the trials of all approved antidepressants, makes a compelling case that there’s very little difference between antidepressants and placebos. “They’re slightly more effective than placebo. The difference is so small, it’s not of any clinical importance,” he says. Kirsch advocates non-drug-based treatments for depression. Studies show that while drugs and therapy are similarly effective in the short term, in the long term those who don’t take medication seem to do better and have a lower risk of relapse.
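
Kirsch’s claim is about standardized effect sizes pooled across trials. As a rough, hedged illustration of what “slightly more effective than placebo” means in those terms, here is a minimal Python sketch of the standard Cohen’s d calculation; the trial figures are hypothetical placeholders for illustration only, not Kirsch’s data.

import math

def cohens_d(mean_drug, mean_placebo, sd_drug, sd_placebo, n_drug, n_placebo):
    # Standardized mean difference between the drug and placebo arms of a trial.
    pooled_sd = math.sqrt(
        ((n_drug - 1) * sd_drug**2 + (n_placebo - 1) * sd_placebo**2)
        / (n_drug + n_placebo - 2)
    )
    return (mean_drug - mean_placebo) / pooled_sd

# Hypothetical trial: mean improvement on a depression rating scale.
d = cohens_d(mean_drug=10.0, mean_placebo=8.0,
             sd_drug=7.0, sd_placebo=7.0,
             n_drug=150, n_placebo=150)
print(round(d, 2))  # 0.29: a real but small drug-placebo difference

An effect of this size can be statistically significant in a large trial while still falling below common thresholds of clinical importance, which is the distinction Kirsch is drawing.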

Others, like Peter Kramer, a professor at Brown University’s medical school, are strongly in favor of leaning on the drugs. Kramer is skeptical about the quality of many studies on alternative therapies for depression; people with debilitating depression are unlikely to sign up for anything that requires them to do frequent exercise or therapy, for example, and so are often excluded from studies that eventually purport to show exercise is as effective a treatment as drugs. And, as he writes in an email, antidepressants “are as effective as most treatments doctors rely on, in the middle range overall, about as likely to work as Excedrin” for a headache.

Others are more circumspect. Hyman acknowledges that, when taken in aggregate, all the trials for approved antidepressants show little difference between the drugs and placebo. But that, he says, obscures individual differences in responses to antidepressants. “Some people really respond, some don’t respond at all, and everything in between,” Hyman adds.

There are currently no known biomarkers to definitively show who will respond to which antidepressants. Severely depressed patients who don’t have the energy or interest to go to therapy should certainly be prescribed drugs. For those who are healthy enough to make it to therapy—well, opinions differ. Some psychiatrists believe in a combination of drugs and therapy; some believe antidepressants can be effective for all levels of depression and no therapy is needed; and others believe therapy alone is the best treatment option for all but the most severely depressed. Unfortunately, says Hyman, there’s little evidence on the best treatment plan for each patient.

Clearly, many people respond well to antidepressants. The drugs became so popular in large part because many patients benefited from the treatment and experienced significantly reduced depressive symptoms. Such patients needn’t question why their symptoms have improved or whether they should seek alternative forms of treatment.

On the other hand, the drugs simply do not work for others. Further, there’s evidence to suggest framing depression as a biological disease reduces agency, and makes people feel less capable of overcoming their symptoms. It effectively divorces depression from a sense of self. “It’s not me as a person experiencing depression. It’s my neurochemicals or my brain experiencing depression. It’s a way of othering the experience,” says Horwitz.

It’s nearly impossible to get good data to explain why depression treatments work for some and not others. Psychiatrists largely evaluate the effects of drugs through subjective self-reports; clinical trials usually include only patients who meet a rarefied set of criteria; and it’s hard to know whether those who respond well to therapy benefited from another, unmeasured factor, such as mood resilience. And when it comes to the subjective experience of mental health, there’s no meaningful difference between what feels like effective treatment and what is effective treatment.

There are also no clear data on whether, when antidepressants work, they actually cause symptoms to fully dissipate long-term. Do antidepressants cure depression, or simply make it more bearable? We don’t know.

* * *

Depression is now a global health epidemic, affecting one in four people worldwide. Treating it as an individual medical disorder, primarily with drugs, and failing to consider the environmental factors that underlie the epidemic—such as isolation, poverty, bereavement, job loss, long-term unemployment, and sexual abuse—is comparable to asking citizens to live in a smog-ridden city and using medication to treat the diseases that result, instead of regulating pollution.

Investing in substantive societal changes could help prevent the onset of widespread mental illness; we could attempt to prevent the depressive health epidemic, rather than treating it once it’s already prevalent. The conditions that engender a higher quality of life—safe and affordable housing, counsellors in schools, meaningful employment, strong local communities to combat loneliness—are not necessarily easy or cheap to create. But all would lead to a population that has fewer mental health issues, and would be, ultimately, far more productive for society.

Similarly, though therapy may be a more expensive treatment plan than drugs, evidence suggests that cognitive behavioral therapy (CBT) is at least as effective as antidepressants, and so deserves considerable investment. Much as physical therapy can strengthen the body’s muscles, some patients effectively use CBT to build coping mechanisms and healthy thought habits that prevent further depressive episodes.

In the current context, where psychiatry’s system of diagnosing mental health mimics other medical fields, the role of medicine in treating mental illness is often presented as evidence to skeptics that depression is indeed a real disease. Some might worry that a mental health condition treated partly with therapy, exercise, and societal changes could be seen as less serious or less legitimate. Though this line of thinking reflects a well-meaning attempt to reduce stigma around mental health, it panders to faulty logic. After all, many bodily illnesses are massively affected by lifestyle. “It doesn’t make heart attacks less real that we want to do exercise and see a dietician,” says Hyman. No illness needs to be entirely dependent on biological malfunctions for it to be considered “real.” Depression is real. The theory that it’s caused by chemical imbalances is false. Three decades since the antidepressants that helped spread this theory arrived on the market, we need to remodel both our understanding and treatment of depression.

Why Trying New Things Is So Hard to Do

Credit: Abbey Lossing

Source: https://www.nytimes.com/2017/12/01/business/why-trying-new-things-is-so-hard.html

I drink a lot of Diet Coke: two liters a day, almost six cans’ worth. I’m not proud of the habit, but I really like the taste of Diet Coke.

As a frugal economist, I’m well aware that switching to a generic brand would save me money, not just once but daily, for weeks and years to come. Yet I only drink Diet Coke. I’ve never even sampled generic soda.

Why not? I’ve certainly thought about it. And I tell myself that the dollars involved are inconsequential, really, that I’m happy with what I’m already drinking and that I can afford to be passive about this little extravagance.

Yet I’m clearly making an error, one that reveals a deeper decision-making bias whose cumulative cost is sizable: Like most people, I conduct relatively few experiments in my personal life, in both small and big things.

This is a pity because experimentation can produce outsize rewards. For example, I wouldn’t be risking much by trying a generic soda, and if I liked it enough to switch, the payout could be big: All my future sodas would be cheaper.

When the same choice is made over and over again, the downside of trying something different is limited and fixed — that one soda is unappealing — while the potential gains are disproportionately large. One study estimated that 47 percent of human behaviors are of this habitual variety.
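
The asymmetry described above, a bounded one-time cost against a gain that repeats with every future purchase, is easy to make concrete. The short Python sketch below uses made-up numbers purely for illustration; none of them come from the article.

def expected_gain(p_switch, daily_saving, horizon_days, cost_of_bad_trial):
    # Expected net benefit of trying the generic soda once.
    upside = p_switch * daily_saving * horizon_days   # the saving repeats daily if you switch
    downside = (1 - p_switch) * cost_of_bad_trial     # paid at most once
    return upside - downside

# Hypothetical: a 30% chance of liking the generic, $0.50 saved per day,
# a five-year horizon, and $2 wasted on a soda you pour down the sink.
print(round(expected_gain(p_switch=0.3, daily_saving=0.5,
                          horizon_days=5 * 365, cost_of_bad_trial=2.0), 2))
# 272.35: the repeated upside dwarfs the fixed, one-off downside

Even with a modest chance of success, the expected value of the experiment is large, because the cost is capped at a single trial while the payoff compounds over every subsequent purchase.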

Yet many people persist in buying branded products even when equivalent generics are available. These choices are especially noteworthy for drugs, where generics and branded options are chemically equivalent. Why continue to buy a name-brand aspirin when the same chemical compound sits nearby at a cheaper price? Scientists have already verified that the two forms of aspirin are identical. A little personal experimentation would presumably reassure you that the generic has the same effect.

Our common failure to experiment extends well past generics, as one recent study illustrates. On Feb. 5, 2014, London Underground workers went on a 48-hour strike, forcing the closings of several tube stops. The affected commuters had to find alternate routes.

When the strike ended, most people reverted to their old patterns. But roughly one in 20 stuck with the new route, shaving 6.7 minutes from what had been an average 32-minute commute.

The closings imposed by the strike forced experimentation with alternate routes, yielding valuable results. And if the strike had been longer, even more improvements would probably have been discovered.

Yet the fact that many people needed a strike to force them to experiment reveals the deep roots of a common reluctance to experiment. For example, when I think of my favorite restaurants, the ones I have visited many times, it is striking how few of the menu items I have tried. And when I think of all the lunch places near my workplace, I realize that I keep going to the same places again and again.

Habits are powerful. We persist with many of them because we tend to give undue emphasis to the present. Trying something new can be painful: I might not like what I get and must forgo something I already enjoy. That cost is immediate, while any benefits — even if they are large — will be enjoyed in a future that feels abstract and distant. Yes, I want to know what else my favorite restaurant does well, but today I just want my favorite dish.

Overconfidence also holds us back. I am unduly certain in my guesses of what the alternatives will be like, even though I haven’t tried them.

Finally, many so-called choices are not really choices at all. Walking down the supermarket aisle, I do not make a considered decision about soda. I don’t even pause at the generics. I act without thinking; I automatically grab bottles of Diet Coke as I wheel my cart by.

This is true not only in our personal lives. Executives and policymakers fail to experiment in their jobs, and these failures can be particularly costly. For example, in hiring, executives often apply their preconceived notions of which applicants will be a “good fit” as prospective employees. Yet those presumptions are nothing more than guesses and are rarely given the scrutiny of experimentation.

Hiring someone who doesn’t appear to be a good fit is surely risky, yet it might also prove the presumptions wrong, an outcome that is especially valuable when these presumptions amount to built-in advantages for men or whites or people from economically or culturally advantaged backgrounds.

For government policymakers, experimentation is a thorny issue. We are right to be wary of “experimenting” in the sense of playing with people’s lives. Yet we should also be wary of an automatic bias in favor of the status quo. That can amount to a Panglossian belief that the current policy is best, whereas the current policy may actually be a wobbly structure held together by overconfidence, historical accident and the power of precedent.

Experimentation is an act of humility, an acknowledgment that there is simply no way of knowing without trying something different.

Understanding that truth is a first step, but it is important to act on it. Sticking with an old habit is comforting, but one of these days, maybe, I’ll actually buy a bottle of generic soda.