Forging Deep Ties: It’s Complicated

The millennial’s guide to friendship.

Source: Forging Deep Ties: It’s Complicated

There are numerous challenges to making new friends in adulthood. Not only is making friends harder as we get older, but sustaining friendships can be harder as well: After college and beyond, most people don’t get to be part of such a diverse, built-in social network.

This is particularly significant since research shows us that strong social support is linked to healthier aging and positive health outcomes. Yes, we need solitude to nurture our creativity and our spirit, but we need rich, deep, meaningful connections as well — our mental and physical health depend on it. Importantly, research also suggests that elderly people are healthiest when they regularly interact with, and have friends from, all different age groups.

Research shows some of the best health outcomes among heterosexual married men, who often rely almost solely on their wives to meet their emotional and relational needs. This is also why friendship matters so much for women — in this case, straight women: Since women tend to live longer than men, they also tend to rely on their friends, usually other women, for companionship in many of the activities they might previously have enjoyed with their male partners. So it is in our best interest to think, as young adults, about how to nurture and cultivate our friendships.

Sociologist and clinical psychologist Sherry Turkle, who studies our intimacy with machines, says that our online presence often means that we are connected to an ever-widening circle of people, more than ever before, and that this may result in a sort of “friendship lite”: lots of surface connections, but not a lot of face-to-face, meaningful time together. We might have more than a thousand friends on Facebook and hundreds of followers on Twitter, Instagram, and Snapchat, yet far fewer friends with whom we truly want to spend our time. We might have deep affection for our best, oldest, and most cherished friends, yet those are the people we may wind up talking with and seeing less often because of all our time at work and our preoccupation with spending time online with more superficial connections. This paradox is powerful.

Another challenge is that women in their twenties and thirties often resort to meeting new women friends in much the same way that men “do” friendship with other men — over activities like a running or cycling club, a team sport, or a yoga class. These spaces are regarded as less threatening, both for finding friends and for finding dating partners, and as an easy place from which to say, “Hey, wanna go grab a drink sometime?” The thought is that if two people enjoy the same activity, they might have other things in common, or at least can pursue more of that original activity and passion together. The drawbacks are that this can sometimes feel forced and unnatural, and that it may not lead to deep emotional intimacy.

In my own experience, when I look back on the friendships that are dearest to me and that have produced the greatest sense of sisterhood or brotherhood, we did not meet by trying to. For example, four years ago, I attended a fashion show at a department store and saw a woman wearing my favorite jacket, but in a way that looked more interesting than the way I usually wore it, so I approached her and told her so. We wound up standing there for an hour talking about her daughter, a first-year student trying to adjust to college, a subject we both had a lot to say about, since she is a therapist and I am a professor. We also talked about meditation and a bunch of other things; we exchanged numbers, got together, and she remains a true sister-friend. The last place I would have expected to find one of my most soulful friends was at the mall, and yet there she was, when we both least expected it. Deep friendships depend on some sense of spontaneity, and it was present in that first meeting. Her daughters joked that she had quickly developed a “crush” on me, and I couldn’t stop talking about her either; there really is such a thing as friendship chemistry. It can be magnetic.

Another issue that poses real challenges to friendships, especially for women in their twenties and thirties, is how they handle and negotiate choices and priorities around marriage and motherhood. Some women will choose to not have children, others will choose to cocoon with their partners and children, and some will want to include their children in all activities without realizing how that will affect the dynamic of conversation and friendship intimacy.

While one might think that young mothers risk social isolation, many report fulfillment in making friends with other new moms through breastfeeding support groups, groups for stay-at-home moms, or through libraries, parks, and day care. Still, others complain that connecting through children is not enough — there need to be more adult reasons that nourish and sustain a friendship.

Also, when people are new to living with a romantic partner or spouse, or become new parents, they are usually much less available for impromptu dinners out, long meandering phone calls late into the night, weekend get-togethers, trips with friends, and so on. Single friends may grow impatient with the other person’s lack of availability or feel left behind. And married friends don’t always want to hear about a single person’s last date or the more spontaneous rhythms of their life. The single person may be cast as immature, the married person as boring.

During this period of life, people are making different choices for how to spend their time and resources. Some are using their twenties and thirties to attend graduate school. Others travel extensively, and still others settle down, buy houses, and start families. Inevitably, these decisions can dramatically impact people’s ability to do things together. One person may be earning a robust income and wanting an adventurous travel partner, while another is eating ramen noodles in graduate school. In cases like this, even choosing a restaurant to meet can feel stressful. The sense of power disparity can affect each person’s perception of themselves and of the other person and create a chasm. Each person can feel a certain level of shame or guilt.

Other issues can cause divisions. Politics have recently created a wedge in many people’s relationships, for example, and can also be a determining factor of whether people feel they can, or even want to try to, connect.

Also, with the pressure in one’s twenties and thirties to launch a successful career, time is a precious commodity, and people generally get pickier about who they want to spend it with. They may also be starting the process of liking the skin they’re in and enjoying their own company more — which is a good thing. Sometimes I hear women say that if they are choosing between a person who might bore them, or who drones on and on, or who has different values than they do, they are likely to just watch Netflix and chill on their own.

Interestingly, some women report feeling “maxed out” on friends and unable to find time and space to fit more people into their lives. In this scenario, friendship becomes just one more thing on a seemingly endless to-do list.

Because of career pressures, people in their twenties and thirties are generally more on the move and may literally pick up and move across the country. So staying friends with people can be trickier. Despite all the devices we rely on to stay in touch, sometimes we simply cannot replace the feeling we get when we are in the company of friends and can reach out and hug them, or watch them laugh, etc. Those I have interviewed report using all sorts of apps to stay in touch with long-distance friends, such as Skype, FaceTime, WhatsApp, and Houseparty, yet rely on them far less to find new friends.

And people can get tired of making plans that won’t materialize for weeks or months; that can simply be unsatisfying. It can also feel overly planned, rigid, and almost transactional to rely on a few hours together every few months for essentially catching up, without ever transcending that. This also helps explain why research shows that the older we get, the more we feel drawn back to relationships forged earlier in life, with people who know our backstories and with whom we can pick up where we left off without as much surface catch-up.

In a day and age when relationships may look more superficial and fleeting, there tends to be more ghosting: Just as teenagers are more and more frequently backing out of prom dates at the last minute if they get a better offer, adults are making plans and, when the date comes up, reporting relief when they have to cancel or the other person backs out. There’s a sense of wanting to control how we interact and under what specific conditions. But this also limits how we experience friendship, since at the same time, we often yearn for durable and reliable connections.

We might assume that making friends should be simpler than finding dating partners, but the opposite is often true: While sexual intimacy may be a big draw in a dating situation and is often used to forge and deepen emotional intimacy, friendship offers no similar crutch. It has to be interesting, reliable, spontaneous, fun, trustworthy, deep, and rich all on its own.

Deep friendship means grabbing some immediacy together. It also demands that we reveal a certain amount of vulnerability. This is not a quality prized on social media, and people in their twenties and thirties, while just as vulnerable as ever, are understandably reticent to reveal that.

Finally, it is worth noting the continually growing phenomenon of only children, many of whom come of age intuiting by necessity that friends are the family we choose; it may be through them that we as a society come to better appreciate the powerful role of friendship in our lives.

 



Happiness is overrated — finding deep meaning in life comes down to 4 basic “pillars”

Emily Esfahani Smith, in her TED Talk viewed by almost 3 million people, explains what she learned from thousands of pages of psychology, neuroscience, and philosophy.

Source: Happiness is overrated — finding deep meaning in life comes down to 4 basic “pillars”

Being happy is the goal in life, isn’t it? Isn’t that what we all aim for? For most people it looks something like this: good grades, popularity at school, good education, great job, ideal life partner, beautiful home, money for great vacations.

Yet, many people have achieved exactly this and still feel empty and unfulfilled.

Is there something wrong with expecting happiness to result from success in life? Clearly it’s not working.

The suicide rate is rising around the world, and even though life is getting objectively better by nearly every conceivable standard, more people feel hopeless, depressed and alone.

Is there more to life than trying to be happy?

Writer Emily Esfahani Smith thinks so. In her popular 2017 TED Talk, viewed by almost 3 million people, she explains what she learned from spending five years interviewing hundreds of people and reading through thousands of pages of psychology, neuroscience and philosophy.

https://embed.ted.com/talks/emily_esfahani_smith_there_s_more_to_life_than_being_happy

In her search she found that it’s not a lack of happiness that leads to despair. It’s a lack of meaning in life.

What is the difference between being happy and having meaning in life?

“Many psychologists define happiness as a state of comfort and ease, feeling good in the moment. Meaning, though, is deeper. The renowned psychologist Martin Seligman says meaning comes from belonging to and serving something beyond yourself and from developing the best within you,” says Smith.

“Our culture is obsessed with happiness, but I came to see that seeking meaning is the more fulfilling path. And the studies show that people who have meaning in life, they’re more resilient, they do better in school and at work, and they even live longer,” she adds.

Her five-year study led her to the discovery of four pillars that underpin a meaningful life. The first three I might have guessed, but the last one caught me off guard. And it’s really a crucial aspect of the meaning we give to our lives.

“The first pillar is belonging. Belonging comes from being in relationships where you’re valued for who you are intrinsically and where you value others as well,” says Smith.

But she warns that not all belonging is desired belonging. “Some groups and relationships deliver a cheap form of belonging; you’re valued for what you believe, for who you hate, not for who you are.”  This is not true belonging.

For many people, belonging is the most essential source of meaning. Their bonds with family and friends give real meaning to their lives.

The second pillar or key to meaning is purpose, says Smith, and it’s not the same thing as finding that job that makes you happy.

The key to purpose, says Smith, is using your strengths to serve others. For many people that happens through work, and when they find themselves unemployed, they flounder.

The third pillar of meaning is transcendence. Transcendent states are those rare moments when you lose all sense of time and place and you feel connected to a higher reality.

“For one person I talked to, transcendence came from seeing art. For another person, it was at church. For me, I’m a writer, and it happens through writing. Sometimes I get so in the zone that I lose all sense of time and place. These transcendent experiences can change you.”

So we have belonging, purpose and transcendence.

Now, the fourth pillar of meaning is a surprising one.

The fourth pillar is storytelling, the story you tell yourself about yourself.

“Creating a narrative from the events of your life brings clarity. It helps you understand how you became you.

“But we don’t always realize that we’re the authors of our stories and can change the way we’re telling them. Your life isn’t just a list of events. You can edit, interpret and retell your story, even as you’re constrained by the facts.”

This is so true. It boils down to perspective, and that can make all the difference: the difference between a miserable life plagued with misfortune and an inspirational life filled with gratitude and insight.

No matter what has happened in your life to break you, you can heal again and find new purpose in life like so many people who have allowed the bad in their lives to be redeemed by the good.

To learn more, watch Smith’s recounting of such a redemptive story and her touching retelling of a powerful experience she had with her dad when he almost died of a heart attack.

I’m a South Africa-based writer, passionate about exploring the latest ideas in artificial intelligence, robotics, and nanotechnology. I also focus on the human condition, with a particular interest in human intuition and creativity. To share some feedback about my articles, email me at coert@ideapod.com

30 years after Prozac arrived, we still buy the lie that chemical imbalances cause depression

Prozac nation, 30 years on. (Reuters/Lucy Nicholson)

We don’t know how Prozac works, and we don’t even know for sure if it’s an effective treatment for the majority of people with depression.

Source: 30 years after Prozac arrived, we still buy the lie that chemical imbalances cause depression

Written by Olivia Goldhill

Some 2,000 years ago, the Ancient Greek scholar Hippocrates argued that all ailments, including mental illnesses such as melancholia, could be explained by imbalances in the four bodily fluids, or “humors.” Today, most of us like to think we know better: Depression—our term for melancholia—is caused by an imbalance, sure, but a chemical imbalance, in the brain.

This explanation, widely cited as empirical truth, is false. It was once a tentatively posed hypothesis in the sciences, but no evidence for it has been found, and so it has been discarded by physicians and researchers. Yet the idea of chemical imbalances has remained stubbornly embedded in the public understanding of depression.

Prozac, approved by the US Food and Drug Administration 30 years ago today, on Dec. 29, 1987, marked the first in a wave of widely prescribed antidepressants that built on and capitalized off this theory. No wonder: Taking a drug to tweak the biological chemical imbalances in the brain makes intuitive sense. But depression isn’t caused by a chemical imbalance, we don’t know how Prozac works, and we don’t even know for sure if it’s an effective treatment for the majority of people with depression.

One reason the theory of chemical imbalances won’t die is that it fits in with psychiatry’s attempt, over the past half century, to portray depression as a disease of the brain, instead of an illness of the mind. This narrative, which depicts depression as a biological condition that afflicts the material substance of the body, much like cancer, divorces depression from the self. It also casts aside the social factors that contribute to depression, such as isolation, poverty, or tragic events, as secondary concerns. Non-pharmaceutical treatments, such as therapy and exercise, often play second fiddle to drugs.

In the three decades since Prozac went on the market, antidepressants have proliferated, further feeding the myths and false narratives we tell about mental illnesses. In that time, these trends have shifted not just our understanding, but our actual experiences of depression.

* * *

In the two millennia since Hippocrates founded medicine, society has embraced then rejected many theories of mental illness. Each hypothesis has struggled to reconcile how the subjective psychological symptoms of depression map onto physical malfunctions in the brain. The intractable relationship between the two has never been satisfactorily addressed.

Hippocrates’ humor-based notion of medicine, much like contemporary psychiatry, portrayed mental illness as rooted in biological malfunctions. But the evolution from Hippocrates to today has been far from smooth: In the centuries between, there was widespread belief in superstition and the supernatural, and symptoms that we would today call “depression” were often attributed to witchcraft, magic, or the devil.

The brain became the primary focus of depression in the 19th century, thanks to phrenologists. The field of phrenology, which took the shape of the skull as a determinant of the features of the underlying brain and of psychological tendencies, was used by bigots to justify eugenics and has rightly been dismissed. But, though highly flawed, it did advance ideas of the brain still believed today. Whereas other physicians of the time believed organs like the heart and liver were connected to emotional passions, phrenologists held that the brain is the only “organ of the mind.” Phrenologists were also the first to argue that different areas of the brain have distinct, specialized roles and, based on this belief, posited that depression could be linked to a particular brain region.

The attention on the brain faded in the 20th century, when phrenology was supplanted by Freudian psychoanalysts, who argued that the unconscious mind (rather than brain) is the predominant cause of mental illness. Psychoanalysis considered environmental factors such as family and early childhood experiences as the key determinants of the characteristics of the adult mind, and of any mental illness.

“Beginning with Freud’s influence, through the first half of the 20th century, the brain almost disappeared from psychiatry,” says Allan Horwitz, a sociology professor at Rutgers University who has written on the social construction of mental disorders. “When it came back, it came back with a vengeance.”

* * *

A conglomeration of factors, beginning in the 1960s but having the largest effects in the ‘70s and ‘80s, contributed to psychiatry’s renewed emphasis on the brain. First, in the US, conservative presidents disparaged as liberal causes any political efforts to alleviate the social conditions that contribute to poor mental health, such as poverty, unemployment, and racial discrimination. “Biologically-based approaches became more politically palatable,” says Horwitz, noting that the National Institute of Mental Health largely abandoned its research on the social causes of depression under President Richard Nixon.

There was also growing interest in the role of drugs, for good reason: Newly developed antidepressants showed early success in treating mental illnesses. Though Freudian psychoanalysts did use the drugs alongside their therapy, the medication didn’t neatly fit with their theories. And while individuals had previously paid for mental health care themselves in the US, the 1960s saw private insurance companies and public programs, such as Medicaid and Medicare, increasingly take on those costs. These groups were impatient to see results from their investment, notes Horwitz—and drugs were clearly both faster and cheaper than years of psychoanalysis.

Psychoanalysis also rapidly went out of fashion in that time. Organizations such as the National Alliance on Mental Illness, which advocated for the interests of those affected by mental illness and their families, were distrustful of psychoanalysis’ tendency to blame parental figures. There was also a growing distaste for psychoanalysis among those on the left side of the political spectrum, who believed psychoanalytic theories upheld conservative bourgeois values.

At the time, psychoanalysis was deeply entwined with the field of psychiatry (the medical specialty that treats mental disorders). Until 1992, psychoanalysts were required to have medical degrees (paywall) to practice in the US—and most had MDs in psychiatry. “Psychiatry has always had a tenuous position in the prestige hierarchy of medicine,” says Horwitz. “They weren’t regarded by doctors and other specialties as being very medical. They were seen more as storytellers as opposed to having a scientific basis.” As Freudian psychoanalysis became increasingly rejected as a pseudoscience, the entire field of psychiatry was tarnished by association—and so it pivoted, creating a new framework for diagnosing and treating mental health, founded on the role of the physical brain.

The theory of chemical imbalances was a neat way of explaining just how brain malfunctions could cause mental illness. It was first hypothesized by scientists in academic papers in the mid-to-late 1960s, after the seeming early success of drugs thought to adjust chemicals in the brain. Though the evidence never materialized, it became a popular theory and was repeated so often it became accepted truth.

It’s not hard to see why the theory caught on: It suited psychiatrists’ newfound attempt to create a system of mental health that mirrored diagnostic models used in other fields of medicine. The focus on a clear biological cause for depression gave practicing physicians an easily understandable theory to tell patients about how their disease was being treated.

“The fact that practicing physicians and leaders of science bought that idea, to me, is so disturbing,” says Steve Hyman, director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard.

The shifting language of the Diagnostic and Statistical Manual of Mental Disorders—widely and deferentially referred to as the Bible of contemporary psychiatry—clearly shows the evolution of the field’s portrayal of mental illness. The second edition (pdf), published in 1968 (the DSM II), still showed the influence of Freud; conditions are broadly divided into more serious psychoses—with symptoms including delusional thinking, hallucinations, and breaks from reality—and less severe neuroses—such as hysterical, phobic, obsessive compulsive, and depressive neuroses. The neuroses are not clearly differentiated from “normal” behaviors. Importantly, anxiety—which Freud believed was foundational to the human psyche and inextricably linked with societal repression—was portrayed as the underlying condition of all neuroses.

The DSM II also says depressive neurosis could be “due to an internal conflict or to an identifiable event such as the loss of a loved object or cherished possession.” The notion of “internal conflict” is explicitly drawn from Freud’s work, which posited that internal psychological conflicts drive irrational thinking and behaviors.

The third edition of the DSM (pdf), published in 1980, uses language far closer to contemporary professional depictions of mental illness. It does not suggest that “internal conflicts” cause depression, anxiety is no longer portrayed as the underlying cause of all mental illnesses, and the manual focuses on creating a checklist of symptoms (whereas, in the DSM II, none were listed for depressive neurosis).

Today, the DSM-5 (pdf) lists various kinds of depressive disorders, such as “depressive disorder due to another medical condition,” “substance/medication-induced depressive disorder,” and “major depressive disorder.” Each of these disorders is distinguished by typical duration and its link to various causes, but the listed symptoms are broadly the same. Or, as the DSM-5 says: “The common feature of all of these disorders is the presence of sad, empty, or irritable mood, accompanied by somatic and cognitive changes that significantly affect the individual’s capacity to function. What differs among them are issues of duration, timing, or presumed etiology.”

The problem is that, though various people could be classed as suffering from a distinct depressive disorder according to their life events, there aren’t clearly defined treatments for each disorder. Patients from all groups are treated with the same drugs, though they are unlikely to be experiencing the same underlying biological condition, despite sharing some symptoms. Currently, a hugely heterogeneous group of people is prescribed the same antidepressants, adding to the difficulty of figuring out who responds best to which treatment.

* * *

Before antidepressants became mainstream, drugs that treated various symptoms of depression were depicted as “tonics which could ease people through the ups and downs of normal, everyday existence,” write Jeffrey Lacasse, a Florida State University professor specializing in psychiatric medications, and Jonathan Leo, a professor of anatomy at Lincoln Memorial University, in a 2007 paper on the history of the chemical imbalance theory.

In the 1950s, Bayer marketed Butisol (a barbiturate) as “the ‘daytime sedative’ for everyday emotional stress”; in the 1970s, Roche advertised Valium (diazepam) as a treatment for the “unremitting buildup of everyday emotional stress resulting in disabling tension.”

Both the narrative and the use of drugs to treat symptoms of depression transformed after Prozac—the brand name for fluoxetine—was released. “Prozac was unique when it came out in terms of side effects compared to the antidepressants available at the time (tricyclic antidepressants and monoamine oxidase inhibitors),” Anthony Rothschild, psychiatry professor at the University of Massachusetts Medical School, writes in an email. “It was the first of the newer antidepressants with less side effects.”

Even the minimum therapeutic dose of commonly prescribed tricyclics like amitriptyline (Elavil) could cause intolerable side effects, says Hyman. “Also these drugs were potentially lethal in overdose, which terrified prescribers.” The market for early antidepressants, as a result, was small.

Prozac changed everything. It was the first major success in the selective serotonin reuptake inhibitor (SSRI) class of drugs, designed to target serotonin, a neurotransmitter. It was followed by many more SSRIs, which came to dominate the antidepressant market. The variety affords choice, which means that anyone who experiences a problematic side effect from one drug can simply opt for another. (Each antidepressant causes variable and unpredictable side effects in some patients. Deciding which antidepressant to prescribe to which patient has been described as a “flip of a coin.”)

Rothschild notes that all existing antidepressants have similar efficacy. “No drug today is more efficacious than the very first antidepressants such as the tricyclic imipramine,” agrees Hyman. Three decades since Prozac arrived, there are many more antidepressant options, but no improvement in the efficacy of treatment.

Meanwhile, as Lacasse and Leo note in a 2005 paper, manufacturers typically marketed these drugs with references to chemical imbalances in the brain. For example, a 2001 television ad for sertraline (another SSRI) said, “While the causes are unknown, depression may be related to an imbalance of natural chemicals between nerve cells in the brain. Prescription Zoloft works to correct this imbalance.”

Another advertisement, this one in 2005, for the drug paroxetine, said, “With continued treatment, Paxil can help restore the balance of serotonin,” a neurotransmitter.

“[T]he serotonin hypothesis is typically presented as a collective scientific belief,” write Lacasse and Leo, though, as they note: “There is not a single peer-reviewed article that can be accurately cited to directly support claims of serotonin deficiency in any mental disorder, while there are many articles that present counterevidence.”

Despite the lack of evidence, the theory has saturated society. In their 2007 paper, Lacasse and Leo point to dozens of articles in mainstream publications that refer to chemical imbalances as the unquestioned cause of depression. One New York Times article on Joseph Schildkraut, the psychiatrist who first put forward the theory in 1965, states that his hypothesis “proved to be right.” When Lacasse and Leo asked the reporter for evidence to support this unfounded claim, they did not get a response. A decade on, there are still dozens of articles published every month in which depression is unquestionably described as the result of a chemical imbalance, and many people explain their own symptoms by referring to the myth.

Meanwhile, 30 years after Prozac was released, rates of depression are higher than ever.

* * *

Hyman responds succinctly when I ask him to discuss the causes of depression: “No one has a clue,” he says.

There’s not “an iota of direct evidence” for the theory that a chemical imbalance causes depression, Hyman adds. Early papers that put forward the chemical imbalance theory did so only tentatively, but, “the world quickly forgot their cautions,” he says.

Depression, according to current studies, has an estimated heritability of around 37%, so genetics and biology certainly play a significant role. Brain activity corresponds with experiences of depression, just as it corresponds with all mental experiences. This, says Horwitz, “has been known for thousands of years.” Beyond that, knowledge is precarious. “Neuroscientists don’t have a good way of separating when brains are functioning normally or abnormally,” says Horwitz.

If depression were a simple matter of adjusting serotonin levels, SSRIs should work immediately, rather than taking weeks to have an effect. And reducing serotonin levels in the brain should create a state of depression, but research has found that this isn’t the case. One drug, tianeptine (a non-SSRI sold under the brand names Stablon and Coaxil across Europe, South America, and Asia, though not the UK or US), has the opposite effect of most antidepressants and decreases levels of serotonin.

This doesn’t mean that antidepressants that affect levels of serotonin definitively don’t work—it simply means that we don’t know if they’re affecting the root cause of depression. A drug’s effect on serotonin could be a relatively inconsequential side effect, rather than the crucial treatment.

History is filled with treatments that work but fundamentally misunderstand the causes of the illness. In the 19th century, for example, miasma theory held that infectious diseases such as cholera were caused by noxious smells, or “bad air.” To get rid of these smells, cleaning up waste became a priority—which was ultimately beneficial, but because waste feeds the microorganisms that actually transmit infectious disease, rather than because of the smells.

* * *

It’s possible that our current medical categorization and inaccurate cultural perception of “depression” are actually causing more and more people to suffer from depression. There are plenty of historical examples of mental health symptoms shifting alongside cultural expectations: Hysteria has declined as women’s agency has increased, for example, while symptoms of anorexia in Hong Kong changed as the region became more aware of western notions of the illness.

At its core, severe depression has likely retained the same symptoms over the centuries. “When it’s severe, whether you read the ancient Greeks, Shakespeare, [Robert] Burton on [The Anatomy of] Melancholy, it looks just like today,” says Hyman. “The condition is the same; it’s part of being human.” John Stuart Mill’s 19th century description of his mental breakdown is eminently familiar to a contemporary reader.

But less severe cases, in the past, may have been chalked up to simply being “justifiably sad,” even by those experiencing them, whereas they’d be considered a health condition today. And so psychiatry “reframes ordinary distress as mental illness,” says Horwitz. This framework doesn’t simply label sadness as depression; it could lead people to experience depressive symptoms where they would previously have been simply unhappy. The impact of this shift is impossible to track: Mental illness is now recognized as a legitimate health issue, and so many more people are comfortable admitting to their symptoms than ever before. How many more people are truly experiencing depression for the first time, versus acknowledging symptoms they once kept secret? “The prevalence is difficult to determine,” acknowledges Hyman.

* * *

Perhaps unraveling the true causes of depression and exactly how antidepressants treat the symptoms would be a less pressing concern if we knew, with confidence, that antidepressants worked well for the majority of patients. Unfortunately, we don’t.

The work of Irving Kirsch, associate director of the Program in Placebo Studies at Harvard Medical School, including several meta-analyses of the trials of all approved antidepressants, makes a compelling case that there’s very little difference between antidepressants and placebos. “They’re slightly more effective than placebo. The difference is so small, it’s not of any clinical importance,” he says. Kirsch advocates non-drug-based treatments for depression. Studies show that while drugs and therapy are similarly effective in the short-term, in the long-term those who don’t take medication seem to do better and have a lower risk of relapse.

Others, like Peter Kramer, a professor at Brown University’s medical school, are strongly in favor of leaning on the drugs. Kramer is skeptical about the quality of many studies on alternative therapies for depression; people with debilitating depression are unlikely to sign up for anything that requires them to do frequent exercise or therapy, for example, and so are often excluded from studies that eventually purport to show exercise is as effective a treatment as drugs. And, as he writes in an email, antidepressants “are as effective as most treatments doctors rely on, in the middle range overall, about as likely to work as Excedrin” for a headache.

Others are more circumspect. Hyman acknowledges that, when taken in aggregate, all the trials for approved antidepressants show little difference between the drugs and placebo. But that, he says, obscures individual differences in responses to antidepressants. “Some people really respond, some don’t respond at all, and everything in between,” Hyman adds.

There are currently no known biomarkers to definitively show who will respond to which antidepressants. Severely depressed patients who don’t have the energy or interest to go to therapy should certainly be prescribed drugs. For those who are healthy enough to make it to therapy—well, opinions differ. Some psychiatrists believe in a combination of drugs and therapy; some believe antidepressants can be effective for all levels of depression and no therapy is needed; and others believe therapy alone is the best treatment option for all but the most severely depressed. Unfortunately, says Hyman, there’s little evidence on the best treatment plan for each patient.

Clearly, many people respond well to antidepressants. The drugs became so popular in large part because many patients benefited from the treatment and experienced significantly reduced depressive symptoms. Such patients needn’t question why their symptoms have improved or whether they should seek alternative forms of treatment.

On the other hand, the drugs simply do not work for others. Further, there’s evidence to suggest framing depression as a biological disease reduces agency, and makes people feel less capable of overcoming their symptoms. It effectively divorces depression from a sense of self. “It’s not me as a person experiencing depression. It’s my neurochemicals or my brain experiencing depression. It’s a way of othering the experience,” says Horwitz.

It’s nearly impossible to get good data to explain why depression treatments work for some and not others. Psychiatrists largely evaluate the effects of drugs by subjective self-reports; clinical trials usually include only patients that meet a rarefied set of criteria; and it’s hard to know whether those who respond well to therapy benefitted from another, unmeasured factor, such as mood resilience. And when it comes to the subjective experience of mental health, there’s no meaningful difference between what feels like effective treatment and what is effective treatment.

There are also no clear data on whether, when antidepressants work, they actually cause symptoms to fully dissipate long-term. Do antidepressants cure depression, or simply make it more bearable? We don’t know.

* * *

Depression is now a global health epidemic, affecting one in four people worldwide. Treating it as an individual medical disorder, primarily with drugs, and failing to consider the environmental factors that underlie the epidemic—such as isolation and poverty, bereavement, job loss, long-term unemployment, and sexual abuse—is comparable to asking citizens to live in a smog-ridden city and using medication to treat the diseases that result instead of regulating pollution.

Investing in substantive societal changes could help prevent the onset of widespread mental illness; we could attempt to prevent the depressive health epidemic, rather than treating it once it’s already prevalent. The conditions that engender a higher quality of life—safe and affordable housing, counsellors in schools, meaningful employment, strong local communities to combat loneliness—are not necessarily easy or cheap to create. But all would lead to a population that has fewer mental health issues, and would be, ultimately, far more productive for society.

Similarly, though therapy may be a more expensive treatment plan than drugs, evidence suggests that cognitive behavioral therapy (CBT) is at least as effective as antidepressants, and so deserves considerable investment. Much as physical therapy can strengthen the body’s muscles, some patients effectively use CBT to build coping mechanisms and healthy thought habits that prevent further depressive episodes.

In the current context, where psychiatry’s system of diagnosing mental health mimics other medical fields, the role of medicine in treating mental illness is often presented as evidence to skeptics that depression is indeed a real disease. Some might worry that a mental health condition treated partly with therapy, exercise, and societal changes could be seen as less serious or less legitimate. Though this line of thinking reflects a well-meaning attempt to reduce stigma around mental health, it panders to faulty logic. After all, many bodily illnesses are massively affected by lifestyle. “It doesn’t make heart attacks less real that we want to do exercise and see a dietician,” says Hyman. No illness needs to be entirely dependent on biological malfunctions for it to be considered “real.” Depression is real. The theory that it’s caused by chemical imbalances is false. Three decades since the antidepressants that helped spread this theory arrived on the market, we need to remodel both our understanding and treatment of depression.

Why Trying New Things Is So Hard to Do


Source: https://www.nytimes.com/2017/12/01/business/why-trying-new-things-is-so-hard.html

I drink a lot of Diet Coke: two liters a day, almost six cans’ worth. I’m not proud of the habit, but I really like the taste of Diet Coke.

As a frugal economist, I’m well aware that switching to a generic brand would save me money, not just once but daily, for weeks and years to come. Yet I only drink Diet Coke. I’ve never even sampled generic soda.

Why not? I’ve certainly thought about it. And I tell myself that the dollars involved are inconsequential, really, that I’m happy with what I’m already drinking and that I can afford to be passive about this little extravagance.

Yet I’m clearly making an error, one that reveals a deeper decision-making bias whose cumulative cost is sizable: Like most people, I conduct relatively few experiments in my personal life, in both small and big things.

This is a pity because experimentation can produce outsize rewards. For example, I wouldn’t be risking much by trying a generic soda, and if I liked it enough to switch, the payout could be big: All my future sodas would be cheaper.

When the same choice is made over and over again, the downside of trying something different is limited and fixed — that one soda is unappealing — while the potential gains are disproportionately large. One study estimated that 47 percent of human behaviors are of this habitual variety.
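To make that asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python. The prices are hypothetical placeholders (the column gives no actual figures); the point is the structure of the trade-off, a bounded one-time downside against a small gain that repeats every day.

```python
# Hypothetical numbers, purely for illustration -- the column gives no actual prices.
daily_brand_cost = 2.50      # assumed daily spend on the name-brand soda
daily_generic_cost = 1.50    # assumed daily spend on a generic alternative

# Worst case of the experiment: one generic soda you don't like, bought once.
one_time_downside = daily_generic_cost

# If the switch sticks, the small daily saving repeats indefinitely.
years = 5
daily_saving = daily_brand_cost - daily_generic_cost
potential_gain = daily_saving * 365 * years

print(f"Worst-case cost of trying the generic once: ${one_time_downside:.2f}")
print(f"Potential savings over {years} years if the switch sticks: ${potential_gain:,.2f}")
```

Even with the downside generously counted, the repeated gain dwarfs it, which is exactly the lopsidedness the column describes for habitual choices.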

Yet many people persist in buying branded products even when equivalent generics are available. These choices are especially noteworthy for drugs, where generics and branded options are chemically equivalent. Why continue to buy a name-brand aspirin when the same chemical compound sits nearby at a cheaper price? Scientists have already verified that the two forms of aspirin are identical. A little personal experimentation would presumably reassure you that the generic has the same effect.

Our common failure to experiment extends well past generics, as one recent study illustrates. On Feb. 5, 2014, London Underground workers went on a 48-hour strike, forcing the closings of several tube stops. The affected commuters had to find alternate routes.

When the strike ended, most people reverted to their old patterns. But roughly one in 20 stuck with the new route, shaving 6.7 minutes from what had been an average 32-minute commute.

The closings imposed by the strike forced experimentation with alternate routes, yielding valuable results. And if the strike had been longer, even more improvements would probably have been discovered.

Yet the fact that many people needed a strike to force them to experiment reveals the deep roots of a common reluctance to experiment. For example, when I think of my favorite restaurants, the ones I have visited many times, it is striking how few of the menu items I have tried. And when I think of all the lunch places near my workplace, I realize that I keep going to the same places again and again.

Habits are powerful. We persist with many of them because we tend to give undue emphasis to the present. Trying something new can be painful: I might not like what I get and must forgo something I already enjoy. That cost is immediate, while any benefits — even if they are large — will be enjoyed in a future that feels abstract and distant. Yes, I want to know what else my favorite restaurant does well, but today I just want my favorite dish.

Overconfidence also holds us back. I am unduly certain in my guesses of what the alternatives will be like, even though I haven’t tried them.

Finally, many so-called choices are not really choices at all. Walking down the supermarket aisle, I do not make a considered decision about soda. I don’t even pause at the generics. I act without thinking; I automatically grab bottles of Diet Coke as I wheel my cart by.

This is true not only in our personal lives. Executives and policymakers fail to experiment in their jobs, and these failures can be particularly costly. For example, in hiring, executives often apply their preconceived notions of which applicants will be a “good fit” as prospective employees. Yet those presumptions are nothing more than guesses and are rarely given the scrutiny of experimentation.

Hiring someone who doesn’t appear to be a good fit is surely risky, yet it might also prove the presumptions wrong, an outcome that is especially valuable when these presumptions amount to built-in advantages for men or whites or people from economically or culturally advantaged backgrounds.

For government policymakers, experimentation is a thorny issue. We are right to be wary of “experimenting” in the sense of playing with people’s lives. Yet we should also be wary of an automatic bias in favor of the status quo. That can amount to a Panglossian belief that the current policy is best, whereas the current policy may actually be a wobbly structure held together by overconfidence, historical accident and the power of precedent.

Experimentation is an act of humility, an acknowledgment that there is simply no way of knowing without trying something different.

Understanding that truth is a first step, but it is important to act on it. Sticking with an old habit is comforting, but one of these days, maybe, I’ll actually buy a bottle of generic soda.

 

Does Getting Older Mean Losing Your Sense of Humor?

Why absurd humor might help keep your brain young.

Source: Does Getting Older Mean Losing Your Sense of Humor?

We all like different jokes. Humor styles vary as much as we do, which is why a hilarious joke to one person may fall completely flat for another. Perhaps that’s why my own favorite joke appeals to me so much.

Duck #1:  “Quack”
Duck #2:  “I was going to say that!”

I’ve shared that joke dozens of times, perhaps hundreds, and it seldom gets a laugh. Yet I still love it, maybe because it’s a shibboleth for fans of silly humor. Laugh out loud, and I know you’re my kind of person. We all have jokes like that, ones only we love, and that love says a lot about who we really are.

Not long ago, a British researcher, Richard Wiseman, analyzed differences in humor tastes by conducting one of the largest humor studies of all time. He set up a website where people could submit their own favorite jokes while also rating the favorites of others. Not surprisingly, the most common jokes were also the simplest (“What’s brown and sticky? A stick!”). Some were raunchy, others complex, but all said something about the people who submitted them. Brits, as we know, tend to like absurd humor (“Why did the elephant stand on the marshmallow? So she wouldn’t fall in the hot chocolate.”). Americans prefer their jokes to be aggressive. The Germans in Wiseman’s study found nearly all jokes hilarious, which either means they have a great sense of humor or none at all.

Many researchers have tried to use these humor differences to predict personality, though results have been mixed. In fact, it’s almost impossible to guess what joke any particular person will like, with one big exception. As we get older, we tend to turn away from absurd jokes like my duck quip. They’re just too weird.

That’s just the start. Scientists have found that disliking absurd humor as we get older is linked to a very specific personality trait, and that’s conservatism. As we age, we tend to be more fixed in our ways, leading to more conservative outlooks. There’s even a saying about this—children are fools if they are not liberal, just as adults are fools if they are not conservative. This maturation shows itself in several ways. One is a dislike for talking ducks.

In the twenty years since that first study on absurd humor and conservatism was conducted, other scientists have begun to understand why such differences occur. It turns out that absurd humor activates different brain regions than traditional jokes. Take this example:

A student asks her gym instructor to teach her how to do a split. “How flexible are you?” the instructor asks. The girl replies: “I can’t make Tuesdays.”

That’s a normal joke, what is often called an incongruity joke, because the girl’s interpretation of “flexible” is incongruous with the instructor’s. That kind of joke activates the temporal lobe and cingulate cortex, regions responsible for conflict detection and memory. However, when confronted with something like the duck joke, which ignores the standard setup and resolution, very different brain regions take control. Not only are fewer brain regions activated, but they tend to be focused on interpreting the language of the joke, rather than producing a cohesive story. In other words, our brains tend to be as confused as their owners.

As we get older, we like our stories to make sense, so it shouldn’t be surprising that we don’t like jokes that go nowhere. But could absurd humor be a sort of exercise? Perhaps we become more conservative over the years because we become less flexible and less patient with absent punchlines. Maybe our brains need to be jolted with humor that doesn’t take us where we expect. Even if that destination is nowhere at all.

Which makes me think that maybe duck jokes are quite important. Countless studies have already found that laughing frequently improves heart health, immune system response, and even mental outlook. Perhaps absurd humor might be the best workout for the brain we can get. At worst, it gives us something to think about.

And now, my second favorite joke:  “What has eight legs and an eye? Two chairs and half a cow’s head.”

You’re welcome.

References

Dai, R., Chen, H., Chan, Y., Wu, C., Li, P., Cho, S., & Hu, J. (2017). To Resolve or Not To Resolve, that is the Question. Frontiers in Psychology, 8, 1-13.

Ruch, W., McGhee, P., & Hehl, F. (1990). Age Differences in the Enjoyment of Incongruity-Resolution and Nonsense Humor During Adulthood. Psychology and Aging, 5, 348-355.

Wiseman, R. (2008). Quirkology. London, UK: Pan Books.

What Type of Thinker Are You?

When you get stuck in convergent thinking, you miss possibilities open to you.

Source: What Type of Thinker Are You?

“Convergent” and “divergent” thinking represent two different ways of looking at the world. A convergent thinker sees a limited, predetermined number of options. By contrast, a divergent thinker is always looking for more options. Many of us get stuck in convergent thinking and, as a result, don’t see the many possibilities available to us. Let’s have a look at both types of thinking.

Convergent Thinking. Convergent is a form of the word “converging” and so it means “coming together.” Convergent thinking is what you engage in when you answer a multiple choice question (although, in real life, we often only see two choices). In convergent thinking, you begin by focusing on a limited number of choices as possibilities. Then you choose the “right” answer or course of action from among those choices. The figure on the left side of the diagram illustrates convergent thinking.

Here’s an example: “People are sick or people are healthy.” For many years after becoming chronically ill, those were the only two possibilities I saw: I was sick or I was healthy. Each night I’d go to bed, hoping to wake up healthy. When I didn’t, I considered myself to be sick. It was one or the other.

Along with that, I thought I only had two possible courses of action: I could be a law professor or I could do nothing with my life. That may sound extreme, but that’s how I saw it at the time. Not wanting to do the latter, I forced myself to keep working, even though I was too sick to do so. It didn’t occur to me that I could be in poor health and lead a productive life.

Here’s another example of convergent thinking. When I considered how friends responded to me when I became chronically ill, I saw only two possibilities: those who stuck around cared about me and those who didn’t stick around didn’t care about me. I wasn’t able to see that people could drop out of my life and still care about me.

I’m not dismissing the value of convergent thinking. It’s an important cognitive tool, particularly in math and science. Unless I’m missing something, it would be silly to be open to other options than “4” when asked, “What’s 2+2?”. But convergent thinking has at times been a great source of suffering for me during my illness, because it’s kept me from seeing beyond my limited vision of what is possible in this new and unexpected life.

You need not have health difficulties to see how convergent thinking—because it leads you to take a narrow view of your life—can be unskillful. For example: “It’s aerobics or no exercise at all.” With this type of thinking, if you have an injury that prevents you from doing aerobics, you’ll opt for no exercise at all rather than considering other options, such as doing something less strenuous but still valuable.

Another example: “This new job is going to be great or it’s going to be terrible.” If these are the only two possibilities you see, then if you decide it’s terrible, you won’t be able to enjoy a pleasant experience at work when it comes along. “He either loves me or he doesn’t care about me at all.” Well, you get the idea: limited options; only one “right” answer or course of action.

Divergent Thinking. By contrast, divergent means “developing in different directions” and so divergent thinking opens your mind in all directions. This opens possibilities in your life because it leads you to look for options that aren’t necessarily apparent at first. The figure on the right side of the above diagram illustrates divergent thinking.

A divergent thinker is looking for options as opposed to choosing among predetermined ones. So instead of deciding that the two choices for me are “sick” or “healthy,” I would ask myself if there are other options, like the possibility that I could be sick and healthy at the same time. It took me many years to see that this was indeed an option (and it became the major theme of my book, How to Be Sick).

When I became chronically ill, I was mostly a convergent thinker. As a result, for many years after I could no longer work, I felt useless, as if my life had no meaning. I slowly emerged from this dark place by becoming more of a divergent thinker, but I still have to work at it by reminding myself: “Look for options you haven’t considered.”

Here’s an example of how switching from convergent to divergent thinking can make our lives easier and lead to fruitful results. When How to Be Sick was published in 2010, I began to get requests for me to read it as an audiobook. I decided I could do it if I just bought a good microphone and some computer software. I announced on Facebook that there would soon be an audiobook, and I responded to the many email requests I’d received by telling people that an audiobook was in the works.

But when I undertook the project, it proved to be much more difficult than I’d anticipated. Without going into details, suffice it to say that there’s a reason most book narrators are professionally trained (or, at least, not limited in their energetic resources!). As I faltered, I saw only two options: push forward, at great expense to my health, or not do it at all. I did the latter, though not without enduring self-recrimination over letting people down.

It took me over two and a half years to put on my divergent thinking cap. I thought: “Maybe there are more options than just ‘audiobook read by me’ or ‘no audiobook.’” I began to do some online research and found a website that matches books with narrators. (It’s a spin-off from Amazon and audible.com.) From my laptop, I signed up, submitted a short excerpt from the book, and “auditioned” narrators. They would record the excerpt, upload the audio file to the website, and I’d get an email notifying me there was a new audition.

I listened to over a dozen auditions (it was fun!) and then one day, I heard the voice that was perfect for the book. Deon reads How to Be Sick as if she wrote it; she seems to understand the intention behind every word I wrote. And so, we’re on our way to producing an audiobook. That’s an example of the value of divergent thinking—thinking in terms of possibilities instead of in terms of limited choices.

As for friends, I began to think that there might be more than the two options I’d settled on (that those who stuck around cared about me and those who didn’t stick around didn’t care about me). When I opened my mind to other possibilities, I discovered that some friends who haven’t stuck around do indeed still care about how I’m doing; they’ve simply fallen out of contact for other reasons. One of them is too uncomfortable around illness because of her experience with her own parents suddenly taking ill and dying within a few months. Another, unbeknownst to me, developed serious health problems of her own.

Consider whether you tend to be a convergent thinker or a divergent one. If you’re the former, you’re likely to see limited choices instead of being open to possibilities. If you’d like to work on becoming more of a divergent thinker, I have two suggestions.

First, whenever you’re considering a course of action or forming an opinion about something or someone (including yourself), pay attention to whether you’re assuming you have limited choices: it’s this or it’s that; she’s like this or she’s like that; I’m like this or I’m like that. Second, use the Thich Nhat Hanh practice I’ve written about before: ask yourself, “Am I sure?” before you assume you’ve considered all the alternatives available to you or before you make a judgment about something or someone. Once you’ve tried these two suggestions, start looking for more possibilities.

Open your mind and see where it takes you!

© 2013 Toni Bernhard. Thank you for reading my work. I’m the author of three books.

How to Live Well with Chronic Pain and Illness: A Mindful Guide (2015)

How to Wake Up: A Buddhist-Inspired Guide to Navigating Joy and Sorrow (2013)

How to Be Sick: A Buddhist-Inspired Guide for the Chronically Ill and their Caregivers (2010)

How to Put a Stop to Catastrophic Thinking

Learn to skillfully respond to the cognitive distortion of catastrophizing.

Source: How to Put a Stop to Catastrophic Thinking

Cognitive distortions are errors in thinking. The phrase refers to our irrational and exaggerated thoughts: thoughts that have no basis in fact, but which we believe anyway. These distorted thoughts then become the breeding ground for stressful emotions. The result is anxiety and an undermining of our ability to feel good about life or ourselves. In September 2014, I wrote a post entitled “How Distorted Thinking Increases Stress and Anxiety.” You might want to have a look at it. One of those distortions is called catastrophizing, and it is the subject of this post.

Catastrophizing is also called magnifying. This is a good way to think of it, because it emphasizes how we often magnify things way out of proportion, dreaming up nightmare scenarios that we believe without question.

The first and second arrows

Catastrophizing is an example of thoughts (and the emotions they give rise to) that the Buddha called the “second arrow.” The first arrow refers to those familiar, unpleasant experiences that are an inevitable part of everyday life, from the mundane (a light bulb that burns out when we flip on the switch) to more profound unpleasant experiences (waking up with a flare in a chronic pain condition). We could each make a list of our own “first arrow” experiences. Some days we’re bombarded with them, again from the relatively minor (a computer crash) to the major (the loss of a job… or a friend). Life is hard enough just coping with the first arrow, that’s for sure.

The second arrow is an unnecessary one. Here’s how it happens. We experience the unpleasantness of the first arrow, but instead of simply acknowledging its presence and, if possible, trying to make things better (e.g., change the light bulb, take a warm shower to try and ease our physical pain), we engage in a stream of stressful thoughts and emotions about that unpleasant “first arrow” experience. Although the Buddha didn’t use the word catastrophize, it’s an example of how we shoot ourselves with a second arrow by conjuring up worst-case scenarios instead of just taking care of the business at hand. In other words, we make things worse for ourselves.

It’s as if we’re looking at an unpleasant experience through binoculars, and so it appears way out of proportion to us. I used a light bulb burning out as an example, because it’s a trivial experience. And yet, when it’s happened to you, how often do you say without irritation: “Oh, well, the light bulb burned out; no big deal, I’ll just change it”?

If you’re like me, when you encounter an unpleasant experience, you tend to add a negative reaction, which may not always rise to the level of catastrophizing, but can if it takes on this type of form: “Why do light bulbs always burn out on me? The new one will probably burn out in a few days — on me again.” It’s this second arrow, magnifying an unpleasant experience and making it into a catastrophe, that keeps us from feeling at peace with our lives. After all, if we changed the light bulb mindfully — paying careful attention as we get a new bulb, unscrew the old bulb, screw in the new one, and perhaps even take a moment to reflect on the wonders of electricity — we might even enjoy the experience.

And what about that “first arrow” unpleasant experience of waking up with a flare in our chronic pain levels? Instead of keeping calm and waiting to see if the pain subsides as the morning wears on, there’s a tendency to catastrophize by convincing ourselves that this is our new normal. We say to ourselves: “This pain will never go away; I’ll be miserable the rest of my life.” That’s the experience of the second arrow and, not surprisingly, it tends to be a source of stress and anxiety.

Through habits we’ve developed over our lifetimes, we seem to be quite adept at making ourselves miserable by magnifying our disappointments and frustrations until they seem like catastrophes. Here’s another simple example. I’ve been teaching myself some new embroidery stitches. A few months ago, I was embroidering an underwater scene and wanted to use a Cretan stitch to make a fish. But I couldn’t do it. Every fish I tried looked awful. Instead of feeling compassion for how hard this was proving to be, I started spinning irrational stories about my attempts: “I’ll never figure out this stitch. I might as well throw the whole piece away.” Catastrophizing.

How to stop the tendency to catastrophize

To reverse the tendency to catastrophize, put your experience into perspective. Start by reminding yourself that unpleasant experiences — not having things go as you want — are an inevitable part of life. Then reframe your thoughts regarding whatever unpleasant experience is threatening to set off that second arrow. Sticking with my examples, remind yourself that everyone has to change light bulbs sometimes; it’s no big deal. Remind yourself that just because you’re in pain this morning doesn’t mean you’ll be in pain every morning. Everything changes, including pain levels. Remind yourself that some embroidery stitches are hard to learn, and besides, an underwater scene doesn’t have to have a fish in it anyway — put in a crab.

In other words, put a stop to this type of distorted thinking by first becoming aware that you’re engaged in it, and then countering that thinking by adopting a reasonable perspective on what’s going on. Sometimes I even say to myself: “Stop! You’re going down that catastrophizing road again, and it’s only going to make an unpleasant situation worse.” Gently saying, “Stop!” like this can interrupt your tendency to start spinning those “second arrow” worst-case scenarios.

I’m not saying this will always be easy. You may have a lifelong habit of blowing things out of proportion and assuming the worst, often about yourself. The good news is that habits can change, and the first step is to become aware of how you’re making life more difficult for yourself by magnifying unpleasant experiences. I recommend that you start small — maybe with that light bulb or something you’ve spilled. The better you get at keeping calm and not going straight to exaggerating and catastrophizing over minor unpleasant experiences (“I’m always spilling things and always will”), the easier it will be to maintain your peace of mind when you’re struck by harsher first arrows.

© 2017 Toni Bernhard. Thank you for reading my work. You might also find these helpful: “You Don’t Have to Believe Your Thoughts” and “What Type of Thinker Are You?”