Why Our Brains Love The Curve


You’ve seen the advertisements all around the Web: the curve is coming to a TV near you.  It seems at first glance a simple innovation, in some ways even a predictable one. Watching commercials for Samsung’s new line of televisions, I find myself wondering why it’s taken this long for curved screens to arrive. And it’s altogether possible that I’m asking that question because to my brain—and quite likely to yours—the curve simply fits.

Behavioral researchers have known this for some time: people consistently show a preference for curves over hard lines and angles. Whether the object is a wristwatch, a sofa, or a well-designed building, curves curry favor. Neuroscience, following the lead of behavioral science, is on the hunt for a neurally hardwired preference for curvy elegance.

Of course, televisions with curved screens offer technical advantages beyond aesthetics. Curvature reduces reflection, making the viewing experience easier on the eyes, and simultaneously creates the illusion that the viewer is surrounded by the screen. The so-called “sweet spot” at the center of the illusion is a comfortable magnet for our attention. (Most of this has been known since the 1950s with the introduction of the first curved movie theater screen, the Cinerama.)

But aside from those advantages, studies suggest that merely viewing the curves of an object triggers relief in our brains – an easing of the threat response that keeps us on guard so much of the time.  While hard lines and corners confer a sense of strength and solidity, they are also subtly imposing. Something about them keeps our danger-alert system revved.

Research shows that we subjectively interpret sharp, hard visual cues as red flags in our environment (with corresponding heightened activity in our brain’s threat tripwire, the amygdala).  Even holding a glass with pronounced hard lines and edges has been shown to elevate tension across the dinner table. Curves take the perceptual edge off.

The softness of contour may also play out in our brains not unlike an emotionally satisfying song or poem. A new discipline known as neuroaesthetics—an ambitious vector between neuroscience and the fine arts—is exploring this idea, shedding light on why the love of curves is seducing technology manufacturers. Recent research under this new banner suggests that curved features in furniture, including TVs, trigger activity in our brains’ pleasure center. We derive a buzz from curves much as we do when viewing a beautiful work of art.

The overlap of visual impact from things as commonplace as chairs and tables and TVs, and emotional impact at such a high level (a level we’d normally reserve for art and music) suggests that the ordinary elements in our environments aren’t so ordinary after all. Skilled industrial designers have known this for quite some time, but now science is adding an explanatory dimension that makes the point all the more compelling.

And if it’s true, as the research indicates, that the curve is an emotional elixir for anxiety-prone brains, the latest trend is likely to take hold and transform our interface with all things digital.  We may eventually look back on the non-curved days of technology the way we think of black and white television now. The term “flat-screen TV” will go the way of “rotary dial phone”.

Having said that, the true test of the curvy trend’s appeal won’t happen until the price point drops considerably. Curves may calm, but the prices on many curved screen TVs are anything but calming. We’ll have to wait a while for that to play out to know whether the brain's love of the curve translates into an enduring shift in technology.

You can find David DiSalvo on Twitter @neuronarrative and at Forbes. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on July 21, 2014 .

How Your Blood Sugar Could Be Wrecking Your Relationships


We’ve all known people who should have to wear a flashing red DANGER! sign if they miss lunch, though even without the warning we instinctively know to steer clear if someone is running on empty. A grumbling stomach means a drop in blood sugar, and through excruciating experience most of us realize that means trouble. But could the blood sugar-anger connection lurk behind more relationship conflicts than we realize?

A new study probed that question with a research methodology as painfully funny as it was effective. Researchers rounded up 107 married couples for a 21-day couples’ boot-camp to draw a direct line between blood glucose (aka circulating blood sugar) and aggression.

First they asked the couples to complete a relationship questionnaire that evaluated their level of satisfaction with their marriages, which allowed the research team to control for variables like how rocky the marriage was to begin with. They also measured all of the participants’ blood glucose levels to set a benchmark, and continued to measure the levels throughout the 21-day study.

The researchers predicted that drops in blood sugar would consistently correlate with heightened aggression between the spouses. Aggression was defined in two ways: aggressive impulse and aggressive behavior.  The distinction was meant to identify aggression in thought versus action, because aggression rarely happens in a vacuum—there’s usually a thought impulse that precedes it, even if that impulse doesn’t occur immediately before the action but compounds over time.

To test aggressive impulse, the researchers gave participants a voodoo doll and 51 pins, with instructions to place as many pins in the doll every night as needed to show how angry they were with their spouse. A light conflict day might get just a couple pokes, while a “cover the kids' eyes and ears” day might warrant the full 51 to the head.

To test aggressive behavior, the researchers had the spouses wear headphones while they competed against each other in 25-part tasks. After each task, the winner decided how loudly and for how long to blast the loser with a noise through the headphones.

At the end of the 21 days, with riddled voodoo dolls and ringing ears aplenty, the hypothesis was borne out. The lower the level of blood glucose, the more pins the spouses poked into the dolls, and the louder and longer they blasted their partners through the headphones.

The study provides a couple of worthwhile takeaways. First, quoting Brad Bushman, professor of psychology and communication at Ohio State University and lead study author, “Before you have a difficult conversation with your spouse, make sure you're not hungry."  Simple to say, harder to do.

Second, and the reason why that’s such good advice, is our brains are energy hogs. "Even though the brain is only two percent of our body weight, it consumes about 20 percent of our calories. It is a very demanding organ when it comes to energy," added Bushman. When the brain is short on energy, it’s also short on self-control, and the door is opened for aggressive impulses and behavior to take center stage. And if the study results are a true indication, we’re red lining our self-control more often than we realize.

I’d love to see a follow-up study that attempts to track these results against the blood sugar rollercoaster associated with fast food-laden diets. I have a suspicion that glucose-related aggression isn’t solely about how much or little food we eat, but also the sorts of food we eat. Just a hunch, but it stands to reason that shoveling in foods that cause our blood sugar levels to spike and crash day after day may also trigger spousal (and other) explosions. A little food for thought while you're sitting in the drive-thru.

The study was published in the Proceedings of the National Academy of Sciences.

You can find David DiSalvo on Twitter @neuronarrative, at his website The Daily Brain, and on YouTube at Your Brain Channel. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on June 15, 2014 .

When It Comes To Choosing Mates, Women And Men Often Get Framed


If I tell you that seven of ten doctors believe a medication is helpful, the positive weight of the seven endorsements will trump potential negatives. But if I tell you that three of ten doctors believe that a medication should be avoided, the weight of those three negative critiques will overpower potential positives.  The information in either case is the same – the only difference is how it's framed.

Our susceptibility to the framing bias has been demonstrated in study after study (most notably by Nobel Prize-winning psychologist Daniel Kahneman), and now a new study by Concordia University researchers shows how framing influences our selection of love interests.

Hundreds of study participants were given positively and negatively framed descriptions of potential partners. For example:

"Seven out of 10 people who know this person think that this person is kind." [positive frame]

versus

"Three out of 10 people who know this person think that this person is not kind." [negative frame]

The researchers tested the framing effect across six attributes known from previous research to rank high in importance to men and women – four that matter more to one sex than the other, and two that matter to both:

  • Attractive body (usually more important to men)
  • Attractive face (usually more important to men)
  • Earning potential (usually more important to women)
  • Ambition (usually more important to women)
  • Kindness (equally important to both)
  • Intelligence (equally important to both)

Participants evaluated both “high-quality” (e.g. seven out of 10 people think this person is kind) and “low-quality” (e.g. three out of 10 people think this person is kind) prospective mates for each of these attributes, in the context of a short-term fling or a long-term relationship.

What the research team found is that, more often than not, women were significantly less likely to show interest in men described with the negative frame, even though it conveyed exactly the same information as the positive frame.

"When it comes to mate selection, women are more attuned to negatively framed information due to an evolutionary phenomenon called 'parental investment theory,'" says study co-author and Concordia marketing professor Gad Saad, a noted researcher on the evolutionary and biological roots of consumer behavior.

"Choosing someone who might be a poor provider or an unloving father would have serious consequences for a woman and for her offspring. So we hypothesized that women would naturally be more leery of negatively framed information when evaluating a prospective mate.”

In particular, women were most susceptible to the framing bias when evaluating a man’s earning potential and ambition.

Men, on the other hand, fell prey to framing most often when evaluating a woman’s physical attractiveness.

While these results at first seem to reinforce stereotypes about how women and men seek mates, they make sense in light of what we know about evolutionary psychology. And they provide an important takeaway for both sexes: before you draw a final conclusion about a would-be mate, consider whether you’re being overly influenced by how their good or bad attributes have been framed, either by others or by the person him- or herself. Better to check your bias early than suffer its consequences later.

The study was published in the journal Evolution and Human Behavior.

You can find David DiSalvo on Twitter @neuronarrative, at his website The Daily Brain, and on YouTube at Your Brain Channel. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on June 8, 2014 .

Could Cooperation and Corruption Originate with the Same Hormone?

We humans contend with quite a few wicked flip sides in our personal and interpersonal lives. Gratitude can transform into resentment. Concern can morph into apathy. Love can quickly become hate. New research digs deeper into a similar neurobiological duality that can, and frequently does, run rampant in groups: the Jekyll and Hyde of cooperation and corruption.

Researchers hypothesized that oxytocin—the same hormone that previous studies have linked to collaboration and altruism—can predispose us to acting dishonestly if we think doing so will benefit our group of choice. A “group” in this case means anyone to whom we feel some sense of obligation, be it family, coworkers, peers, political cronies or our Friday night craft beer buddies.

In our day-to-day lives, oxytocin is thought to play a big role in how closely bonded we feel to our group. It isn’t just the “cuddle hormone” (often discussed in studies about love and affection) but also the group-cohesion hormone.

To test the hypothesis, the research team gave one group of healthy male participants a dose of oxytocin via nasal spray and another group a placebo nasal spray (neither the participants nor the researchers knew which participants received which spray). The participants were then asked to toss a coin multiple times and make predictions on whether they’d flip heads or tails, and then self-report on the results. How well they did, they were told, would win or lose money for their fellow group members. How they reported—honestly or dishonestly—was kept anonymous, assuring the participants that how they chose to respond wouldn’t reflect back on them personally.
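The design invites a simple statistical check. Honest predictions of a fair coin succeed half the time, so a group’s average reported win rate reveals lying in aggregate even when every individual report stays anonymous. Here’s a quick sketch of that logic (my own illustration, not the study’s analysis – the misreporting rates are made up):

    import random

    def reported_win_rate(n_people, n_flips, p_misreport):
        """Fraction of flips reported as successful predictions.
        p_misreport is the chance a missed prediction gets reported as a hit."""
        hits = 0
        for _ in range(n_people):
            for _ in range(n_flips):
                predicted_correctly = random.random() < 0.5
                if predicted_correctly or random.random() < p_misreport:
                    hits += 1
        return hits / (n_people * n_flips)

    # Honest groups should hover near the 50 percent chance rate.
    print(round(reported_win_rate(60, 10, p_misreport=0.0), 2))   # ~0.50
    print(round(reported_win_rate(60, 10, p_misreport=0.3), 2))   # ~0.65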

We might guess that participants would lie more often only when they individually stood to benefit – but instead, participants given oxytocin lied about the coin flips significantly more than the placebo group when doing so gained money for their fellow group members. And they lied for the group even if they thought that the favor wouldn't be reciprocated.

To find out how participants would react if they thought they’d benefit individually, the researchers put another group through the same testing conditions but told participants that the results of their predictions would only win or lose them money, with no group benefit or loss attached. The results showed that oxytocin did not influence participants to lie any more than those in the placebo group.

In other words, oxytocin promoted lying for group but not individual benefit.

The study has a few limitations, the most obvious of which is that it used only male participants. Whether or not oxytocin would influence females toward group dishonesty is impossible to tell from these results.

But, at least for men, it seems that higher levels of oxytocin potently affect decisions to lie for the group’s benefit. This may help explain the “you go, I go, we all go” nature of fraternal groups. And the results highlight the role of group bonding in forging hard-to-crack corruption. Last year's hit movie The Wolf of Wall Street – a true tale about a group of corrupt stockbrokers making an obscene amount of ill-gotten money and lying to ensure that no one got caught (at least for a while) – comes to mind as a vivid illustration.

The study was published in The Proceedings of the National Academy of Sciences.

You can find David DiSalvo on Twitter @neuronarrative, at his website The Daily Brain, and on YouTube at Your Brain Channel. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on May 18, 2014 .

Your Brain Channel Has Launched!

Hi everyone! Just wanted to let you know that I've launched a new science video channel on YouTube called Your Brain Channel. We'll be featuring brief "News You Can Use" video segments on a range of science-related topics. Please check out the first couple of entries on the green tea-memory connection and testing the sleep debt theory. Plenty more videos to come, so check back often. Thanks! 

 

The Connection Between Playing Video Games and a Thicker Brain

For all the negative news about the alleged downsides of playing video games, it’s always surprising to come across research that shows a potentially huge upside. A new study fills the bill by showing that heavy video game play is associated with greater “cortical thickness” – a neuroscience term for the thickness of the brain's outer layer of gray matter in specific areas.

Researchers studied the brains of 152 adolescents, both male and female, who averaged about 12.6 hours of video gaming a week. As one might guess, the males, on average, played more than the females, but all of the participants spent a significant amount of time with a gaming console. The research team wanted to know if more time spent gaming correlated with differences in participants’ brains.

What they found is that the brains of adolescents who spent the most time playing video games showed greater cortical thickness in two brain areas: the left dorsolateral prefrontal cortex (DLPFC) and the left frontal eye field (FEF).

The prefrontal cortex is often referred to as our brain’s command and control center. It’s where higher order thinking takes place, like decision-making and self-control. Previous research has shown that the DLPFC plays a big part in how we process complex decisions, particularly those that involve weighing short-term objectives against long-term implications. It’s also where we make use of our brain’s working memory resources – the information we keep “top of mind” for quick access when making a decision.

The FEF is a brain area central to how we process visual-motor information and make judgments about how to handle external stimuli. It’s also important in decision-making because it allows us to efficiently figure out what sort of reaction best suits what’s happening around us. What we call “hand-eye coordination” relies in part on this process.

Together, the DLPFC and FEF are crucial players in our brain’s executive decision-making system. Greater “thickness” in these brain areas (likely reflecting more connections between brain cells) indicates a greater ability to juggle multiple variables, whether those variables have immediate or long-term implications, or both.

While this study doesn’t quite show that playing hours of video games each week causes these brain areas to grow thicker, the correlation is strong – strong enough to consider the possibility that gaming is sort of like weight lifting for the brain.

And that, even more than the video game connection, is what makes this study really interesting. It suggests that the popular terms “brain training” and "brain fitness" are more than marketing ploys to sell specialized software. If it’s true that playing video games is not unlike exercise that beefs up our brain’s decision-making brawn, then it logically follows that we can improve our brains not just functionally but physically, with practices designed for the purpose. Future research will continue exploring precisely that possibility.

The study was published in the online journal PLoS ONE.

You can find David DiSalvo on Twitter @neuronarrative and at his website, The Daily Brain. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on May 1, 2014 .

Can Chocolate In A Pill Boost Heart Health?

We've been hearing about the alleged health benefits of eating dark chocolate for the last decade or so, including lower blood pressure and improved cholesterol levels. Those claims are about to be put to an exhaustive test in a study of 18,000 adults in Boston and Seattle. But instead of eating chocolate bars every day, the study participants will take capsules containing concentrated amounts of the bio-active chemicals in cocoa beans, known as cocoa flavanols.

If study results are consistent with previous studies showing health benefits of eating cocoa flavanols, it will be a semi-sweet outcome for chocolate lovers. Generally, the higher the level of cocoa, the less sweet the chocolate -- though even chocolate with 72% cocoa contains in the neighborhood of 240 calories per serving, including 10 grams of sugar and 18 grams of fat.

The study participants will theoretically get all of the good stuff without the extra calories from fat and sugar.  Each participant will take two flavorless capsules a day containing 750 milligrams of cocoa flavanols (or dummy pills for those in the control group) for four years. Over that time, participants' heart health will be monitored to determine if the mega dose of cocoa does what previous, smaller studies indicate. To ingest the same amount of cocoa flavanols as the study participants would require eating almost five bars of dark chocolate a day.
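A quick back-of-the-envelope check on that "almost five bars" figure (the flavanol content per bar is my assumption – published values vary widely):

    daily_flavanols_mg = 750            # the trial's daily capsule dose
    flavanols_per_bar_mg = 160          # assumed for a typical dark chocolate bar
    print(daily_flavanols_mg / flavanols_per_bar_mg)   # ~4.7 bars a day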

Cocoa is thought to benefit heart health by acting as a vasodilator, meaning it triggers relaxation of muscle cells within blood vessel walls. Relaxed blood vessels naturally widen, resulting in greater blood flow and decreased blood pressure.

The latest research is being funded by Mars Inc., makers of M&Ms and other candies, and the National Heart, Lung and Blood Institute. Mars's co-sponsorship of the study will raise red flags with critics, but it’s worth noting that the company has funded cocoa flavanol research since the 1990s, and much of what we know about the possible benefits of cocoa has emerged from Mars-supported studies.

You can find David DiSalvo on Twitter @neuronarrative and at his website, The Daily Brain. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on April 28, 2014 .

How A Tiny Bit Of Procrastination Can Help You Make Better Decisions

Decision-making is what you might call “practical science.” Findings on how we make decisions have direct applicability to life outside the psychology lab, and in recent years there’s been quite a lot said about this most commonplace, yet complicated, feat of mind. A new study adds to the discussion by suggesting that a wee bit of procrastination can make us better decision-makers.

Researchers from the Columbia University Medical Center wanted to know if they could improve decision accuracy by inserting just a smidgen more time between peoples’ observation of a problem and their decision on how to respond.

The research team conducted two experiments to test this hypothesis. First, they asked study participants to make a judgment about the direction of a cluster of rapidly moving dark dots on a computer monitor. As the dots (called the “target dots” in the study) traveled across the screen, participants had to determine if the overall movement was right or left. At the same time, another set of brighter colored dots (the “distractor dots”) emerged on the screen to obscure the movement of the first set. Participants were asked to make their decisions as quickly as possible.

When the first and second set of dots moved in generally the same direction, participants completed the task with near-perfect accuracy. When the second set of dots moved in a different direction than the first, the error rate significantly increased. Simple enough.

The second experiment was identical to the first, except this time participants were told to make their decisions when they heard a clicking sound. The researchers varied the clicks to be heard between 17 and 500 milliseconds after the participants began watching the dots – a timespan chosen to mimic real-life situations, such as driving, where events happen so quickly that time seems almost imperceptible.

The research team found that when participants’ decisions were delayed by about 120 milliseconds, their accuracy significantly improved.

"Manipulating how long the subject viewed the stimulus before responding allowed us to determine how quickly the brain is able to block out the distractors and focus on the target dots," said Jack Grinband, PhD, one of the study authors. "In this situation, it takes about 120 milliseconds to shift attention from one stimulus, the bright distractors, to the darker targets."

The researchers were careful to distinguish “delaying” from “prolonging” the decision process. There seems to be a sweet spot that allows the brain just enough time to filter out distractions and focus on the target. If there’s too little time, the brain tries to make a decision while it’s still processing through the distractions. If there’s too much time, the process can be derailed by more distractions.

If you’re wondering how anyone can actually do this with so little time to make a decision, the answer—suggested by this study—is practice. Just as the participants were cued by the clicks to make a decision, it would seem that we have to train ourselves to delay just long enough to filter distractions.

Said another way, doing nothing--for just a tiny amount of time--gives the brain an opportunity to process and execute. (In my book, Brain Changer, I refer to this as the "awareness wedge" because it's a consciously inserted "wedge" between the immediacy of whatever situation we're facing and our next action.)

This research also underscores just how dangerous it can be to add distractions to the mix—like using a phone while driving—when our brains already need a cushion of time to filter the normal array of distractions we experience all the time.

The study appears in the online journal PLoS One.

You can find David DiSalvo on Twitter @neuronarrative and at his website, The Daily Brain. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on April 6, 2014 .

The Good and Bad News About Your Sleep Debt

Sleep, science tells us, is a lot like a bank account with a minimum balance penalty. You can short the account a few days a month as long as you replenish it with fresh funds before the penalty kicks in. This understanding, known colloquially as “paying off your sleep debt,” has held sway over sleep research for the last few decades, and has served as a comfortable context for popular media to discuss sleep with weary-eyed readers and listeners.

The question is – just how scientifically valid is the sleep debt theory?

Recent research targeted this question by testing the theory across a few things that sleep, or the lack of it, is known to influence: attention, stress, daytime sleepiness, and low-grade inflammation. The first three are widely known for their linkage to sleep, while the last—inflammation—isn’t, but should be. Low-grade tissue inflammation has been increasingly linked to a range of health problems, with heart disease high on the list.

Study participants were first evaluated in a sleep lab for four nights of eight-hour sleep to establish a baseline. This provided the researchers with a measurement of normal attention, stress, sleepiness and inflammation levels to measure against.

The participants then endured six nights of six-hour sleep (a decent average for someone working a demanding job and managing an active family and social life). They were then allowed three nights of 10-hour catch-up sleep. Throughout the study, participants’ health and ability to perform a series of tasks were evaluated.
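Before getting to the results, it's worth running the schedule through the bank-account metaphor's own arithmetic – a simple ledger, assuming the 8-hour baseline is the nightly requirement:

    required = 8
    debt   = 6 * (required - 6)    # six 6-hour nights -> 12 hours of debt
    repaid = 3 * (10 - required)   # three 10-hour nights -> 6 hours repaid
    print(debt, repaid, debt - repaid)   # 12 6 6

Even on this naive accounting, the catch-up nights repay only half the hours lost. The question is whether the body keeps its books the same way.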

Sleep debt theory predicts that the negative effects from the first six nights of minimal sleep would be largely reversed by the last three nights of catch-up sleep – but that’s not exactly what happened.

The analysis showed that the six nights of sleep deprivation had a negative effect on attention, daytime sleepiness, and inflammation as measured by blood levels of interleukin-6 (IL-6), a biomarker for tissue inflammation throughout the body — all as predicted. It did not, however, have an effect on levels of the stress hormone cortisol—the biomarker used to measure stress in the study—which remained essentially the same as baseline levels.

After three nights of catch-up sleep, daytime sleepiness returned to baseline levels – score one for sleep debt theory. Levels of IL-6 also returned to baseline after catch-up – another score in the theory’s corner. Cortisol levels remained unchanged, but that’s not necessarily a plus for the theory (more on that in a moment).

Attention levels, which dropped significantly during the sleep-deprivation period, didn't return to baseline after the catch-up period. That’s an especially big strike against the theory, since attention, perhaps more than any other measurement, directly affects performance. Combined with the many other draws on our attention—like using a smartphone while trying to drive—minimal sleep isn’t just a hindrance, it’s dangerous, and this study tells us that sleeping heavy on the weekends won’t renew it.

Coming back to the stress hormone cortisol, the researchers point out that its level remaining relatively unchanged probably indicates that the participants were already sleep deprived before they started the study. Previous research has shown a strong connection between cortisol and sleep; the less sleep we get, the higher the level of the stress hormone circulating in our bodies, and that carries its own set of health dangers. This study doesn’t contradict that evidence, but also doesn’t tell us one way or the other if catch-up sleep decreases cortisol levels.

The takeaway from the study is that catch-up sleep helps us pay off some, but by no means all of our sleep debt. And given the results on impaired attention, another takeaway is that it’s best to keep your sleep-deprived nights to a minimum. Just because you slept in Saturday and Sunday doesn’t mean you’ll be sharp Monday morning.

You can find David DiSalvo on Twitter @neuronarrative and at his website, The Daily Brain. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on March 15, 2014 .

Balancing the Self-Control Seesaw

Imagine a seesaw in your brain. On one side is your desire system, the network of brain areas related to seeking pleasure and reward. On the other side is your self-control system, the network of brain areas that throw up red flags before you engage in risky behavior. The tough questions facing scientific explorers of behavior are what makes the seesaw too heavy on either side, and why is it so difficult to achieve balance?

A new study from University of Texas-Austin, Yale and UCLA researchers suggests that for many of us, the issue is not that we’re too heavy on desire, but rather that we’re too light on self-control.

Researchers asked study participants hooked up to a magnetic resonance imaging (MRI) scanner to play a video game designed to simulate risk-taking. The game is called Balloon Analogue Risk Task (BART), which past research has shown correlates well with self-reported risk-taking such as drug and alcohol use, smoking, gambling, driving without a seatbelt, stealing and engaging in unprotected sex.

The research team used specialized software to look for patterns of activity across the brain that preceded someone making a risky or safe decision while playing the game.

The software was then used to predict what other subjects would choose during the game based solely on their brain activity. The results: the software accurately predicted people's choices 71 percent of the time.

What this means is that there’s a predictable pattern of brain activity associated with choosing to take or not take risks.

"These patterns are reliable enough that not only can we predict what will happen in an additional test on the same person, but on people we haven't seen before," said Russ Poldrack, director of UT Austin's Imaging Research Center and professor of psychology and neuroscience.

The especially intriguing part of this study is that the researchers were able to “train” the software to identify specific brain regions associated with risk-taking. The results fell within what’s commonly known as the “executive control” regions of the brain that encompass things like mental focus, working memory and attention. The patterns identified by the software suggest a decrease in intensity across the executive control regions when someone opts for risk, or is simply thinking about doing something risky.

"We all have these desires, but whether we act on them is a function of control," says Sarah Helfinstein, a postdoctoral researcher at UT Austin and lead author of the study.

Coming back to the seesaw analogy, this research suggests that even if our desire system is level, our self-control system appears to slow down in the face of risk; less intensity on that side of the seesaw naturally elevates intensity on the other side.

And that’s under normal conditions. Add variables like peer pressure, sleep deprivation and drug and alcohol use to the equation--all of which further handicap self-control--and the imbalance can only become more pronounced.

That’s what the next phase of this research will focus on, says Helfinstein. "If we can figure out the factors in the world that influence the brain, we can draw conclusions about what actions are best at helping people resist risks.”

Ideally, we'd be able to balance the seesaw -- enabling consistently healthy discretion as to which risks are worth taking. While it's evident that too much exposure to risk is dangerous, it's equally true that too little exposure to risk leads to stagnation.

We are, after all, an adaptive species. If we're never challenged to adapt to new risks, we stop learning and developing, and eventually sink into boredom, which, ironically, sets us up to take even more radical risks. 

The study appears in the journal Proceedings of the National Academy of Sciences.

You can find David DiSalvo on Twitter @neuronarrative and at his website, The Daily Brain. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on February 28, 2014 .

How to Squeeze Snake Oil From Deer Antlers and Make Millions

If I offered to sell you a liquid extract made from the velvety coating of deer antlers, claiming that it will catalyze muscle growth, slow aging, improve athletic performance and supercharge your libido – I’d expect you'd be a little skeptical. But what if I added that a huge percentage of professional athletes are using the stuff and paying top dollar, $100 or more an ounce, and swear up and down that just a few mouth sprays a day provides all benefits as advertised? Would you be willing to give it a try?

Ever since former Baltimore Ravens star Ray Lewis admitted a few months ago that he used deer antler spray (though subsequently denied it), the market for the stuff has exploded. Some estimates say that close to half of all professional football and baseball players are using it and a hefty percentage of college players as well, to say nothing of the army of weightlifters and bodybuilders that have made the spray a daily part of their routines.

TV journalism bastion 60 Minutes recently ran a special sports segment about "Deer Antler Man" Mitch Ross, the product's highest profile salesman, and the tsunami of buyers for oral deer antler spray and its growing list of celebrity devotees. Without question, deer antler spray has captivated the attention of the sports world and is rapidly pushing into mainstream markets.

Let’s take a look at the science behind the claims and try to find out what’s really fueling the surge in sales for this peculiar product.

The velvety coating of deer antlers is a chemically interesting material. For centuries it’s been used in eastern traditions as a remedy for a range of maladies, and there’s an underlying rationale for why it theoretically could be useful for certain conditions. The velvet coating contains small amounts of insulin-like growth factor 1, or IGF-1, which has been studied for several decades as a clinically proven means to reverse growth disorders in humans. For example, in children born with Laron Syndrome—a disorder that causes insensitivity to growth hormone, resulting in dwarfism—treatment with IGF-1 has been shown to dramatically increase growth rates. IGF-1 is produced largely in response to growth hormone released by the pituitary gland and mediates many of its growth-promoting effects, and in sufficient amounts even synthetically derived IGF-1 can help boost physical growth.

That’s the reason why IGF-1 has been banned by the Food and Drug Administration (FDA) and the World Anti-Doping Agency in certain forms as having similar outcomes to using human growth hormone and anabolic steroids.  The forms these agencies have banned, however, are high-dosage, ultra-purified liquids administered by injection.

Why can’t the FDA and anti-doping agencies ban IGF-1 outright? For the simple reason that the chemical, in trace amounts, is found in things we eat every day: red meat, eggs and dairy products. Every time you eat a juicy ribeye or have a few eggs over easy, you’re ingesting IGF-1.

In the tiny amounts of the substance found in these foods, we may experience a cumulative, positive effect on muscle repair over time, but you’ll never be able to drink enough whole milk in a sitting to experience the anabolic effects you’d get from a syringe full of concentrated and purified IGF-1.

As I mentioned, the velvety substance on growing deer antlers also contains trace amounts of IGF-1, and (along with oddities like powdered tiger bone) has been sold in China for centuries as a traditional cure for several ailments. In traditional Chinese medicine, the antler is divided into segments, each segment targeted to different problems.  The middle segment, for example, is sold as a cure for adult arthritis, while the upper section is sold as a solution for growth-related problems in children. The antler tip is considered the most valuable part and sells for top dollar.

The main source for the market explosion in deer antler spray is New Zealand, which produces 450 tons of deer velvet annually, compared to the relatively small amount produced by the US and Canada: about 20 tons annually. Deer can be killed outright for their antlers, but in New Zealand the more accepted procedure is to anesthetize the deer and remove the antlers at the base. The antlers are then shipped overseas to the growing market demanding them.

The reason why deer antler velvet is usually turned into an oral liquid spray instead of a pill (although it is also sold in pill form around the world) is that the trace proteins in the substance are rapidly broken down by the digestive system, so only a fraction of the already tiny amount actually makes it into the bloodstream. In spray form, IGF-1 can potentially penetrate mucosal membranes and enter the bloodstream intact more quickly. The spray can run anywhere from about $20 for a tiny bottle to $200 for two ounces. Standard doses are several sprays per day, so the monthly costs of using the product are exorbitant.

The question is whether using deer antler spray delivers the benefits its sellers claim. These alleged benefits include accelerated muscle growth and muscle repair, tendon repair, enhanced stamina, slowing of the aging process, and increased libido – a virtual biological panacea of outcomes.

The consensus opinion from leading endocrinologists studying the substance, including Dr. Roberto Salvatori at the Johns Hopkins School of Medicine and Dr. Alan Rogol at the University of Virginia, is that the chances of it delivering on any of these benefits are slim to none. The reason is simply that there's far too little of the substance in even the purest forms of the spray to make any difference.

Think of it this way: If a steak contains roughly the same trace amount of IGF-1 as deer antler velvet, is there any evidence to suggest that eating steak can provide the same array of benefits claimed for deer antler spray? No, there’s not a shred of clinical evidence to support that claim.

And yet, thousands of people are paying close to $200 a bottle for the spray believing that it will deliver these benefits.  With such high-profile celebrity connections as Ray Lewis and golf superstar Vijay Singh, there’s little wonder why the craze has picked up momentum. But in light of scientific evidence, there’s no credible reason to pay $200 or any amount for a bottle of deer antler spray.

Aside from the lack of evidence supporting benefits, it’s unclear what the negative effects may be of using the product long-term.  WebMD reports that the compounds in the spray may mimic estrogen in the body, which could contribute to spawning a variety of cancers or worsening of conditions such as uterine fibroids in women. Elevated estrogen levels in men can throw off hormonal balance and lead to a thickening waistline and a host of related metabolic problems.

The takeaway is this: deer antler spray is the latest high-priced snake oil captivating the market. Not only will it cost you a lot of money and not deliver promised benefits, but it could lead to negative health outcomes. Let the deer keep their antler velvet and keep your cash in your wallet.

You can find David DiSalvo on Twitter @neuronarrative and at his website, The Daily Brain. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on February 21, 2014 .

The Era Of Genetically-Altered Humans Could Begin This Year

By the middle of 2014, the prospect of altering DNA to produce a genetically-modified human could move from science fiction to science reality.  At some point between now and July, the UK parliament is likely to vote on whether a new form of in vitro fertilization (IVF)—involving DNA from three parents—becomes legally available to couples. If it passes, the law would be the first to allow pre-birth human-DNA modification, and another door to the future will open.

The procedure involves replacing mitochondrial DNA (mtDNA) to avoid destructive cell mutations. Mitochondria are the power plants of human cells that convert energy from food into what our cells need to function, and they carry their own DNA apart from the nuclear DNA in our chromosomes where most of our genetic information is stored. Only the mother passes on mtDNA to the child, and it occasionally contains mutations that can lead to serious problems.

According to the journal Nature, an estimated 1 in 5,000-10,000 people carry mtDNA with mutations leading to blindness, diabetes, dementia, epilepsy and several other impairments (the equivalent of 1,000 – 4,000 children born each year in the U.S.). Some of the mutations lead to fatal diseases, like Leigh Syndrome, a rare neurological disorder that emerges in infancy and progressively destroys the ability to think and move.

By combining normal mitochondrial DNA from a donor with the nucleus from a prospective mother’s egg, the newborn is theoretically free from mutations that would eventually lead to one or more of these disorders. While never tried in humans (human cell research on mtDNA has so far been confined to the lab), researchers have successfully tested the procedure in rhesus monkeys.

Last March, the UK Human Fertilization and Embryology Authority wrapped up a lengthy study of safety and ethical considerations and advised parliament to approve the procedure in humans. According to New Scientist magazine, parliament is likely to vote on the procedure by July of this year. If the procedure overcomes that hurdle, it will still take several months to pass into law, but the initial vote will allow researchers to begin recruiting couples for the first human mtDNA replacement trials.

The U.S. is not nearly as close to approving mtDNA replacement as the UK seems poised to do; the U.S. Food and Drug Administration will start reviewing the data in earnest in February.  Among the concerns on the table is whether the mtDNA donor mother could be considered a true “co-parent” of the child, and if so, can she claim parental rights?

Even though the donor would be contributing just 0.1 percent of the child’s total DNA (according to the New Scientist report), we don’t as yet have a DNA benchmark to judge the issue. Who is to say what percentage of a person’s DNA must come from another human to constitute biological parenthood?

Other scientists have raised concerns about the compatibility of donor mtDNA with the host nucleus and believe the push to legalize human trials is premature. By artificially separating mtDNA from the nucleus, these researchers argue, we may be short-circuiting levels of genetic communication that we're only beginning to fully understand.

These are but two of many issues that this procedure will surface in the coming months. One thing is certain: we’re rapidly moving into new and deeper waters, and chances are we're going to need a bigger boat.

You can find David DiSalvo on Twitter @neuronarrative and at his website, The Daily Brain. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

 

Posted on February 9, 2014 .

Why Is Heroin Abuse Rising While Other Drug Abuse Is Falling?

Peter Shumlin, Democratic governor of Vermont, moved heroin addiction to the front burner of national news by devoting his entire State of the State address to his state’s dramatic increase in heroin abuse. Shumlin described the situation as an “epidemic,” with heroin abuse increasing 770 percent in Vermont since 2000.

Vermont is a microcosm of the nation. Across the U.S., heroin abuse among first-time users has increased by more than 70 percent in the last decade, from about 90,000 to 156,000 new users a year, according to the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA).

At the same time, non-medical prescription opiate abuse has slowly decreased.  According to the SAMHSA 2012 National Survey on Drug Use and Health, the number of new non-medical users of pain killers in 2012 was 1.9 million; in 2002 it was 2.2 million. [It bears repeating that these stats are for abuse of non-medical prescription pain killers, not abuse of drugs obtained with a prescription.]

In the same time-frame, abuse of methamphetamine also decreased. The number of new users of meth among persons aged 12 or older was 133,000 in 2012, compared to about 160,000 in 2002.

Cocaine abuse also fell, from over 1 million new users in 2002 to about 640,000 in 2012. Crack abuse fell from over 200,000 users in 2002 to about 84,000 in 2012 (a number that’s held steady for the last three years).

The statistics suggest that heroin has taken up the slack from fall-offs among other major drugs (only marijuana and hallucinogens like ecstasy have held steady or slightly increased among new users over the last decade; not surprising since they’re the drugs of choice among the youngest users, and since pot has been angling toward legalization for the last few years).

Most surprising in this sea of stats is the drop in non-medical prescription opiate abuse overlapping with an increase in heroin abuse. The reason may come down to basic economics: illegally obtained prescription pain killers have become more expensive and harder to get, while the price and difficulty of obtaining heroin have decreased. An 80 mg OxyContin pill runs between $60 and $100 on the street. Heroin costs about $9 a dose. Even among heavy heroin abusers, a day’s worth of the drug is cheaper than a couple hits of Oxy.
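The arithmetic behind that claim, sketched out (the doses-per-day figure is a rough assumption, not a statistic from the sources above):

    oxy_low, oxy_high = 60, 100      # street price of one 80 mg OxyContin pill
    heroin_dose = 9                  # street price per dose
    heavy_day_doses = 5              # assumed for illustration
    print(2 * oxy_low, "-", 2 * oxy_high)   # a couple hits of Oxy: $120-$200
    print(heavy_day_doses * heroin_dose)    # a heavy heroin day: $45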

Laws cracking down on non-medical prescription pain killers have also played a role. The amount of drugs like Oxy hitting the streets has decreased, but the steady flow of heroin hasn’t hiccupped. Many cities are reporting that previous non-medical abusers of prescription pain killers—who are often high-income professionals—have turned to heroin as a cheaper, easier-to-buy alternative.

One conclusion that can be drawn from the stats is that prescription opiates are serving as a gateway drug for heroin, not so much by choice but by default. The market moves to fill holes in demand, and heroin is effectively filling fissures in demand opened by legal pressures and cost.

Another interesting stat is that among first-time drug users, the mean age of initiation for non-medical prescription pain killers and heroin is virtually identical: 22 to 23 years old. That would also support an argument that there’s a cross-over effect from drugs like Oxy to heroin (in contrast, the mean ages for first-time users of pot and ecstasy are 18 and 20, respectively).

Vermont’s heroin problem would seem a foretelling of things to come in the more affluent parts of the country. According to the U.S. Census Bureau, Vermont’s median household income, home ownership rate, and percentage of people with graduate and professional degrees are all higher than the national averages, and Vermont’s percentage of those living at or below poverty level is significantly lower than the national average.

The bottom line: Vermont’s stratospheric heroin increase is happening where the money is, and the national drug abuse trends suggest that the same thing is happening across the country.

You can find David DiSalvo on Twitter @neuronarrative and at his website, The Daily Brain. His latest book is Brain Changer: How Harnessing Your Brain’s Power To Adapt Can Change Your Life.

Posted on February 4, 2014 .

How Video Games Will Help Us Steal Back Our Focus

I’ve become a focus junkie. If I see something written in a legit publication about techniques or technologies to improve mental focus, I freebase it—mainly because the forces draining focus are unrelenting, and I’m convinced that the only way to regain balance is by indulging measures that are just as intense. (My working philosophy: extreme forces call for extreme adaptation, using the best tools and strategies science can afford us.)

Enter author and psychologist Daniel Goleman, popularizer of “Emotional Intelligence”, and author of a new book about the power of focus called, simply, “Focus”.  Goleman is one of my favorite writers in the psychology space because his work is a true example of what I call “science-help” – he’s all about the research. When you glean takeaway knowledge from a Goleman book, you can be sure it’s been tested and credible enough to earn his writer’s brand.

Because I’m also a midnight snacker of business nibblets, I came across Goleman’s latest article in the Harvard Business Review, “The Focused Leader: How effective executives direct their own—and their organization's—attention". The entire piece is well worth the magazine's $17 cover price (or at least buying a PDF reprint online), but I was especially intrigued by a sidebar in the article about a new species of video games designed to help regain our focus in a focus-fragmenting world.

Dave Eggers or Michael Chabon couldn’t come up with a better ironic twist than video games—engaging and entertaining video games, no less(!)—being used to sharpen attention. As Goleman discusses in HBR, neuroscientists at the University of Wisconsin-Madison have grabbed hold of this task like ticks on a deerhound and produced a video game slated for a 2014 release called, fittingly, “Tenacity”. Quoting Goleman:

“The game offers a leisurely journey through any of half a dozen scenes, from a barren desert to a fantasy staircase spiraling heavenward. At the beginner’s level you tap an iPad screen with one finger every time you exhale; the challenge is to tap two fingers with every fifth breath. As you move to higher levels, you’re presented with more distractions—a helicopter flies into view, a plane does a flip, a flock of birds suddenly scud by.”

The objective is the same as that of meditation—to draw attention back to a central point despite the number or intensity of distractions dive-bombing one’s focus.  Goleman adds, “When players are attuned to the rhythm of their breathing, they experience the strengthening of selective attention as a feeling of calm focus, as in meditation.”

University of Wisconsin-Madison researchers see this as just the beginning of a focus-enhancing revolution in digital tech.  Through an initiative called Games+Learning+Society (GLS), they are pioneering efforts that marry entertainment with enrichment, and building it all on a platform of solid science.

The team boasts members with serious science street cred, like neuroscientist Richard J. Davidson, founder of the Center for Investigating Healthy Minds, whose work on neuroplasticity (the brain’s ability to change at the neuronal level) could carry Promethean fire to the video game world.  Davidson is leading research to identify what’s happening in the brains of people who use games like Tenacity, with the hypothesis that the technology will help train our brains for enhanced focus, and—believe it or not—greater kindness.

“Modern neuroscientific research on neuroplasticity leads us to the inevitable conclusion that well-being, kindness and focused attention are best regarded as skills that can be enhanced through training,” says Davidson. “This study is uniquely positioned to determine if game playing can impact these brain circuits and lead to increases in mindfulness and kindness.”

Given the deluge of news about video games leading to violence, the idea that they could make us a bit nicer sounds, well, mighty nice. And the truth is that it's not even far-fetched: it's an outcome sitting at the crossroads of ancient wisdom traditions and focus-enhancing technology—as we learn to more consistently focus our attention, we experience a change in both awareness and attitude. As everyone from the Buddha to David Foster Wallace has observed, once our awareness is enhanced and broadened, we can get out of our heads and interact more conscientiously with others.

That's the pro-social goal that has the Wisconsin team fired up about the focus-enhancing power of digital tech.  According to Constance Steinkuehler, co-director of GLS and associate professor of education at UW-Madison: “We’re looking at pro-social skills, particularly being able to recognize human emotions and then respond to them in some productive fashion, which turns out to be harder than you might think.”

Armed with Davidson’s brain-imaging analysis, the team wants to know if playing the games they’ve designed will foster pro-social adaptation in our noggins.

“We look at pre- and post-test measures and see if there is a difference,” said Steinkuehler. “For example, in Tenacity, our mindfulness app, you might ask yourself ‘Is there a dosage effect? Can we see that more game play has more positive effect on kid’s attention?’”

If that's proven out, then GLS's technology-harnessing work could be the perfect counterbalance to the dubious video-game legacy the news media is so fond of blowhorning: that gaming does little more than foster anti-social behavior, everything from bullying to serial violence.

At a less radical level, the UW-Madison team’s work may also provide an antidote to the insular effects of digital tech. If doses of Tenacity, or similar games, leads to heightened focus and social awareness, then spending time buried in your smartphone or tablet could have an upside beyond accruing more gold and elixir for your barbarian clan.

“There’s this tremendous amount of time and energy investment in games and media,” says Steinkuehler. “So part of what we’ve been trying to figure out is how do we take some of that time and make it beneficial for the people engaged in it? We have examples from television or film of documentaries, of art pieces, of indie films, of shows like Sesame Street, that actually have documented benefits for their viewers. So games are another media, why not use them?”

It’s this pragmatic view of technology, as opposed to the absolutist view that too often creeps into our mindspace, that will eventually win the day. As Sesame Street proved decades ago amidst the clamor of “TV is an anti-educational evil!” fear mongering, we can make use of technology to enrich minds. The difficulty in doing so arises from fighting against path-of-least-resistance thinking—human nature's chronic disease of default—that turns us into willing slaves of our time-chewing vices.

The work of the GLS team and others crafting new uses for digital tech reminds us that how technology ultimately affects us is an outcome we can, and should, influence. If we punt on that responsibility, we shouldn't be surprised at the bad news that invariably follows.  But if we see the responsibility as an opportunity, we'll be surprised at how much good can come from the ones and zeroes in our hands.

David DiSalvo's newest book, Brain Changer, is now available at Amazon, Barnes and Noble and other major booksellers.

Posted on January 26, 2014 .

Study Shows That Electrical Stimulation Can Boost The Brain's Brakes

Using harmless electrical stimulation, researchers have shown that they can boost self-control by amplifying the human brain’s “brakes.”

Researchers from The University of Texas Health Science Center at Houston (UTHealth) and the University of California, San Diego asked study participants to perform simple tasks in which they had to exert self-control to slow down their behavior. While doing so, the team used brain imaging to identify the areas of the participants’ prefrontal cortex (sometimes called the brain’s “command and control center”) associated with the behavior—allowing them to pinpoint the specific brain area that would need a boost to make each participant’s “braking” ability more effective.

They then placed electrodes on the surface of each participant’s brain, over the prefrontal cortex areas linked with the behavior. With an imperceptible, computer-controlled electrical charge, the researchers were able to enhance self-control at the exact moment the participants needed it.

"There is a circuit in the brain for inhibiting or braking responses," said Nitin Tandon, M.D., the study's senior author and associate professor in The Vivian L. Smith Department of Neurosurgery at the UTHealth Medical School. "We believe we are the first to show that we can enhance this braking system with brain stimulation."

To make sure that specifically stimulating the prefrontal cortex was really causing the effect, the researchers conducted a follow-up in which they placed the electrodes on other surface areas of the participants’ brains. Doing so had no effect.

That’s an important point, because it separates this study from past research that used electrical stimulation to disrupt general brain function.  In contrast, this study shows that particular parts of the prefrontal cortex form a self-control circuit that can be externally enhanced.

What also makes this study noteworthy is that it was double-blind: neither the researchers nor the participants knew when or where the electrical charges were being administered. That’s critical because it means the participants could not intentionally slow down their behavior to exaggerate the effect. They were, in a very real sense, being externally controlled by the stimulation, albeit only briefly.

The study has a few caveats. First, all of the participants were volunteers suffering from epilepsy who agreed to be monitored for seizures by hospital staff during the experiment.  Second, there were only four participants—though all four experienced the self-control boosting effect.  Obviously, placing electrodes on the surface of the brain is an invasive procedure, hence the small number of participants.

If this research sounds a little scary to you, you can relax knowing that we're a long way from externally controlling people's behavior. The true value of this study is to demonstrate that the brain's self-control circuit can be amplified, at least under certain conditions.

Placing electrodes on people's brains isn't a practical solution, but eventually the same effect may be triggered with scalp electrodes and, down the road, with medication that targets the self-control circuit. That may one day be promising news for sufferers of behavioral disorders like Tourette’s syndrome and OCD.

The study was published in The Journal of Neuroscience.

David DiSalvo's newest book, Brain Changer, is now available at Amazon, Barnes and Noble and other major booksellers.

Posted on January 16, 2014.

Eat More Of These Four Things For A Stronger, Healthier Brain

Remember these four letters: DDFM.  If it’s easier, think of them as call letters for a cheesy radio station, “Double D FM!” The letters stand for four nutrients critical to brain health that you probably aren’t getting enough of: Vitamin D, DHA, Folate and Magnesium.

Research suggests that our diets are increasingly low in all four, and our brains are suffering for it.

Vitamin D

Why it’s important:  I stumbled across the importance of vitamin D when a routine blood test revealed that my level was low and my doctor recommended that I begin taking three 2,000 IU vitamin D3 supplements a day. I’d always thought being out in the sun was enough to keep vitamin D levels high, because the human body uses sunlight to manufacture the vitamin. But research shows we’re frequently low in this essential vitamin, and that’s potentially dangerous: low levels are associated with free radical damage to brain cells and accelerated cognitive decline. In addition to boosting brain health, there’s also evidence suggesting that vitamin D aids in muscle strength and repair.

How to get more of it:  Eat oily fish like wild salmon* and eggs. You can also get a boost by eating cheese; if you go this route I recommend Swiss cheese, because it also contains a high level of conjugated linoleic acid (CLA), which has shown promise in helping reduce abdominal fat. Another decent source is Greek yogurt, but avoid brands with excess sugar (I’m not recommending milk for that reason – it’s naturally high in sugar). If your vitamin D levels are especially low—and it’s best to determine that via a blood test—consider taking a vitamin D3 supplement at a level your doctor recommends.**

DHA

Why it’s important:  DHA (docosahexaenoic acid) plays a vital role in keeping cell membranes flexible, resilient and healthy. Healthy cell membranes are less susceptible to oxidative stress, the damage caused by free radicals, which can lead to cell mutation and, ultimately, cancer. DHA also appears to help brain cells regulate their energy use and protects them from inflammation—a condition linked to an array of degenerative diseases including Alzheimer's. In addition, low levels of DHA have been linked to depression, memory loss, and even elevated hostility. Suffice it to say, there's enough credible research out there on DHA now to support a strong statement that it's essential to brain health.

How to get more of it: Eat more oily fish like wild salmon and sardines, though if you eat canned sardines try to find brands that are not packed in cans containing BPA, a chemical linked to a host of toxic badness. If you don't mind the taste, kelp (aka seaweed) is another excellent source. You can also get ample DHA in Omega 3 fish oil supplements. Just make sure that you are buying a brand that is filtered to remove mercury and has a high level of DHA (the EPA and DHA levels will be listed in the ingredients; try to get a supplement with at least 200mg of DHA per capsule).**

Folate

Why it’s important: Folate, a water-soluble B vitamin, has long been established as critical to brain development in infants; pregnant women are strongly advised to take a folate supplement to fend off birth defects. But research has also shown that folate is important to brains of all ages, and deficiencies are correlated with cognitive decline particularly in the elderly. Studies have linked folate to improved memory function and mental processing speed—two things that typically take a hit as we age. There's also evidence indicating that folate deficiency contributes to psychiatric disorders such as depression.

How to get more of it:  Eat unsalted peanuts. The little legumes are folate powerhouses, and they’re also packed with heart-healthy monounsaturated fat. If crunching nuts isn’t your thing, try natural peanut butter. Just stay away from peanut butter with added sugar and salt – stick to the kind that’s all peanuts. Other good sources include asparagus, black-eyed peas, spinach, broccoli and egg yolks.

Magnesium

Why it’s important:  In the brain, magnesium acts as a buffer at neuronal synapses, particularly at the NMDA receptor, which plays a role in several cognitive functions including learning and memory. Magnesium “sits” on the receptor without activating it, in effect protecting it from over-activation by other neurochemicals, especially the neurotransmitter glutamate. If there isn’t enough magnesium available to protect NMDA receptors, glutamate triggers them constantly, causing an “excitatory” response. That’s why you often see magnesium advertised as a calming nutrient: it blocks glutamate from too frequently activating the NMDA receptors in your brain. The most important thing to remember is that without magnesium, over-activation of NMDA receptors eventually becomes toxic to the brain, leading to progressively worse damage and steady cognitive decline.

How to get more of it:  Eat spinach; it's loaded with magnesium. Other sources include almonds and black beans. Just be sure to eat raw or roasted almonds that are unsalted and not coated in sugar (even though those taste so good). Peanuts are also a decent source of magnesium, which makes them a double-whammy snack because they're also high in folate, as mentioned above.

If you decide to take a magnesium supplement, be sure to choose a readily absorbable form such as magnesium citrate, and avoid the less absorbable (but widely sold) magnesium oxide.**

*In each case where I recommended eating more fish, you'll notice that I said "wild salmon," and that's because there's troubling evidence to suggest that farm-raised salmon are a significantly less healthy choice for the brain and the heart.

** Always check with your doctor before beginning any supplement regimen.

David DiSalvo's newest book, Brain Changer, is now available at Amazon, Barnes and Noble and other major booksellers.

Posted on January 9, 2014.

Two Incredible Speeches About Thinking To Begin 2014

I offer you two of the most influential commencement addresses ever given: David Foster Wallace's "This is Water," delivered at Kenyon College in 2005, and Steve Jobs's Stanford commencement speech from the same year. As the New Year begins, I urge you to listen to both and spend some time thinking about the messages of these two remarkable thinkers, who came from very different corners of culture but had important lessons to teach us about our own thinking.

Happy New Year.

Posted on December 31, 2013.

Why The Future Of Online Dating Relies On Ignoring You

According to a new study, Netflix and Amazon have much to teach online dating sites. Netflix doesn’t wait around for you to tell it what you want; its algorithm is busy deciphering your behavior to figure it out. Likewise, say researchers, dating sites need to start ignoring what people put in their online profiles and use stealthy algorithmic logic to figure out ideal matches – matches that online daters may have never pursued on their own.

Kang Zhao, assistant professor of management sciences in the University of Iowa Tippie College of Business, is leading a team that developed an algorithm for dating sites that uses a person's contact history to recommend partners with whom they may be more compatible, following the lead of the model Netflix uses to recommend movies users might like by tracking their viewing history.

The difference between this approach and one based on a user’s profile can be night and day. A user’s contact history may in fact run entirely counter to what he or she claims to be looking for in a mate, and users usually aren’t even aware of the discrepancy.

Zhao's team used a substantial amount of data provided by a popular commercial online dating service: 475,000 initial contacts involving 47,000 users in two U.S. cities over 196 days. About 28,000 of the users were men and 19,000 were women, and men made 80 percent of the initial contacts. Only about 25 percent of those contacts were reciprocated.

Zhao's team sought to improve the reciprocation rate by developing a model that combines two factors to recommend contacts: a client's tastes, determined by the types of people the client has contacted; and attractiveness/unattractiveness, determined by how many of those contacts are returned and how many are not.

“Those combinations of taste and attractiveness,” Zhao says, “do a better job of predicting successful connections than relying on information that clients enter into their profile, because what people put in their profile may not always be what they're really interested in. They could be intentionally misleading, or may not know themselves well enough to know their own tastes in the opposite sex.”

Zhao gives the example of a man who says on his profile that he likes tall women, but who may in fact be approaching mostly short women, even as the dating website continues to recommend tall women.

"Your actions reflect your taste and attractiveness in a way that could be more accurate than what you include in your profile," Zhao says. The research team’s algorithm will eventually “learn” that while a man says he likes tall women, he keeps contacting short women, and will unilaterally change its dating recommendations to him without his notice, much in the same way that Netflix’s algorithm learns that you’re really a closet drama devotee even though you claim to love action and sci-fi.

"In our model, users with similar taste and (un)attractiveness will have higher similarity scores than those who only share common taste or attractiveness," Zhao says. "The model also considers the match of both taste and attractiveness when recommending dating partners. Those who match both a service user's taste and attractiveness are more likely to be recommended than those who may only ignite unilateral interests."

After the research team’s algorithm is applied, the example 25 percent reciprocation rate described above improves to about 44 percent – a relative jump of roughly 75 percent.

Zhao says that his team’s algorithm seems to work best for people who post multiple photos of themselves, and also for women who say they “want many kids,” though the reasons for that correlation aren't quite clear.

If you’re wondering how soon online dating services could start overruling your profile to find your best match, Zhao’s team has already been approached by two major services interested in using the algorithm.   And it’s not only online dating that will eventually change. Zhao adds that college admissions offices and job recruiters will also benefit from the algorithm.

The age of Ignore is upon us, though safe money says we’ll continue thinking we’ve “chosen” the outcomes anyway.

The research was published in Social Computing, Behavioral-Cultural Modeling and Prediction.

David DiSalvo's newest book, Brain Changer, is now available at Amazon, Barnes and Noble and other major booksellers.

Posted on December 28, 2013.

Neuroscience Explains Why the Grinch Stole Christmas

"You're a mean one, Mr. Grinch."

But why?

We all know Dr. Seuss's iconic tale of the green ogre who lives on a mountain, seething while the Whos in the village below celebrate Christmas. The happier they are, the angrier he gets, until finally he can't take it anymore and hatches a plan to steal away their joy.

Dr. Seuss was a brilliant intuitive psychologist, and I'd have loved to chat with him about the core of the Grinch's rage, but, alas, he left us too early. So I'm turning to another impressive thinker who has taught me a great deal about the neurobiology of emotion: Dr. John Cacioppo, a pioneer in the field of social neuroscience and co-author of the book Loneliness: Human Nature and the Need for Social Connection.

Cacioppo has conducted a wealth of research on the effects of loneliness on the human brain. We're not talking about physical loneliness (although that can be part of the equation); we're talking about the sense of loneliness someone can feel in the midst of thousands of people. I once saw an interview in which Jon Bon Jovi described the feeling he gets after leaving the stage. Surrounded by tens of thousands of fans, you might think he'd have no reason to feel lonely—but when he goes back to his hotel room, those thousands of people screaming his name may as well not exist at all. He feels alone despite being the center of a publicity universe.

That's much closer to the sort of loneliness Cacioppo studies, and it's especially relevant in the age of social media, where someone might have 2,000 Facebook friends and yet feel like they're completely alone in the world.  

If Cacioppo could persuade the Grinch to step into his MRI, he'd likely observe results consistent with those of a 2009 brain imaging study he conducted to identify differences in the neural mechanisms of lonely and nonlonely people. Specifically, he wanted to know what's going on in the brains of individuals with an acute sense of "social isolation"—a key ingredient in loneliness that has nothing to do with being physically alone, and everything to do with feeling alone.

While in an MRI machine, subjects viewed a series of images, some with positive connotations, such as happy people doing fun things, and others with negative associations, such as scenes of human conflict.  As the two groups watched pleasant imagery, the area of the brain that recognizes rewards showed a significantly greater response in nonlonely people than in lonely people. Similarly, the visual cortex of lonely subjects responded much more strongly to unpleasant images of people than to unpleasant images of objects—suggesting that the attention of lonely people is especially drawn to human conflict. Nonlonely subjects showed no such difference.

In short, people with an acute sense of social isolation appear to have a reduced response to things that make most people happy, and a heightened response to human conflict.  This explains a lot about people who not only seem to wallow in unhappiness, but also seem obsessed with the emotional "drama" of others. Every office has a few people just like that.

The Grinch is easier to understand given these findings. He is physically isolated (except for his dog), but more importantly he's socially isolated. He feels no sense of connection to the citizens of Whoville, though they live just outside his mountain lair. Watching them surround themselves with happy things like ornaments and gifts and food ticks him off, so he determines to inject some strife into the festivities and revel in the fallout. 

Fortunately, the Grinch has an epiphany (a Gestalt moment) that makes him not only want to return everything to Whoville, but participate in the merriment as well. The real-life corollary would probably include a couple years of therapy, but Dr. Seuss makes the point well enough: there is redemption for those suffering from loneliness.  It requires genuine connection with others—not faces in a cheering crowd or numbers on a Facebook page. Those things might supplement real relationships, but they can't replace them. 

And that, it seems to me, is the heart of the holidays: they are ritualized reminders that none of us are islands, and that no matter how many people surround us, we're only at our best when we allow some of them to be part of us.  

Happy holidays.  

David DiSalvo's newest book, Brain Changer, is now available at Amazon, Barnes and Noble and other major booksellers.

Posted on December 21, 2013.

New Study Asks: What Kind Of Bored Are You?

Most of us think we already know what it means to be bored, and we’ll look for just about any diversion to avoid the feeling.  But according to recent research, boredom is not a one-size-fits-all problem — what triggers or alleviates one person’s boredom won’t necessarily hold sway for someone else.

According to researchers publishing in the journal Motivation and Emotion, there are four well-established types of boredom:

Indifferent boredom (characterized by feeling relaxed and indifferent – typical couch potato boredom);

Calibrating boredom (characterized by feeling uncertain but also receptive to change/distraction);

Searching boredom (characterized by feeling restless and actively searching for change/distraction); and

Reactant boredom (characterized by feeling reactive, i.e. someone bored out of her mind storming out of a movie theater to find something better to do).

The most recent study by the boredom-defining research team has now identified a fifth type, apathetic boredom, and it's the most troublesome of all. People exhibiting apathetic boredom are withdrawn, avoid social contact, and are most likely to suffer from depression. In fact, apathetic boredom could be considered a portal leading to depression.

The sort of remedy that would alleviate “searching boredom”—actively pursuing change—would not help someone with apathetic boredom, because change itself represents too much of a threat. Apathetic boredom feeds on itself, endlessly recycling the feelings that make it so difficult to escape. The uncertainty of change is just another reason to stay cloistered away.

Study co-author Dr. Thomas Goetz of the University of Konstanz and his research team conducted two real-time experiments over two weeks involving students from German universities and high schools. Participants were given personal digital assistants to record their activities, experiences and feelings throughout the day for the duration of the study. The results showed not only that different people experience different types of boredom, but also that people don't typically switch-hit between flavors of boredom – any given person tends to experience one type far more than the others.
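To get a rough feel for what "predominantly one type" means in data like this, here's a minimal sketch (the participant names and sample records are invented) of reducing experience-sampling reports to each person's most frequent boredom type:

```python
# Reduce experience-sampling reports to each person's dominant boredom type.
# The participant names and reports below are invented for illustration.
from collections import Counter

reports = [
    ("anna", "reactant"), ("anna", "reactant"), ("anna", "searching"),
    ("ben", "apathetic"), ("ben", "apathetic"), ("ben", "apathetic"),
    ("cara", "indifferent"), ("cara", "calibrating"), ("cara", "indifferent"),
]

def dominant_type(reports):
    """Tally each participant's sampled boredom types and keep the mode."""
    by_person = {}
    for person, btype in reports:
        by_person.setdefault(person, Counter())[btype] += 1
    return {p: counts.most_common(1)[0][0] for p, counts in by_person.items()}

print(dominant_type(reports))
# {'anna': 'reactant', 'ben': 'apathetic', 'cara': 'indifferent'}
```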

The most alarming finding of the study is that apathetic boredom was reported by almost 40 percent of the high school students, suggesting a link between apathetic boredom and rising numbers of depressed teens.

The obvious drawback of this research is that participants self-reported their feelings and experiences during the study period, and self-reporting is often unreliable.  On the plus side, the researchers ran the study for two full weeks instead of just a few days, and had far more data to analyze as a result.

The research was published in the journal Motivation and Emotion.

David DiSalvo's newest book, Brain Changer, is now available at Amazon, Barnes and Noble and other major booksellers.

Posted on December 19, 2013.