Marching to the beat of your own drummer works…

… if you don’t mind making music all by yourself?

New research, written up in a Science Daily article called “Don’t Get Mad, Get Creative: Social Rejection Can Fuel Imaginative Thinking,” claims the following:

A study by a Johns Hopkins University business professor finds that social rejection can inspire imaginative thinking, **particularly in individuals with a strong sense of their own independence**.

(The emphasis in bold is mine.)

Some questions this raises for me:

1) What leads some people to develop a strong sense of independence rather than a deep hope of being included? Can people who long for inclusion become more genuinely independent (not in a false way, where they pretend not to care while seething with anger and forming little groups of their own from which they can reject people)? Can people who start off independent get worn down and come to long for inclusion – and if so, how does that happen?

2) How is social inclusion being defined here? Is it inclusion in terms of mainstream values? There are people who might not be bothered by rejection by the mainstream, but care very much about the opinions of a non-mainstream group. Does true independence hold in the face of all kinds of rejection, both mainstream and non-mainstream?

3) More from the article:

“Rejection confirms for independent people what they already feel about themselves, that they’re not like others. For such people, that distinction is a positive one leading them to greater creativity.”

What other qualities accompany this feeling of being proudly different from others? Positive qualities like resilience, focus and discipline. Possibly negative qualities like arrogance and contempt (are these conducive to creativity?).

Synaptic Sunday #7 – Mind-Controlled Robotics Edition

1) Real-life Avatar: The first mind-controlled robot surrogate

Tirosh Shapira, an Israeli student, controlled the movements of a small robot over a thousand miles away using only his thoughts.

The fMRI (functional magnetic resonance imaging) scanner reads his thoughts, a computer translates those thoughts into commands, and those commands are sent across the internet to the robot in France. The system requires training: on its own, an fMRI simply shows the real-time blood flow in your brain. Training teaches the system that a particular “thought” (a blood flow pattern) equates to a certain command.
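
The training step amounts to pattern classification. Here’s a minimal sketch of the idea, assuming (hypothetically) that each training trial yields a flattened vector of voxel activations labeled with the intended command – the article doesn’t describe the actual features, model, or labels used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is a flattened fMRI blood-flow
# pattern (voxel activations) recorded while the subject thought about
# one of the commands; labels are the intended commands.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 500))   # 120 trials x 500 voxels
y_train = rng.choice(["forward", "left", "right"], size=120)

# Train a simple classifier mapping blood-flow patterns to commands.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At runtime: a new blood-flow pattern arrives, the decoder translates
# it into a command, and the command is sent over the internet to the robot.
new_pattern = rng.normal(size=(1, 500))
print("send to robot:", decoder.predict(new_pattern)[0])
```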

Some of the future uses for such technology are medical (for people who have suffered paralysis, for example) and military.

Shapira mentions in the article that he “became one with the robot.” How would we come to feel about these robotic extensions of ourselves? If they get damaged or destroyed, would we feel as if a part of us had been killed, or, after some disappointment, would we settle for any replacement?

2) Mind-controlled robot arms show promise

Using implants to record neuronal activity in parts of the brain associated with the intention to move, researchers were able to help two people with tetraplegia manipulate a robotic arm by thinking about certain actions (e.g. lifting up a cup).

The challenge lies in decoding the neural signals picked up by the participant’s neural interface implant — and then converting those signals to digital commands that the robotic device can follow to execute the exact intended movement. The more complex the movement, the more difficult the decoding task.
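
A common baseline in this line of work (not necessarily what these researchers used) is a linear decoder that maps binned firing rates to movement velocities. A toy sketch with synthetic data:

```python
import numpy as np

# Hypothetical setup: firing rates from 96 recorded neurons, sampled in
# short time bins, are mapped to a 2-D arm velocity by a linear decoder
# fit with least squares – a common baseline in brain-computer interfaces.
rng = np.random.default_rng(1)
n_neurons, n_bins = 96, 2000

rates = rng.poisson(lam=5.0, size=(n_bins, n_neurons)).astype(float)
true_weights = rng.normal(size=(n_neurons, 2))
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_bins, 2))

# Fit the decoder: least-squares solution to rates @ W ≈ velocity.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# At runtime, each new bin of firing rates becomes a velocity command
# the robotic arm can follow.
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
vx, vy = (new_rates @ W)[0]
print(f"arm velocity command: ({vx:.2f}, {vy:.2f})")
```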

This is amazing work.

Synaptic Sunday #6 – The Dog’s Mind and fMRI Edition

1) How do you train a dog to get into an fMRI scanner and stay there without resorting to restraints and drugs?

2) More importantly, why would you want to get a dog into an fMRI scanner?

Scientists Use Brain Scans to Peek at What Dogs Are Thinking

From the link:

The researchers aim to decode the mental processes of dogs by recording which areas of their brains are activated by various stimuli. Ultimately, they hope to get at questions like: Do dogs have empathy? Do they know when their owners are happy or sad? How much language do they really understand?

An fMRI scan doesn’t give us mind-reading abilities; it shows blood flow to different areas of the brain (oxygen-rich as compared to deoxygenated blood), and researchers infer brain activity from that. When the dogs were given a signal for “treat,” for instance, there appeared to be increased activity in a part of the brain that in people is associated with rewards.

But can we get a real understanding of what the dog is experiencing? If you look at questions of empathy, what is empathy to a dog? Maybe we’d see increased activity in certain parts of the brain that in humans are associated with empathy, which could be interesting, but what does that tell us more deeply about the dog’s mind and subjective experiences? If dogs know when their owners are happy or sad, what kind of knowledge is this: a reading of facial and behavioral cues, or something deeper? This is a limitation of fMRI when it’s used on people as well, though with people we can supplement the scan findings with other measures – various cognitive tasks, including those that ask for verbal input (“woof, woof”).

3) Overall, fMRI studies can be quite problematic, for dogs or humans (or dead salmon) – as detailed in this recent article: Controversial science of brain imaging.

Synaptic Sunday #5

Psychology/neuroscience link roundup centered on a particular topic – this week, some links on what makes people productive.

1) Would this work for anyone? (If something like it has worked for you, speak up):

Helen Oyeyemi advises writers to download the Write or Die app onto their computer (or does she write on an iPhone?). In ‘kamikaze mode’, if you stop writing for more than 45 seconds it starts deleting the words you have already written.


That sounds like a nightmare to me. Whenever I’d stop to think (or to just sit quietly for a little bit, staring out the window and letting my brain do whatever it does when I appear to be unproductive), I’d be too busy watching the clock to let my brain work.
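
Mechanically, that kamikaze mode is just an idle timer. Here’s a toy reconstruction of the described behavior – my own sketch, not the app’s actual code:

```python
import time

# Toy reconstruction of "kamikaze mode": if no keystroke arrives within
# the grace period, delete text from the end. My own sketch of the
# behavior described above, not the Write or Die app's actual code.
GRACE_SECONDS = 45

def kamikaze_tick(text, last_keystroke_time, now=None):
    """Return the (possibly shortened) text after one timer tick."""
    now = time.time() if now is None else now
    if now - last_keystroke_time > GRACE_SECONDS and text:
        return text[:-1]  # delete one character per tick
    return text

# Example: 50 seconds of idleness costs you a character.
draft = "It was a dark and stormy night"
print(kamikaze_tick(draft, last_keystroke_time=0, now=50))
```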

2) It can be good to let your mind wander! (As long as you’ve put in some focused mental effort beforehand.)

3) When our thoughts and attention wander, the brain isn’t as passive as we imagine it to be: “…an interesting study published in a 2009 issue of Proceedings of the National Academy of Sciences found that daydreaming also activates parts of our brain associated with ‘high-level, complex problem-solving’ including the lateral pre-frontal cortex and the dorsal anterior cingulate cortex.”

I don’t think daydreaming, and its potential creative benefits, can be forced (then you’re too self-conscious – attending too much to your own thoughts); nor is it beneficial when done excessively. But to dismiss it as wasted time is a mistake. And to chain productive and creative thinking to strict time intervals strikes me as useless (and horrifying).

The safety of negative criticism

In a class I took a few years ago, the professor assigned readings every week and instructed the students to come up with some comments or discussion questions in response. The readings were primarily research articles in psychology and neuroscience.

At one point the professor brought to our attention that most of the time our comments were negative and critical. “The researchers could’ve done XYZ but they didn’t,” or “You can’t use an ANOVA for these data, can you?” or “They didn’t perfectly control for XYZ, so their results are less conclusive.” These comments usually weren’t followed up with alternative suggestions, so the professor would try to coax them out of people. “How would you have improved on the study?” she’d ask. What’s more, she wanted a substantive discussion of the bigger-picture questions, and she started to demand questions larger in scope, accompanied by people’s own ideas for experiments. The discussion had a different intensity then – more energetic and thought-provoking than when students just sat around picking at other people’s work.

Don’t get me wrong. It’s important to pick apart ideas and recognize a study’s limitations and flaws, whatever they happen to be: abuse of statistics, a poorly chosen subject population, a set of conclusions that’s too bold given relatively weak results. That’s all a necessary part of critical thinking. Whether you’re a scientist or not, you need to be able to evaluate people’s claims and see what merit they have.

But it’s also important to think in a more positive sense – generating ideas, asking questions, relating one topic to another, and considering the implications of different findings. Few student comments referred to the strengths of any given study; the weaknesses got nearly all the attention.

I remember thinking at the time about why negative remarks naturally dominated our discussions until the professor stepped in:

  • We were afraid to look stupid. If we offered our own ideas, they could get shot down and maybe reveal the workings of an immature mind. What did we know? We didn’t want to take risks. Picking at other people’s mistakes protected us, for the most part, from criticism, and this mattered because we worried too much about what others thought of us.
  • Some of us wanted to look like hotshots in a game of one-upmanship. It was less about the research, more about scoring points off of other people.
  • We were emulating certain professors. The one who ran the class wasn’t like this, but over the years I’ve known other professors who liked to devote their seminars to shredding the work of academic rivals in a mix of scholarly rigor and personal enmity (recently I watched a movie that explores this toxic mix).
  • We were on the receiving end of frequent critical evaluation, sometimes of a very negative kind, so we liked being able to dish it out. It gave us a feeling of power.
  • Making small, focused negative remarks took less effort than also trying to think of solutions or come up with new ideas or questions to investigate. Granted, our critical thinking, even if it was mostly negative criticism, took more mental effort than blindly accepting or rejecting something without justification; we did our homework. But for lack of time, training, knowledge, or willingness to put in the effort, we stuck to picking things apart.

One reason I respected the professor who taught that class was her balanced approach to criticizing other people’s work. She looked for flaws, but also for possibilities. She encouraged debate and discussion but didn’t permit nasty remarks. The idea was that we were supposed to take risks and think more broadly than a purely negative approach would allow, while also being perceptive enough to delve into the nitty-gritty details of a research study and understand its limitations.

Staying purely negative would have been a safer option. In playing the part of ‘superior critic’ we wouldn’t have had to confront our fears, insecurities and weaknesses as much, or take as many risks. And the discussions wouldn’t have been nearly as productive and inspiring.

Synaptic Sunday #4

This Sunday, links on the flexibility of our moral choices:

1) Psychology of Fraud: Why Good People Do Bad Things

I wonder what the definition of a bad person would be within the framework of the article. Someone who’s instructed to think ethically (given an ethical framework about a set of choices) but still makes unethical choices? Someone who’s never sincerely repentant? This line also jumped out at me:

In general, when we think about bad behavior, we think about it being tied to character: Bad people do bad things. But that model, researchers say, is profoundly inadequate.

I think it’s still tied to character, but not in a cartoonish way – shining superheroes vs. dastardly supervillains (though there are individuals who closely resemble each). Everyone has various weaknesses and temptations, not to mention the capacity for self-delusion – for thinking about an evil act in a more benign way, rationalizing it. The ability to fight rationalizations and temptations, and to recognize them before they take root and become mental habits, is an essential part of a stronger character. Success may be mixed. It’s usually not as simple as thinking of character as having two settings: pure good or pure evil.

So the question ‘Why Do Good People Do Bad Things?’ still brings you back to what the authors mean by a ‘good person’ (or a ‘bad person’). Good people may do bad things, but they also do good things? They do certain kinds of bad things but not other kinds? They do bad things from a good motive that they sincerely feel minimizes the bad, or makes it a grudging ‘necessary evil’ rather than something undertaken with supervillainish glee? (But so much destruction and evil have stemmed from well-intentioned policies, ideological principles, and motives.) They operate out of ignorance more than cold calculation? (A line from the article: “if we want to attack fraud, we have to understand that a lot of fraud is unintentional.”) How ignorant are they? How unintentional is it?

The article ends with some proposals to make people in business environments less susceptible to perpetrating fraud, and then closes with this:

Or, we could just keep saying what we’ve always said — that right is right, and wrong is wrong, and people should know the difference.

Well, shouldn’t they know? That doesn’t mean that people aren’t more susceptible in some situations to committing evil, even outside of their awareness. Developing awareness of those susceptibilities and temptations, developing the discernment to see them even when they seem to slip unknowingly into one’s behavior (including when they’re in the guise of good deeds), and rectifying their ill effects as soon as possible are all at the heart of having a good character.

2) Wearing Two Different Hats: Moral Decisions May Depend on the Situation

“We find that people tend to make decisions that may conflict with their morals when they are overwhelmed, or when they are just doing routine tasks without thinking of the consequences,” Leavitt said. “We tend to play out a script as if our role has already been written. So the bottom line is, slow down and think about the consequences when making an ethical decision.”

The scripts can be different depending on the role we’re playing (are we thinking like a medic or a soldier?). More on this research here.

Aging, memory, and context

There are limitations to memory research studies conducted only in the lab, especially if they never include memory tasks and situations that are encountered in everyday life (in fact this is a limitation of lab studies investigating any cognitive process, not just memory).

For example, when researchers take into account how aging adults remember things in day-to-day life, they start to get a different picture of the difficulties people experience with memory as they get older:

When people are tested in the lab and have nothing to rely on but their own memories, young adults typically do better than older adults, she said.

Remarkably, when the same studies are conducted in real-world settings, older adults sometimes outperform young adults at things like remembering appointments or when to take medicines.

Synaptic Sunday #3

This Sunday, some links on addiction and control:

1) The Fallacy of the Hijacked Brain

An op-ed from the NY Times:

A little logic is helpful here, since the “choice or disease” question rests on a false dilemma. This fallacy posits that only two options exist. Since there are only two options, they must be mutually exclusive. If we think, however, of addiction as involving both choice and disease, our outlook is likely to become more nuanced. For instance, the progression of many medical diseases is affected by the choices that individuals make.

2) Disease and Choice

One blogger’s response to the above op-ed.

The hijacked brain metaphor may be flawed, but it’s attempting to communicate that addiction uses the addict’s own self-preservation instincts, desires, and will to maintain itself.

3) Addicts’ Brains May Be Wired At Birth For Less Self-Control

A study in Science finds that cocaine addicts have abnormalities in areas of the brain involved in self-control. And these abnormalities appear to predate any drug abuse.

Cocaine-addicted people were studied alongside siblings who didn’t have a history of drug abuse. What’s interesting is that the siblings also showed poorer self-control during the study’s task and had atypical brain scan findings as well. So what led one sibling to abuse drugs while the other didn’t? How do personal choices and environment come into play? Having a brain that might be more susceptible to poor impulse control or addictive behaviors doesn’t doom you to drug addiction. And, as in other studies, were there individuals whose results differed from the group as a whole (e.g., a cocaine-addicted person who didn’t have the pre-existing brain abnormalities)?

Attention, smart people

Don’t be complacent:

And here’s the upsetting punch line: intelligence seems to make things worse. The scientists gave the students four measures of “cognitive sophistication.” As they report in the paper, all four of the measures showed positive correlations, “indicating that more cognitively sophisticated participants showed larger bias blind spots.”

What would a GSR bracelet do?

Months ago I read a short story, “Dead Space for the Unexpected” by Geoff Ryman, in the short fiction anthology Brave New Worlds: Dystopian Stories. In the story, corporate managers are hooked up to and monitored by various technologies that measure not only their verbal and behavioral actions in the course of their job but also their physiological responses (things like heart rate, blood pressure, and galvanic skin response). The system then gives them constantly updated scores on their calmness, effectiveness, and efficiency (down to millisecond-long reaction times) in the face of stressful situations, such as having to lay off an employee.

From what I remember, the monitoring technology didn’t make the main character a better manager. He was obsessed with checking his scores and anxious about what would happen if he lost his edge as he got older. I don’t remember whether he or anyone else in the company ever came up with a better product or service (I’m not even sure what the company did). I do remember an atmosphere of excessive tension and competitiveness heightened by the technology, which, along with the scoring system, was abused during the course of the story. Nothing about the workplace seemed any better – no spirit of innovation and creativity, for instance, no genuine feeling of community and teamwork, no inspiring leadership.

The idea of monitoring technology probably sounded good on paper to the corporate head honchos who decided on it – not least because it gave them the means to more closely control and scrutinize their mid-level managers, who had little privacy – but what were its overall positive results? Just because a technology gives us a window into the responses of the brain and body doesn’t mean it’s worth the money or produces long-term benefit. It can instead be a waste of money that also distorts the human spirit.

I thought of this story after reading an article from a Washington Post blog on the hundreds of thousands of dollars of Gates Foundation grant money invested in the study of Galvanic Skin Response (GSR) bracelets that are meant to measure students’ engagement in the classroom.

GSR is a measure of physiological and psychological arousal based on the amount of moisture (sweat) on your skin. Ok, then. Many kinds of emotions can be picked up by GSR devices (which are used in lie detection); if someone is afraid or angry or sexually aroused, the device shows an increase in arousal without telling you anything about the underlying cause.
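
To make that ambiguity concrete, here’s a toy sketch – entirely my own illustration, with invented readings and an invented threshold: a detector can flag elevated skin conductance, but the signal carries no label for its cause.

```python
# Toy illustration of the ambiguity (my own sketch; readings and the
# threshold are invented): a GSR device can flag "arousal," but the
# signal itself says nothing about the underlying cause.
BASELINE_MICROSIEMENS = 2.0
AROUSAL_MULTIPLE = 1.5  # arbitrary threshold above baseline

def is_aroused(skin_conductance):
    return skin_conductance > BASELINE_MICROSIEMENS * AROUSAL_MULTIPLE

# The first three states produce identical readings; the device
# cannot tell them apart.
samples = {
    "engaged with the lesson": 4.1,
    "anxious about an upcoming test": 4.1,
    "excited by something out the window": 4.1,
    "bored, or calmly reading": 1.8,
}
for cause, reading in samples.items():
    print(f"{cause}: arousal detected = {is_aroused(reading)}")
```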

How will the bracelet measure classroom engagement? Students could be excited about the lesson, sure, or they could be excited about the person sitting next to them, or worried about the test that’s coming up or afraid the teacher will call on them or interested in something they spotted out the window, or caught up in thoughts (exciting or unexciting) that have nothing to do with school.

Also, assuming we could somehow isolate the underlying cause of arousal and pinpoint it to intellectual engagement (which is a complex state of mind in and of itself), does this tell us anything about how the students are learning? I can be engaged with a particular topic but still not understand it fully; I could find aspects of it puzzling or draw incorrect conclusions, excitedly thinking that I get it when I really don’t. Granted, the GSR bracelets would be only one measurement of student engagement, but what’s the point? If the bracelets tell the teacher that the students are fully attentive, the teacher still has to make sure the wide-eyed interest translates into comprehension.

Other less expensive, less formal and more potent measures of attention and engagement exist – are students asking questions for instance? Are they asleep? Staring at the clock? Taking notes? Passing notes? Raising their hands? What does a bracelet add to all of this except to give schools a feeling of being cutting-edge and slick? (Reminds me of a number of fMRI studies I read through years ago that were poorly designed and didn’t measure what they claimed to but got published in peer-reviewed journals, one suspects, because fMRI was cutting-edge and a “window into the brain.”)

Then there’s the potential for abuse and gaming the system. From Diane Ravitch’s blog:

…a reader noted that the GSR bracelet was unable to distinguish between “electrodermal activity that grows higher during states such as excitement, attention or anxiety and lower during states such as boredom or relaxation.”

Thus a teacher might be highly effective if his students were in a state of excitement or anxiety; and a teacher might be considered ineffective if her students were either bored or relaxed. The reader concluded, quite rightly, that the meter would be useless since a teacher might inspire anxiety by keeping students in constant fear and might look ineffective if students were silently reading a satisfying story.

So again, what would be the potential benefit of these bracelets? I’d like to see a copy of the grant proposal submitted by the researchers at Clemson University; how did they justify this study?