Michelle Niedziela, PhD

The Brain, What is it good for?

Updated: Apr 1, 2022

What does it mean to do “neuro” research? Most of the time when we think of “neuro” we picture some sort of gadget or technology on someone’s head. This is because the word itself, “neuro”, comes from the Greek “neuron” and implies that whatever “neuro-“ is attached to is relating to nerves or the nervous system, or… for most people, the brain.


Neuro-science… Neuro-marketing… Neuro Research

(Adobe Stock image left, Wikipedia EEG right)


At HCD we talk about having a large and varied toolbox consisting of methodologies from Neuroscience, Psychology and Traditional Market Research. When we talk about the tools within the neuroscience bucket of the toolbox, we often hear the questions:


What about something you put on the head? -Or- What about something to read the mind?

And the answer is, yes, we do have the capability of measuring brain activity using methods such as EEG (electroencephalography) or fNIRS (functional near infrared spectroscopy). But honestly, we do not use them as often as some of our other, more reliable and validated tools.


But first things first: there are no tools that can read a consumer’s mind, though many people believe otherwise. Some countries (e.g., France) have even banned the use of neuroimaging methods in consumer work over ethical concerns about privacy. In reality, though, the point is moot. There are all sorts of neuro-myths that have misled many clients:

  • We only use 10% of our brains – In fact, we use all of our brain all of the time. It is constantly active: regulating homeostatic systems like heartbeat and breathing, sensing our surroundings, keeping us upright, and so on.

  • 90–95% of all decision making is non-conscious – We actually have no way of measuring whether a given decision is conscious or non-conscious, so there is no way to tell. Most non-conscious brain activity is regulatory, keeping us alive. But a great deal of brain activity is also conscious/cognitive. There is no true separation, but rather an interaction between sensing and interpreting external data “non-consciously” and deciding how to react “consciously”.

  • There’s a buy button in the brain – No, there is no secret structure in the brain that can be influenced through marketing to force you to do anything. Purchasing decisions involve many different parts of the brain and both conscious and non-conscious activity.

  • Neuroscience tools can read minds – No. Some studies have shown that it may be possible to train a brain to react to a specific stimulus (for example, by showing the same video clip repeatedly), use brain imaging to recognize the resulting patterns of activity, and later match those patterns back to that particular clip. Not exactly mind reading, but as close as we’ve gotten so far.


There are many brain imaging methodologies out there, and for the most part they are all off the shelf, meaning anyone can buy and use them; no special license or degree is required. Functional Magnetic Resonance Imaging (fMRI) and Positron Emission Tomography (PET) are more advanced methodologies that have also been used in consumer research. However, they are much more expensive (millions of dollars to buy, thousands per participant to run) and require a medical clinical staff and a hospital or clinical setting.


While these more expensive and advanced brain imaging tools are cool and provide interesting images of the brain, they may not necessarily provide the information a client is looking for. And they may also not work within the constraints of the research (budget, timing, exposure or use of products).

(fMRI images, Wikipedia)

fMRI is superb at localizing activity to specific brain structures, but its temporal resolution is poor: it cannot time-lock responses to fast-moving events. Rather than measuring neural firing directly, it estimates blood flow to brain structures, overlaid on a static structural view of brain matter, by measuring differences in the magnetic properties of arterial (oxygen-rich) and venous (oxygen-poor) blood. Areas that are more oxygen-rich are considered more activated. fMRI is often criticized for problematic statistical analyses, frequently based on low-power, small-sample studies. In one famous critique, presented at the Organization for Human Brain Mapping’s annual meeting and published in NeuroImage in 2009, a dead salmon (purchased from a grocery store) was shown pictures of humans in different emotional states. According to two commonly used statistical tests, areas of the dead salmon’s brain showed apparently meaningful activity. The study was used to highlight the need for more careful statistical analysis in fMRI research: with the enormous number of voxels in a typical scan, uncorrected tests are virtually guaranteed to produce false positives.
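The dead-salmon problem is easy to reproduce with simulated data. Here is a minimal sketch: pure noise with no real signal anywhere, tested voxel by voxel without correction. The voxel count, sample size, and threshold are illustrative, not taken from the original study.

```python
import math
import random

random.seed(0)

def one_sample_t(xs, mu=0.0):
    """One-sample t statistic against mean mu."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return (mean - mu) / math.sqrt(var / n)

N_VOXELS = 5000   # a real fMRI scan has far more
N_SCANS = 16      # small sample, as in many criticized studies
T_CRIT = 2.13     # two-tailed p < .05 for df = 15 (approx.)

# Pure noise: no real signal anywhere, yet...
false_positives = 0
for _ in range(N_VOXELS):
    voxel = [random.gauss(0, 1) for _ in range(N_SCANS)]
    if abs(one_sample_t(voxel)) > T_CRIT:
        false_positives += 1

# Uncorrected, roughly 5% of noise-only voxels look "active"
print(false_positives)  # roughly 250 of 5000
```

A Bonferroni-style correction (requiring p < .05 / N_VOXELS at each voxel) raises the critical t to roughly 5 for this sample size, which almost no noise-only voxel passes; this is exactly the kind of correction the salmon study argued for.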

But perhaps a more important question for industrial consumer research is: what can structural activity tell us about consumer perception and experience? Most people assume that fancy tools like fMRI can read consumers’ minds after exposure to products, but this is far from true. Tools like fMRI are great for academic and basic research into how different brain structures work. However, those structures turn out to have multiple functions, some of them contradictory. The insular cortex, for example, is often cited in neuro-marketing or consumer neuroscience as a hub for emotional experience, yet many of the emotions it is associated with contradict one another: maternal and romantic love, anger, fear, sadness, happiness, sexual arousal, disgust, aversion, unfairness, inequity, indignation, uncertainty, disbelief, social exclusion, trust, empathy, sculptural beauty, a ‘state of union with God’, and hallucinogenic states. This makes sense when you consider how complicated human cognition and brain function are, and in how small a space it all takes place: the human brain has roughly 86 billion neurons with on the order of 100 trillion connections, all housed in about 3 lbs of tissue.


But fMRI is expensive anyway, and EEG is more commonly used because it is cheaper and easier, so it must certainly be better, right? Maybe not. Most companies use EEG, a measure of the electrical activity of the brain recorded through electrodes placed on the scalp. It has been known since the late 19th century that the brain’s activity gives off electrical signals, and the first recording of that activity in humans was made in 1924. EEG is used in clinical settings to diagnose epilepsy and monitor coma patients, and until recently (when better technologies like MRI came along) it was also used for diagnosing tumors and strokes. EEG is a popular academic neuroscience research method and can provide very detailed information about the brain’s activity while a participant performs a specific, highly controlled task. But EEG is only good for sensing activity at the surface of the brain; activity deeper down is simply too far from the electrodes on the scalp to yield reliable data. Recording neural activity through the skull is like listening to an argument in the apartment below yours by pressing your ear against the floor: you might make out some muffled voices, maybe even some of the louder details, but you have no hope at all of hearing what’s happening in an apartment five floors down.


One problem is that these signals can be drowned out by electrical activity in the muscles and are sensitive to interference from other electrical devices. Genuine EEG research overcomes these problems by using extremely sensitive equipment in electrically shielded environments and by repeatedly doing the same tests, to average out any interference.
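Why does repeating the same test help? Noise that is random from trial to trial cancels out when trials are averaged, while the stimulus-locked response survives; the noise shrinks roughly as 1/sqrt(N). A toy sketch (the ERP shape, sampling rate, and noise level are all made up for illustration):

```python
import math
import random

random.seed(1)

def erp_signal(t):
    """Toy event-related potential: a brief positive deflection at 300 ms."""
    return math.exp(-((t - 0.3) ** 2) / 0.005)

TIMEPOINTS = [i / 100 for i in range(100)]   # 1 s at 100 Hz
NOISE_SD = 2.0                               # noise swamps the signal

def one_trial():
    """Signal plus large trial-to-trial noise."""
    return [erp_signal(t) + random.gauss(0, NOISE_SD) for t in TIMEPOINTS]

def average(trials):
    n = len(trials)
    return [sum(tr[i] for tr in trials) / n for i in range(len(TIMEPOINTS))]

def rms_error(est):
    """How far an estimate is from the true ERP shape."""
    return math.sqrt(sum((e - erp_signal(t)) ** 2
                         for e, t in zip(est, TIMEPOINTS)) / len(est))

err_1 = rms_error(one_trial())                              # one exposure
err_100 = rms_error(average([one_trial() for _ in range(100)]))  # 100 exposures
# Averaging 100 trials leaves roughly a tenth of the single-trial noise
```

This is why ERP studies need so many repeated exposures per participant, and why a single-exposure recording of a one-off product experience is so hard to interpret.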


One reason neuro-research has had difficulty being representative of real-life experience is the complexity of getting a reliable EEG signal. To sort through the noise (extraneous brain activity unrelated to the stimulus) and obtain aggregated results, participants need to be exposed to the stimulus multiple times (sometimes hundreds of exposures, for example to get valid ERP signals), and statistics are used to look for meaningful differences between variables. In some cases (with proper statistical approaches) it is possible to measure in a single take, if the sample size is large enough and the stimulus or experience is exactly the same over a given (controlled) amount of time for all participants (such as a video), using inter-subject correlations. Talk to academic neuroimaging researchers and they will tell you that there are many sources of possible interference, that results are sensitive to slight changes in analysis, and that drawing strong conclusions from a single study is a minefield.
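The inter-subject correlation idea can be sketched in a few lines: each viewer’s response to a shared, time-locked stimulus (e.g., the same video) is correlated with the average response of everyone else. If responses are stimulus-driven, those correlations are high; pure noise would sit near zero. All numbers here are synthetic:

```python
import random

random.seed(2)

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Shared stimulus-driven component (everyone watches the same video)
stimulus = [random.gauss(0, 1) for _ in range(300)]

def subject():
    """Each viewer = shared stimulus response + idiosyncratic noise."""
    return [s + random.gauss(0, 1) for s in stimulus]

subjects = [subject() for _ in range(10)]

def isc(i):
    """Correlate subject i with the mean time course of everyone else."""
    others = [s for j, s in enumerate(subjects) if j != i]
    mean_others = [sum(vals) / len(vals) for vals in zip(*others)]
    return pearson(subjects[i], mean_others)

scores = [isc(i) for i in range(10)]
# A strong shared response yields consistently high ISC values
```

Note the constraint this approach inherits from the statistics: it only works when every participant gets exactly the same stimulus on exactly the same timeline, which is fine for a video but rules out self-paced product experiences.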


Much of commercial neuromarketing EEG uses cheap kits in poorly controlled, poorly designed experiments (very loosely based on validated academic research) that often produce junk data. Analytics with these cheaper kits often rely on global changes in brain activity (hemispheric differences, global differences in wave forms), because the kits usually lack the spatial resolution to properly examine regional differences. This becomes a major issue when trying to generalize and interpret the data for specific stimuli: for example, the effect that brighter colors may have on the visual cortex, or that specific tastes may have on the gustatory cortex. Most commercial neuromarketing, however, is neither experienced with nor concerned about these specific modalities, opting to rely on over-generalizations of global data while largely ignoring a very strong field of research on the specific modalities.


Commercial neuromarketing is able to hide these glaring issues behind proprietary black-box metrics: obscuring how the analytics were done, avoiding statistical analysis, and masking participant and experimental variability. Often the same metric is applied across modalities, assuming that neural responses to all stimuli are the same (though the academic research shows they are not). The EEG response to a video ad, for example, is not at all similar to the response to tasting a product, so study design and the approach to analytics need to be adjusted for different stimuli.


All this is not to say that EEG should not be used or is not useful. It can be, when used correctly. But instead of focusing on the tool, we suggest focusing on the research question and then choosing the best tool to match.


So, what’s the best way to measure consumer “neuro” response?


While it’s great to use all of these scientific tools and be on the cutting edge of technology, it’s important to take a step back and think about what you are ultimately trying to accomplish. Putting the cart before the horse won’t get you to where you want to go.


There’s still value in cognitive self report.


It’s my firm belief that if you can just ask someone, then just ask them. In most cases it’s far more reliable, accurate, and cheaper. If the question is about liking, for example, you are much better off simply asking consumers whether they like the product. Consumers are actually quite reliable at knowing whether they will purchase something or whether they like it. While physiological measures such as skin temperature have been positively correlated with liking of different tastes, it is far more reliable to just ask the consumer. Why spend the extra money and time on skin temperature when a simple cognitive survey would be more accurate?


It should be noted that the relevance of the differences between liking results and physiological results is still unclear. Correlating physiological measures with self-reported measures is, however, a must for drawing any real conclusions. Neuro measures do not stand alone and cannot be interpreted without integrating them with cognitive response. Neural responses have been shown to be highly dynamic over time, with specific time courses that can vary by stimulus and type of response (de Wijk et al., 2014). For some measures, relatively fast responses show differentiation based mostly on liking, whereas somewhat later responses show differentiation based mostly on intensity.


So really, that’s not what the technology should be used for, and in fact it’s not something the technology does well. Neuroscience and psychological methodologies should instead be used to measure participant reactions beyond liking, to better understand the drivers of liking, and to diagnose the consumer experience with products and communications.


*Neuro measures should not be used in a vacuum, they are not stand alone. But instead should be used to further diagnose and supplement cognitive data.*


This is why it’s important to have a large and varied toolbox.


Contributions to the prediction of market success are not the only reason to select physiological measures. These measures offer advantages over more traditional ones because they are relatively fast (typically a matter of seconds rather than the minutes required for questionnaires), which facilitates linkage to specific phases of product-consumer interactions. In addition, they may reflect processes that consumers are not even aware of, which are therefore difficult to capture with questionnaires but may still contribute to consumer decisions. On the other hand, physiological measurements are technically more challenging than questionnaires, and applications are therefore better suited to the laboratory than to real life. Ultimately, they should never replace cognitive measures such as self-report, as the two approaches (physiological and cognitive) clearly provide very different, though often complementary, information. It is still important to ask consumers what they think: non-conscious measures are currently most powerful when combined with traditional measures.


If applied consumer neuroscience measures simply repeated results from traditional testing, they wouldn’t be worth doing, as cognitive measures are far cheaper and more reliable. Conscious and non-conscious measures provide very different answers (or should, if done correctly). So they shouldn’t be at odds with one another but should provide added, synergistic information to help clients make better business decisions. Real and thoughtful applied consumer neuroscience is about using the right combination of sensitive physiological measures from psychology and neuroscience to get at the “why” (the diagnosis) of consumer behavior, which is most useful for making better products and packaging.


There are other, more flexible and validated physiological measures than fMRI or EEG that can be used, including (but not limited to) fEMG (facial electromyography), GSR (galvanic skin response), skin temperature, HRV (heart rate variability), and facial expression coding. However, these can run into some of the same problems described earlier if the research design is not handled properly.


But being able to use multiple tools, and getting multiple measures of the same phenomenon, helps us better understand the consumer’s reaction.


Biometric tools like the ones mentioned above have the advantage of being easy and cheap to use. They have also been well established in the scientific literature as direct correlates of the psychological and emotional phenomena they are claimed to represent: fEMG correlates with emotional valence (pleasantness), GSR with arousal, and HR/HRV inversely with attention and relaxation. This set of three biometrics has been considered the “Gold Standard” of applied consumer neuroscience due to its strong validity and reliability, and it is among the most commonly used due to its flexibility.


We like to use them in our research to better understand the consumer’s emotional experience, because this set maps well onto the multi-dimensional theory of emotion, the PAD emotional state model, a psychological model developed by Albert Mehrabian and James A. Russell (1974 and after). This approach gives us the freedom to substitute different tools to assess the different dimensions of PAD (pleasure, arousal, dominance, or approach/withdrawal) AND to differentiate different experiences with statistical accuracy and detail. This helps us provide actionable results that clients can use to make real business decisions, beyond cool anecdotes and new technology.
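As a rough illustration of how biometric channels might feed the three PAD dimensions, here is a minimal sketch. The field names, normalization ranges, and quadrant labels are invented for this example; they are not HCD’s actual scoring method.

```python
from dataclasses import dataclass

@dataclass
class PADScore:
    """Pleasure-Arousal-Dominance summary for one product experience.

    Hypothetical mapping, following the channels named in the text:
    fEMG -> pleasure (valence), GSR -> arousal, HR/HRV -> the
    dominance / approach-withdrawal axis. Ranges are illustrative.
    """
    pleasure: float   # from fEMG valence, normalized to [-1, 1]
    arousal: float    # from GSR, normalized to [0, 1]
    dominance: float  # from HR/HRV-derived approach/withdrawal, [-1, 1]

def classify(score: PADScore) -> str:
    """Toy quadrant read-out of the pleasure x arousal plane."""
    if score.pleasure >= 0:
        return "engaged" if score.arousal >= 0.5 else "content"
    return "stressed" if score.arousal >= 0.5 else "bored"

# e.g. pleasant and highly arousing -> an "engaged" experience
print(classify(PADScore(pleasure=0.6, arousal=0.8, dominance=0.2)))  # engaged
```

The point of the dimensional view is exactly this substitutability: any validated measure of valence can feed the pleasure axis, and any validated measure of arousal can feed the arousal axis, without changing the model.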

(HCD’s patented multidimensional mood map)

Being flexible also means that sometimes you have to admit that neuro simply isn’t the best way to go. Additional self-report methodologies include a wide array of quantitative market research tools (MaxDiff, etc.) as well as other psychological tools, including several emotional batteries and scales (PrEmo, EsSense Profile, EmoSemio, SAM, implicit reaction response, etc.). These approaches can often bridge the gap between conscious and non-conscious response (implicit reaction) and examine consumers’ emotional reactions more reliably than physiological measures.


Given the complex nature of emotion, it is clear that there is no one magic tool to measure emotions; each methodology emphasizes a specific part of the phenomenon. In their wide-ranging review of methods developed to measure emotional states, Mauss and Robinson (2009) emphasized that each method is sensitive to specific aspects and best captures some, but not all, aspects of an emotional state. Methodologies for studying emotions are as varied as the theories that have proposed definitions of the phenomenon. We should always be flexible in using the right tool for the right question.


For more information on how HCD can help to ensure you are using the right tool for the right question, please reach out to Allison Gutkowski (Allison.Gutkowski@hcdi.net).
