Michelle Niedziela, PhD

HCD On The Road: Debunking Neurohype

For the past year or so (starting at Pangborn 2017), I’ve been sort of on tour discussing the trials and tribulations of using applied consumer neuroscience.


I’m calling it my manifesto, where I’m basically burning down my own house.

You see, my background is in behavioral neuroscience. My undergraduate degree is in psychology, my PhD is in behavioral neurogenetics, and my postdoctoral research focused on sensory perception. I’ve been passionate about research and science my whole life. For the past 10 years I’ve worked as a student mentor and have judged several high school and middle school science fairs. I speak regularly on topics of science in industry, and I write my own blog and magazine columns. When I speak to student researchers, I always focus on proper use of the scientific method and on scientific integrity.


When I entered the industry, my first job was to act as the scientific lead for external research and innovation at a large CPG company. Basically, it was my job to work with external research providers and vet their methodologies and conclusions, mostly around what is now referred to as consumer neuroscience, though back then it didn’t really have a name.


I now work on the other side of things as a research provider. And as chief methodologist and VP of research and innovation at HCD, it’s really my job to ensure we are working as hard as possible to do things correctly.


And so it has really pained me to see how this field has developed. When I present my manifesto at various market research conferences, I start with a question:

How many of you have heard about neuromarketing?

And of course, many have. In fact, there are typically many other talks on the topic at these conferences. You may spot them by key words like “System 1” or “consumer neuroscience” or “implicit” or “behavioral economics.” The name changes with whatever is currently popular or trending (from neuromarketing then to System 1 now).


And so then I ask another question: how many of you are skeptical of neuromarketing?

And to this, many hands will go up.


Why so cynical?

Well, potentially for good reason. What started out as an interesting concept has drifted off course a bit. Published in 2011, Kahneman’s book Thinking, Fast and Slow has become dogma to neuromarketers, dividing consumer decision making into two processes: System 1 and System 2. System 1 is the fast, emotional, reactive decision-making process, and System 2 is the slow, deliberate, purposeful one (*this concept isn’t all Kahneman; in fact, it can likely be traced back to Plato’s Chariot Allegory or maybe even Freud’s id, ego and superego). Or, an easier way to consider it: when car shopping, perhaps your System 1 is excited by a shiny red convertible sports car while your System 2 is more convinced by the more reliable, more appropriate compact sedan.

Far too often, neuromarketers propose that marketers and market researchers should forgo System 2 and focus on System 1. And those of us who are familiar with the history know that marketers have been targeting the consumer “id” for a long time; this is nothing new. But does it work? If we revisit the car purchasing scenario, sales of compact sedans far outnumber those of sports cars. Why? Well, as attractive as a flashy sports car may be, when we make our final purchase decisions, we ultimately rely on what is most practical. With work commutes and budgets, the compact sedan ends up being the better choice in most cases.


Once upon a time, a neuromarketer tossed out a number: 90% of all purchasing decisions are made subconsciously. It sounds great, but it’s total fiction. It seems to stem from the old myth that we use only 10% of our brains for conscious thought and that all the rest (often quoted as 95%) is non-conscious. Of course, this ignores the fact that our brains are mostly involved in maintaining bodily homeostasis (breathing, cardiovascular function, balance, hunger, thirst, etc.). The stat is often credited to Martin Lindstrom, in reference to mirror neurons, or sometimes to Dr. Gerald Zaltman (with no real agreement on who owns the number); however, no actual evidence exists proving that the statement is true. Unfortunately, it’s also impossible to prove incorrect, because you can’t prove a negative.


That’s not to say that System 1 (or non-conscious) style thinking is useless when considering consumer appeal. In fact, a lot can be learned from what activates System 1. However, a neuroscientist or psychologist would not view consumer behavior as two divided decision-making processes. Instead, they would more likely view the consumer experience and decision-making process as a continuum of thinking.

When you think about how people interact with the world and the environment around them, you will see that it isn’t a completely divided process. Certainly, there is a “non-conscious” and a “conscious” in that there are sensations of which we are not consciously aware and sensations of which we are aware. The example I like to use is the behavior of answering your cell phone. When your cell phone rings, the hair cells in your ear react to sound vibrations and send a neural signal to the brain. This happens before you are consciously aware of the sound (non-conscious). But as your brain receives the signal, it classifies the sound’s meaning and value and then deliberates on that information (as you become more conscious of this effort) until you finally decide (consciously) how you will react.


The value in measuring non-conscious reactions is that, by better understanding the non-conscious response, we may be able to influence cognitive behavior. For example, changing the tone, pattern, or length of the ring tone may influence how quickly you respond to it, and changing the color of a package may influence perceptions of a product.


Using The Right Tool For The Right Question

And this is where my manifesto becomes a bit more controversial, because it addresses the misuse of methodology and technology and plain bad science. By calling out the misuse of the most common tools, I’ve certainly managed to anger a few companies that rely on those tools in their research. But I’d like to stress here that it isn’t that the tools themselves are bad; in fact, I say that the tools do exactly what they are supposed to do. Humans, however, are more often the problem, overinterpreting results or designing studies incorrectly.


The image above is directly from my presentation. In it, I describe the methodologies on the left as more reliable and those on the right as less reliable.

On the more reliable left side, I start with the “gold standard” biometric measures (fEMG – facial electromyography, HRV – heart rate variability, GSR – galvanic skin response). These are considered gold standards mainly due to their simplicity and their direct correlation with what they measure. For example, increases in GSR are directly and positively correlated with increases in arousal. Similarly, eye tracking is a direct measure of gaze behavior, and implicit reaction measures are directly correlated with association. However, I place eye tracking and implicit reaction slightly more toward the right (less reliable) side because there is room for misinterpretation and misuse. For example, far too often eye-tracking behavior is attributed to attention when in fact it is possible to be looking at something without paying attention to it. There have also been cases of improper design in implicit reaction studies that make their results less reliable.

Further to the right of the image, you may be surprised to see EEG and fMRI. Arguably, these methodologies are more consistent with what we think of when we think of using neuroscience in research, and they have been wonderful tools in academia. However, their application in industry research is often plagued by improper research design. For example, extrapolating emotional conclusions from EEG or fMRI work is not as simple as it may seem and typically requires evoking the reactions, not passively measuring them. This step has often been skipped in industry use, making the conclusions hazy at best and totally false at worst. Further, fMRI studies are notoriously expensive and difficult to perform within the confines of consumer research.

Perhaps most controversially, I placed cheaper EEG headsets and facial coding at the far right end of the reliability spectrum. Both are cheap ways to add neuroscience to consumer research, but as one would expect, you get what you pay for. Cheaper EEG headsets mean a poorer signal and thus more difficulty in interpreting already difficult-to-interpret results. And in our opinion, facial coding is not nearly as useful as it is sold to be, as its proponents often neglect to reveal its limitations (socially driven reactions, dropout rates, interpretation issues, etc.).
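As a toy illustration of what “directly and positively correlated” means in practice for a measure like GSR, here is a minimal Python sketch. The conductance values and arousal ratings are made up for illustration; only the SciPy correlation call is real:

```python
# Illustrative only: quantifying the GSR-arousal relationship with a
# Pearson correlation. The data points below are invented for the example.
from scipy.stats import pearsonr

gsr_peak_microsiemens = [0.8, 1.1, 1.9, 2.4, 3.0, 3.6]  # skin conductance peaks
self_reported_arousal = [2, 3, 4, 5, 6, 7]               # 1-9 rating scale

r, p = pearsonr(gsr_peak_microsiemens, self_reported_arousal)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # near +1 for this toy data
```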

However, I do want to stress that it is not the fault of the measures. It is perfectly reasonable to use any one of these measures as long as you are clear on all the limitations AND use them properly.


Ultimately, there is no one tool that will cover all research, and so we must be willing to accept that certain tools are better than others at collecting certain types of information, and we must be sure we are using the right tool for the right measure.

The Scientific Method

So when I talk about design problems in research, I’m talking mostly about people not following the scientific method. Most of us learned this process in elementary school, but I’ve updated it here for industry research purposes:

  1. Make an Observation – this would be the scope of the research.

  2. Develop Research Questions – this step is most often skipped, unfortunately. We find it is best to identify current problems in Step 1 and revisit them as research questions in Step 2.

  3. Formulate Testable Hypotheses – this step is also frequently skipped, but it is very important because it helps drive which methodological approach will be most appropriate.

  4. Conduct an Experiment – specifically, design an experiment to test the hypotheses from Step 3 using the most appropriate methodologies and minimizing confounds.

  5. Analysis – use the appropriate statistical methods to show real differences and effects (a minimal sketch of Steps 3–6 follows this list).

  6. Conclusion – interpret the data as is, based on the limitations of the method, and avoid over-reaching claims.
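To make Steps 3 through 6 concrete, here is a minimal, hypothetical Python sketch comparing a prototype to a benchmark on liking. The sample sizes, simulated ratings, and significance threshold are illustrative assumptions, not data or standards from any real study:

```python
# Hypothetical walk-through of Steps 3-6 of the scientific method.
import numpy as np
from scipy import stats

# Step 3 - testable hypothesis: "the prototype scores higher on liking
# (9-point hedonic scale) than the current benchmark product."

# Step 4 - experiment: two independent consumer panels each rate one
# product (a between-subjects design, minimizing order-effect confounds).
rng = np.random.default_rng(seed=42)
benchmark = rng.normal(loc=6.1, scale=1.2, size=120)  # stand-in ratings
prototype = rng.normal(loc=6.5, scale=1.2, size=120)  # stand-in ratings

# Step 5 - analysis: a two-sample t-test plus an effect size, so we can
# report not just "significant" but also "how big."
t_stat, p_value = stats.ttest_ind(prototype, benchmark)
pooled_sd = np.sqrt((prototype.var(ddof=1) + benchmark.var(ddof=1)) / 2)
cohens_d = (prototype.mean() - benchmark.mean()) / pooled_sd

# Step 6 - conclusion: interpret within the limits of the method.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
if p_value < 0.05:
    print("Prototype outperforms the benchmark on liking in this sample.")
else:
    print("No reliable difference detected; do not over-claim.")
```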

It sounds simple, and yet it appears to be rarely followed when you look at the case studies being presented. Far too often, there doesn’t appear to be any research question beyond wanting to add a neuroscience technique. Which is, of course, fun, but…

While it’s great to use all of these scientific tools and be on the cutting edge of technology, it’s important to take a step back and think about what you are ultimately trying to accomplish. It’s my firm belief that if you can just ask someone, then just ask them. If the question is about liking, for example, you are much better off simply asking consumers if they like the product. Consumers are actually quite reliable at knowing whether they will purchase something or whether they like something. So really, that’s not what the technology should be used for, and it is, in fact, not great at doing so. While skin temperature has been positively correlated with liking of different tastes, it is far more reliable to just ask the consumer. Neuroscience and psychological methodologies should instead be used for measuring things beyond liking, to better understand the drivers of liking.


So while some research providers claim that you can’t trust consumers to tell you what they really think, I don’t agree that this is necessarily true, although it makes for a very convenient story. The truth is that consumers can tell you what they think if you ask them correctly, and neuroscience really isn’t a great tool for lie detection (except perhaps pupil dilation, which has some reliable correlation with lying).


So what can you do?

I suggest following a few rules/guides to help decide how to use neuroscience and how to choose a potential neuroscience provider:

  1. Start with the research question. While it is often attractive to passively measure consumers in a naturalistic environment, and there certainly can be exploratory ideas uncovered in observational research, to get the best and most actionable results from applied neuroscience you should really consider what the research question is. Do you need to compare prototypes to a benchmark product? Do you need to show more engagement with a particular communication? Starting with the research question will help guide the scope of the study and the choice of method. But often, clients don’t have a clear research question beyond pressure to implement implicit or System 1 research. And so, to help our clients, we suggest a few questions to guide research question development:

  • What are your current research pain points? Are there areas of your current research that leave knowledge gaps? Are there areas that are unclear or produce incomplete results?

  • Who are we studying? Current brand-loyal users? General populations?

  • What are the action standards? To approve this prototype, in what ways should it be different from the current product? Does it need to perform better in some aspect than a competitor benchmark?

  2. Always use the right tool for the right question. Once you have the right question, it can be a lot easier to choose which research tool will best provide an answer. This is a much more productive and cost-efficient approach than starting with a tool and looking for somewhere to apply it. For example, if your research question is about whether a new fragrance helps suggest that the product is more “spiritual,” facial coding will not be able to help you, but implicit reaction testing may.

This is why it is important that your research provider be “methodologically agnostic.” Or, as I often say, if you go to a widget salesman, he is going to sell you a widget and not something else. If the research provider is a “one-trick pony” with only one methodology to offer, he likely won’t be telling you about the limitations of that method.

So how can you identify a good research provider? I suggest asking a few targeted questions about the limitations of the proposed methodologies. If they can’t tell you about any limitations, or they suggest that there is one solution to fit all needs, then they are likely not a great research partner.

Further, if the research provider does not insist that proper research design be followed, this may indicate they aren’t being entirely truthful. A major problem in using neuroscientific and psychological methodologies is that they do require a level of experimental control to reduce noise in the test, to make sure you are measuring exactly what you say you are measuring. If there is no effort to establish this, then again, they are likely not a great research partner.

  3. Build a story with multiple research points. Neuromarketers often try to say that consumers’ cognitive responses can’t be trusted and that neuro measures are somehow more truthful. However, we assert that neuro measures should never be performed or relied on alone, and that they should always act as a supplement to cognitive research. The reason is that neuro measures are not a replacement for cognitive or more traditional measures; they don’t answer the same questions. So instead of trying to prove one of them better or wrong, we suggest using both, to view the consumer response with “both eyes open.” By understanding both the cognitive and non-cognitive consumer experience, we believe we can help our clients better communicate with consumers and design better products.

We suggest that instead of trying to make neuroscience data stand alone, you supplement current research with additional insights from neuroscience and psychology. By integrating the data (either in the story or through statistical modeling, as sketched below), it is possible to reach better, more actionable, and better-informed conclusions and interpretations.
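As one hedged illustration of what “integrating the data through statistical modeling” might look like, here is a minimal sketch that regresses stated liking on one explicit measure and one implicit measure. The variable names, coefficients, and simulated data are hypothetical, not an actual HCD model:

```python
# Illustrative integration of explicit (survey) and implicit measures in a
# single regression model; all data here are simulated for the example.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=7)
n = 200
explicit_freshness = rng.normal(5.0, 1.0, n)  # stated attribute rating
implicit_freshness = rng.normal(0.0, 1.0, n)  # implicit-association score
liking = (0.6 * explicit_freshness
          + 0.3 * implicit_freshness
          + rng.normal(0.0, 1.0, n))          # simulated overall liking

# Fit liking ~ explicit + implicit; each coefficient shows what that
# measure adds to the story -- "both eyes open."
X = sm.add_constant(np.column_stack([explicit_freshness, implicit_freshness]))
model = sm.OLS(liking, X).fit()
print(model.params)    # intercept plus the two fitted coefficients
print(model.rsquared)  # variance in liking explained by both together
```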

Final Thoughts

In giving this talk during the past couple of years, I’ve been overwhelmed with the positive responses I’ve received from people on both the end client side and the research provider side. End clients have often said they were disappointed with results they’ve gotten from neuromarketing studies and were glad that it wasn’t because the science was bad, just misused. Research providers have been glad to hear that others in the industry saw the problems and were speaking out about them.


At one conference, I witnessed a research provider being called out. An audience member asked him how he had validated his methodology, and his shocking response was, “That’s not my job.”


It is the job of the research provider to use reliable, validated methods and technologies. The client-provider relationship is one of trust, and so we must do our very best to nurture that trust with full disclosure regarding the limitations of these tools.


I’m happy to report that since giving these talks, I’ve noticed more providers publishing blog posts that speak critically about their own methodologies and about the field in general. While it is important to always push the limits and create new and innovative applications, we must, most importantly, stay scientifically vigilant and maintain scientific integrity.


That being said, I’m certainly open to discussion about any methodology. While I know a lot about some specific things, I certainly don’t know everything. So I’m more than happy to have more conversations about methodologies and their uses and abuses in the research field.


Remember, the first rule of the Dunning-Kruger club is that you don’t know you’re a member of the Dunning-Kruger club.
