All posts by Ha Nguyen

Reflection on my internship experience at HCD

With my internship coming to an end this week, I cannot help but wonder how the past two months flew by so fast. It feels like just yesterday that I met Michelle Niedziela, VP of Research and Innovation at HCD Research, during my marketing graduate course at UPenn last fall. I remember being fascinated by Dr. Niedziela’s talk and the work that she does at HCD. I reached out that same day asking if I would be able to intern at HCD during the summer. Dr. Niedziela got back to me the next day, offering me the internship. I felt a rush of emotions, from anxious to excited, which made the seven months before the start of my internship seem like years. I was certain the summer with HCD had a lot in store for me.

The work that I completed during my internship varied from day to day, making the experience both interesting and exciting. During the first week, I attended HCD’s NeuroU conference, where I had the chance to listen to and interact with well-versed guest speakers about the future of neuroscience and its integration into modern consumer research methods. The highlight of the conference was my realization of how a similar concept can be used so differently in academia and in industry. Check out my blog on the HCD website, where I reflected on my NeuroU conference experience!

Speaking of blog posts, during my short time at HCD, I had the opportunity to write four blogs on different topics that I personally found intriguing. I also helped review articles and papers for different journals. However, my favorite project was the internal study using implicit association testing that I was responsible for designing and implementing. I was in charge of this project from brainstorming the research topic all the way to writing the survey questions, and I could not be more excited that the survey is being programmed and should be ready to run soon. These projects and work products would not have been possible without the support and trust of my supervisor, Dr. Niedziela, and everyone else at HCD. I cannot thank the HCD team enough for giving me so many opportunities and always ensuring that the tasks and projects matched my personal and professional interests. I honestly did not expect to accomplish so much at an internship in such a short amount of time.

Reflecting on my experience at HCD and what I was able to complete this summer, I think this opportunity could benefit people who:

  • Have an interest in research. I love doing research, but I am not interested in a PhD-style research environment; I cannot see myself committing to a single research project for such an extended period of time. This position at HCD was therefore perfect for me, because I was able to work on a variety of research topics depending on the project and the task. This summer, I researched topics such as the lodging industry, the implicit association test, the effectiveness of video advertisements, and the use of virtual reality in market research, just to name a few.
  • Have a background in neuroscience/psychology and want industry exposure/experience. As someone coming straight from undergraduate into graduate school, I was excited for this internship to provide some industry exposure and to help me see if this industry is the right fit for me. I would encourage anyone with a strictly academic background, especially in neuroscience or psychology, to try a similar opportunity to see how to apply their expertise in the real world.
  • Are in market research and interested in the application of neuroscience and other cutting-edge technologies. With today’s technological advances, more and more market research companies are trying to incorporate neuroscience and technology into their traditional research. This opportunity would be a good fit for those who are interested in learning more about the ways to do so.
  • Want to work in a small-business environment. I remember that during one of my first days at HCD, I met with Glenn Kessler, CEO of HCD Research. One thing he said has stuck with me to this day: “The only reason small businesses exist is for innovation.” It reminded me of why I wanted to pursue behavioral science in the first place. I will take this piece of advice with me wherever I go next, to remind myself to always strive for creativity and innovation.

Ten years from today, I probably will not remember everything I did at HCD, but I will surely remember that this internship was where I confirmed my interest in pursuing research, as well as my passion for applying science and technology in meaningful ways. I am so thankful to begin my professional career with the foundational knowledge I gained from interning at HCD Research. I am now confident in my abilities as a researcher, and I will never stop seeking out innovation in my future work.

Reliability and Validity of Implicit Association Test

I remember an Experimental Methods course from my undergraduate psychology major in which my professor spent hours discussing reliability and validity in experimental research. At the time, I did not really understand why my professor was so “obsessed” with validity and reliability issues. Those days feel like decades ago, but my professor’s words have become more and more relevant as I get involved in researching and designing experiments. Unfortunately, the issues of reliability and validity are often neglected in market research. This blog will discuss the importance of reliability and validity in research in general, dive deeper into how these issues relate to implicit association testing (IAT), and explore how market researchers should handle them.

What validity and reliability mean, and why bother?

Reliability refers to the consistency of a measure. A reliable measure has test-retest reliability, meaning the scores should be roughly the same if the same group of people is tested at different times. A reliable measure should also have internal consistency, the consistency of people’s responses across the items on a multiple-item measure: all the items should reflect the same underlying construct, so scores on these items should correlate with each other. Finally, a reliable measure should have inter-rater reliability, meaning that different individuals assessing the same stimuli should score them similarly (Drost, 2011).
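To make the first two of these concrete, here is a minimal sketch in Python of how they are commonly quantified: test-retest reliability as the Pearson correlation between two administrations of a measure, and internal consistency as Cronbach’s alpha. The function names and data below are hypothetical, for illustration only.

```python
import numpy as np

def test_retest_reliability(scores_t1, scores_t2):
    """Pearson correlation between two administrations of the same measure."""
    return np.corrcoef(scores_t1, scores_t2)[0, 1]

def cronbach_alpha(item_scores):
    """Internal consistency for an (n_respondents x n_items) score matrix."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: five respondents measured twice on the same scale,
# and their answers to a three-item measure of a single construct.
time1 = [4.0, 3.5, 5.0, 2.0, 4.5]
time2 = [4.2, 3.0, 4.8, 2.5, 4.4]
three_items = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 1, 2], [4, 4, 5]]

print(f"test-retest r    = {test_retest_reliability(time1, time2):.2f}")
print(f"Cronbach's alpha = {cronbach_alpha(three_items):.2f}")
```

Both statistics range toward 1.0 for a highly consistent measure, which is why the thresholds discussed later in this blog are expressed on that scale.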

Researchers should also be concerned with validity, the ability of a study to measure what it intends to measure. There are several types of validity, but market researchers should pay particular attention to content validity, construct validity, and predictive validity. Content validity is the extent to which a measure covers the content relevant to the research objectives. A valid measure should also have construct validity, the degree to which an assessment corresponds to other variables as predicted by some rationale or theory. Despite being important and central in academic research, construct validity is often not addressed in market research. Predictive validity, on the other hand, tends to be more relevant to market researchers, as it assesses how well a measurement can predict future actions or behaviors (Drost, 2011).

Validity and reliability are not always aligned. Reliability is necessary, but not sufficient, to establish validity. It is possible to get high reliability but low validity (for example, when the wrong questions are asked repeatedly). It is also possible to have a valid but unreliable measure, such as when results show large variation. Therefore, it is crucial to make sure your research is both reliable and valid. If a measurement is not valid, it is meaningless to a study because the results cannot be used to answer the research question. Similarly, if results from a study are not reliable, market researchers should not use them for any decision-making.

Implicit Association Test

The IAT is a popular measure in social psychology for assessing the relative strength of association between pairs of concepts (Greenwald, McGhee, & Schwartz, 1998). The theory behind this form of testing is that making a response should be easier when closely related items share the same response key. The IAT is also one of the fastest-growing approaches in market research, valued for its objectivity and cost effectiveness in capturing consumers’ immediate, gut-instinct, subconscious responses to brands, new product concepts, and other marketing products (Calvert, 2015). The IAT was developed in response to reports of low validity for explicit (self-report) measures, as many people are unwilling to report their true thoughts or feelings toward a stimulus. However, despite the IAT’s popularity both in academia and in market research, its reliability and validity still raise some concerns that are worth discussing.
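For readers who have not seen IAT data before, responses are commonly reduced to a D-score: the difference in mean reaction time between the incongruent and congruent blocks, divided by the pooled standard deviation of those trials. The Python below is a simplified, hedged sketch of that idea, not the full published scoring algorithm (real scoring also penalizes error trials and computes the score per block pair); all data are hypothetical.

```python
import numpy as np

def iat_d_score(congruent_rts, incongruent_rts, max_rt=10_000):
    """Simplified IAT D-score: difference in mean latency between the
    incongruent and congruent blocks, divided by the pooled standard
    deviation of all retained trials. Trials slower than max_rt (ms)
    are dropped, following common practice."""
    con = np.array([rt for rt in congruent_rts if rt < max_rt], dtype=float)
    inc = np.array([rt for rt in incongruent_rts if rt < max_rt], dtype=float)
    pooled_sd = np.concatenate([con, inc]).std(ddof=1)
    return (inc.mean() - con.mean()) / pooled_sd

# Hypothetical latencies in milliseconds: responses are faster when the
# brand shares a response key with "premium" words (the congruent block).
congruent = [620, 580, 700, 640, 610, 660]
incongruent = [790, 820, 760, 900, 810, 780]

d = iat_d_score(congruent, incongruent)
print(f"D = {d:.2f}")  # positive D => stronger brand-premium association
```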

In psychology, a measure is considered reliable if it has a test-retest reliability of at least 0.7, with values over 0.8 preferred. Studies have found that race-bias IATs have a test-retest reliability of only 0.44, while the IAT overall sits around 0.5. The second major concern with the IAT is its validity. Validity is best established by showing that results from the test can accurately predict behaviors in real life; however, from 2009 to 2015, four separate meta-analyses all suggested that the IAT is a weak predictor of discriminatory behavior (Goldhill, 2017).

While these numbers might seem alarming, it is important to note that they mostly come from IAT studies of racial implicit bias. As mentioned, validity and reliability are often not addressed in market research, so the literature on these IAT concerns in a market research context is scarce. It would perhaps not be fair to use the alarming statistics from race IAT studies, which tackle a large and complex societal issue, to infer that the IAT should not be used in market research at all. In fact, studies in other contexts have shown that the IAT is a better predictor of subsequent behavior than explicit responses, on topics such as consumer choice, risk-taking behavior, and stress response (Calvert, 2015).

Some Recommendations

Despite the concerns about its validity and reliability, the IAT can still be a powerful tool for gaining insights into consumers’ implicit attitudes if it is designed and applied properly. Below are items market researchers should be aware of and take into consideration when thinking about the IAT for their research.

  • The IAT only measures the relative strength of association. For example, it examines the relative favorableness of two concepts; results can only tell us whether someone prefers A over B, not whether they dislike B or are neutral toward it. It is important for researchers to be aware of this distinction: if the research objective is to study attitudes toward a single object, the IAT may not be the ideal method. Different approaches have been suggested to work around this limitation, but they still require more work before they can be applied widely (Brunel, Tietje, & Greenwald, 2004).
  • The use of reaction time makes the IAT vulnerable. The IAT uses reaction time to measure the strength of association. While this measurement is convenient, reaction time as a proxy for association strength makes the test vulnerable when its validity and reliability are assessed (Rezaei, 2011), because even “a tenth of a second can have a sequential effect on a person’s score” (Blanton & Jaccard, 2008). Market researchers should keep this in mind when analyzing their IAT results, to avoid jumping to the conclusion that the test is unreliable.
  • Familiarity with the IAT can improve the reliability of the test. As suggested by Rezaei (2011), it may be beneficial for market researchers to include practice trials before the actual study to improve reliability.
  • Use caution in stimulus selection. Stimuli for the IAT should be reasonably familiar and should fall unambiguously into one of the two categories (Brunel, Tietje, & Greenwald, 2004). Researchers should also watch the length of the words or expressions they include (Neuromarketing Science & Business Association). Again, because the IAT uses reaction time as its measure of association, it is critical to use words of similar lengths, preferably single words, to protect the validity of the test against individual differences in reading and comprehension time.
  • Do results from your IAT study correlate with explicit measures? Although the IAT has been shown to be a better predictor of behavior than explicit measures, it can still be helpful to include explicit measures in your study alongside the implicit component. Comparing the two achieves two objectives. First, it can be used to check the validity and reliability of the IAT: in theory, explicit measures and the IAT should produce results that are distinct yet still correlated to some degree, because they are essentially measuring the same construct. Second, divergence between the implicit and explicit results can itself be informative, with the two measures complementing each other in predicting consumers’ behavior (Maison et al., 2004). A rough sketch of this comparison follows this list.
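As promised, here is a hedged sketch of that last recommendation, assuming you already have per-respondent D-scores and explicit ratings of the same brand. All names and numbers are hypothetical; this is a first-pass convergence check, not a full validation procedure.

```python
import numpy as np

# Hypothetical per-respondent results: IAT D-scores for a brand and
# explicit ratings of the same brand on a 1-7 liking scale.
d_scores = np.array([0.65, 0.10, 0.42, -0.15, 0.80, 0.33, 0.05, 0.55])
explicit = np.array([6.0, 4.0, 5.0, 3.0, 7.0, 5.0, 4.0, 6.0])

# Convergence check: two measures of the same construct should correlate
# to some degree, even though they rarely match exactly.
r = np.corrcoef(d_scores, explicit)[0, 1]
print(f"implicit-explicit correlation: r = {r:.2f}")

# Respondents whose standardized implicit and explicit responses diverge
# most can be examined separately -- the divergence itself is informative.
z_implicit = (d_scores - d_scores.mean()) / d_scores.std(ddof=1)
z_explicit = (explicit - explicit.mean()) / explicit.std(ddof=1)
gap = np.abs(z_implicit - z_explicit)
print("largest implicit/explicit gaps:", np.argsort(gap)[-2:])
```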

While this list could go on, this blog covers perhaps the most important aspects of the IAT to address. The IAT can be a great tool for market researchers to understand their consumers at a deeper, implicit level, in addition to explicit measures. It provides the second layer of a full picture of your consumers’ thoughts, beliefs, and behaviors.

Citations:

Blanton, H., & Jaccard, J. (2008). Unconscious racism: A concept in pursuit of a measure. Annual Review of Sociology, 34, 277–297.

Brunel, F. F., Tietje, B. C., & Greenwald, A. G. (2004). Is the Implicit Association Test a valid and valuable measure of implicit consumer social cognition? Journal of Consumer Psychology, 14(4), 385-404.

Calvert, G. (2015, September 30). Everything you need to know about Implicit Reaction Time (IRTs). Retrieved from http://gemmacalvert.com/everything-you-need-to-know-about-implicit-reaction-time/

Drost, E. A. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38(1), 105.

Goldhill, O. (2017, December 3). The world is relying on a flawed psychological test to fight racism. Retrieved from https://qz.com/1144504/the-world-is-relying-on-a-flawed-psychological-test-to-fight-racism/ 

Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. (1998). Measuring individual differences in implicit cognition: the implicit association test. Journal of Personality and Social Psychology, 74(6), 1464.

Lane, K. A., Banaji, M. R., Nosek, B. A., & Greenwald, A. G. (2007). Understanding and using the Implicit Association Test: IV. In Implicit measures of attitudes (pp. 59-102).

Maison, D., Greenwald, A. G., & Bruin, R. (2004). Predictive validity of the Implicit Association Test in studies of brands, consumer attitudes, and behavior. Journal of Consumer Psychology, 14, 405–415.

Neuromarketing Science & Business Association. Implicit measures: what is it? How to use it?. Retrieved from https://www.nmsba.com/buying-neuromarketing/neuromarketing-techniques/implicit-measures-what-is-it-how-to-use-it

Rezaei, A. R. (2011). Validity and reliability of the IAT: Measuring gender and ethnic stereotypes. Computers in Human Behavior, 27(5), 1937-1941.

Brand Disharmony

Last December, we posted a discussion of brand harmony on our blog. For those who aren’t familiar, brand harmony is the idea of ensuring that all the experiences consumers have with a brand blend to tell a complete and harmonious story. Brand harmony shows through different aspects of a brand, from consistent names, logos, and visual identity to messaging. We have shown how matching brand perception with product perception can increase consumer satisfaction and brand equity, and we have discussed how we at HCD help our clients achieve brand harmony. To continue this important dialogue, in this blog we will discuss some branding mistakes that companies have made in the past, how those mistakes affected them, and what we can learn from them moving forward.

The story of Coke, New Coke, and Pepsi is probably the most classic branding failure in history. Coca-Cola is an instantly iconic and recognizable brand with its signature red color. However, back in 1985, Coke almost destroyed its brand image by introducing New Coke (Smartt, The Newsletter Pro). It all stemmed from a challenge by Pepsi, Coca-Cola’s biggest rival, showing that customers preferred the sweeter taste of Pepsi over Coke. In response, Coca-Cola developed a new formula that beat Pepsi in taste tests, named it New Coke, and completely got rid of the traditional Coke. Customers were obviously upset with this move, and sales plummeted. Everything, from the signature taste of the beverage to the shape of the can, was gone, and the new Coke just did not match how customers perceived Coca-Cola. Eventually, Coke had to bring back its old formula and rename it Coca-Cola Classic. This painful story should be a reminder for any brand attempting drastic adjustments – keep what is signature to your brand and how your customers remember you. That is what creates a sense of trust in, and loyalty to, the brand.

Similarly, Gap failed in its logo redesign by straying too far from what made it distinctive in the first place (Cook, 2017). In 2010, Gap wanted a radical design shift to move the brand’s image from “classic, American design” to “modern, sexy, cool.” Unfortunately, customers did not take the change positively, and less than a week later, Gap had to go back to its original logo. One reason was that customers did not associate the new logo and its new image with the Gap they knew and loved. More importantly, the new logo came in isolation, with no actual change to the products offered at Gap; the clothing was not modern, sexy, or cool as advertised. This discrepancy between brand products and brand message was enough for Gap customers to push back against the shift. Again, the lesson here is to not let big, abrupt changes alienate your loyal customers. They know and love you for your harmonious brand image and products – don’t let a new product or feature destroy the established perception customers have of you.

Brand extension is a marketing strategy in which a brand markets a new product (often in a new product category) under the name of a well-established product from the same brand. It sounds like a great way to increase sales, but it doesn’t always go well. In 1982, Colgate launched Kitchen Entrees, a line of frozen food products, trying to capture the growing market for ready-to-eat meals (Rosenbaum, 2017). Colgate is a well-known, top-selling toothpaste brand – that is what its customers know it for. Before this food extension, Colgate had succeeded in selling dental rinse, an extension within dental care, alongside its famous toothpaste. The food extension, however, just didn’t make sense. Who would easily connect frozen meals with toothpaste? The product failed miserably. When you build and position your brand around dental care products, you cannot introduce a new food product line and expect your customers to fall in love with it; there is no connection between your brand image and the new product.

Another (funny) example of a product that did not align with a brand image is Disney’s release of Hannah Montana-branded cherries in 2009 (Cook, 2017). Even the biggest fans of the show had to question this move. There was no connection between cherries and Hannah Montana or the Disney brand. When you attach your brand name to something, it is important that it reflects your brand’s values and creates a consistent image for your customers.

I want to end this blog on a high note with an example of how brand harmony can indeed create a positive experience for customers. In 2002, when Shira Goodman took over as Staples CMO, she decided to reposition the brand, given that the existing approach of offering the widest range at the lowest prices was no longer working (Ritson, 2015). After months of research, in 2003, Staples moved its focus from price and range to ease – it wanted to make buying office supplies easy. It created a new logo with “that was easy” underneath and an Easy Button as its symbol, and launched a series of ads in which Staples rescued customers from the complicated array of office products. Most importantly, Staples took the initiative to redesign its stores. It realized that all the most popular products sat at the back of the store, which was standard retail practice (remember, when you run into a gas station to grab a quick drink while waiting for your tank to fill, the refrigerators are always at the back of the store – this encourages you to grab a candy bar or a bag of chips as well). Staples understood that this was not easy for its customers and was the complete opposite of how it was trying to reposition itself. As a result, it moved all the best-selling products to the front of the store to make the Staples experience easier, as promised. This was done at the cost of ancillary sales, but it has helped Staples remain a market leader in office products to this day.

Neither the new logo nor the ads would have mattered if Staples stores had stayed the same. Customers would quickly have noticed the disconnect between what the brand promised and what they experienced, resulting in frustration for customers and backfiring on the company. Brand harmony is a long process to achieve and maintain, and it may require some up-front costs, but we would argue that it is always worth it in the end.

HCD always strives to use the right tool for the research question to provide our clients with the best answer possible. For more information on how HCD can help you ensure your brand harmonizes with your products, please reach out to Allison Gutkowski (allison.gutkowski@hcdi.net).

Citations:

Cook, K. (2017, March 20). 6 branding mistakes undermining your company’s image. Retrieved from https://blog.hubspot.com/marketing/branding-mistakes

Ritson, M. (2015, February 12). The best brands are disruptively consistent. Retrieved from https://www.brandingstrategyinsider.com/2015/02/the-best-brands-are-disruptively-consistent.html#.XSTNfOhKjct 

Rosenbaum, A. (2017, February 17). What were they thinking #6? Colgate kitchen entrees. Retrieved from http://that401ksite.com/2017/02/17/what-were-they-thinking-6-colgate-kitchen-entrees/

Smartt, D. (n.d.). 3 embarrassing branding mistakes. Retrieved from https://www.thenewsletterpro.com/embarrassing-branding-mistakes/

Reflection on NeuroU 2019

Last week, I had the opportunity to attend the NeuroU conference, organized by HCD Research as a forum to discuss the future of neuroscience and its integration into modern consumer research methods. NeuroU 2019 was the first industry conference I have ever attended, so in this blog I want to reflect on my experience as someone from a purely academic background.

The sessions and panels at NeuroU covered a wide range of topics, from the neuro tools available to market research, System 3 thinking, brand harmony, and implicit testing to the use of data science in market research. Hearing and learning from speakers who are experts in their fields was fascinating, but what struck me most overall was the realization of how a similar concept can be used so differently in academia and in industry. I hope this blog post will offer some insights for those who, like me, are moving from academia to industry.

The first topic I am particularly interested in is implicit association. The implicit association test (IAT) is a popular measure in social psychology for detecting the strength of a person’s automatic associations between concepts (Greenwald, McGhee, & Schwartz, 1998). The idea is that making a response should be easier when closely related items share the same response key. With a background in psychology, I have studied and done research on implicit association over the past few years, mostly in the context of racial and gender biases. Little did I know that the IAT is also one of the fastest-growing approaches in market research, valued for its objectivity and cost effectiveness in capturing consumers’ immediate, gut-instinct, subconscious responses to brands, new product concepts, and other marketing products (Calvert, 2015).

Allison Gutkowski, Director of Communication and Sensory Application at HCD Research, gave a great talk on using the IAT together with other physiological measures to study brand harmony. In a case study on a fragrance product, researchers used the IAT to have consumers react to the language that appeared on the product package, paired with the brand name as well as its competitors. The results were striking: consumers did not actually associate the brand and its product with the language on the package. In other words, there was no harmony between what the company thought of its product and how customers perceived it. I found it fascinating how the same IAT concept can be used in consumer research to help marketers understand their consumers at a deeper level, and hopefully from there predict their purchasing behaviors more accurately.

However, something I would like to hear more about is the IAT’s reliability and validity in consumer research. In recent years, there has been ongoing concern in academia about the IAT’s reliability (the extent to which a test produces roughly similar results when repeated) and validity (how well a test measures what it aims to measure). I believe these issues are also crucial in market research: if the IAT cannot meaningfully and accurately predict behaviors, the results of the test would be irrelevant. For example, a study might suggest an incongruence between how consumers perceive a product and how the company markets it, but what if consumers still purchase the product anyway? How would we interpret the results, and what would we do then? I would like to learn more about market researchers’ views on reliability and validity and how they handle these issues in their research.

Another talk I found interesting was the one Dr. Morrin and her students gave on olfactory symbolism. Their work is a great example of how academic research can inform and impact the consumer decision-making process. Building on the bouba-kiki effect, the association between speech sounds and the visual shape of objects, Dr. Morrin’s research focuses on a wide variety of cross-modal associations for a number of different scents. Interestingly, her research suggests that cross-modally harmonizing a product’s scent with its package shape can increase how much consumers are willing to pay for that product. This ties back nicely to the idea of brand harmony: providing your customers with a consistent message across all aspects of your products. Results from research like this can give companies insight into how to design and market their products. Through my internship with HCD Research this summer, I hope to see more academic research with real-world applications and industry impact like this.

The last piece I want to touch on is the panel discussion in which researchers shared their experiences adding new research approaches at their organizations. As mentioned before, my intention in this blog post is to point out how academic research and market research differ, and the panel discussion fits nicely here because it addressed something academic researchers rarely have to deal with. The speakers discussed the hurdles they faced as they tried to apply neuroscience or bring new technology into their organizations. The most common challenges included getting enough attention from management, explaining the need for the new approach, dealing with resistance to change from traditional methods, and managing expectations about the new technology. This is an important conversation to have as more and more companies try to incorporate neuroscience and other cutting-edge technologies into their research. Researchers should be aware of these factors before approaching their management about using new technology, and management teams should make sure they understand a technology fully before deciding whether it is an appropriate method for their practice. It is also crucial for researchers coming from academia to understand these challenges as they collaborate with companies on projects involving neuroscience and new technologies.

Overall, I am glad I had the opportunity to attend NeuroU and to learn so much about consumer research from well-versed guest speakers and attendees. I hope to apply what I learned to my future work with HCD Research this summer and gain some hands-on experience in conducting consumer research. I also look forward to exploring ways to incorporate and leverage behavioral science in market research using neuroscience.