Tag: choice-based models
As marketers know, pricing is a key component of the marketing mix for any product. It also serves as the most tactical component a marketer has available in-market. The most effective applications of pricing strategy depend on the price sensitivities and elasticities of consumers in the market.
A large portion of the research in this area is based on studies of in-market scanner panel data. Using all available sales data for a product category, models are fit to compute elasticities for the prices observed during the period evaluated. Results from this type of research are robust and reliable.
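To make the mechanics concrete, here is a minimal sketch of the log-log regression behind such elasticity estimates. The data here are synthetic stand-ins for real scanner records, and the constant-elasticity demand form is an illustrative assumption, not a claim about any particular study's model:

```python
import numpy as np

# Synthetic stand-in for scanner data: weekly prices and unit sales
# drawn from a constant-elasticity demand curve Q = a * P^b with b = -2.0.
rng = np.random.default_rng(42)
price = rng.uniform(1.50, 3.50, size=104)                  # two years of weekly prices
qty = 500 * price ** -2.0 * rng.lognormal(0.0, 0.05, 104)  # noisy unit sales

# In a log-log regression, the slope on log(price) is the price elasticity:
# log Q = log a + b * log P, so the fit should recover b near -2.
elasticity, intercept = np.polyfit(np.log(price), np.log(qty), 1)
print(round(elasticity, 2))
```

The key limitation the next paragraph raises is visible here: the fit is only informative over the price range actually observed (1.50 to 3.50 in this sketch); extrapolating the elasticity to prices never seen in-market is an act of faith.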
The shortcomings of this method set in when marketers attempt to reach beyond conditions that have previously existed in the market. For example, Brand Z wants to take price to a level not yet seen in-market, or Brand X was never observed at Price A while Brand Y was at Price B. Compounding matters further, price is often not evaluated in isolation: a package may change the volume delivered per SKU, or a new creative package with altered messaging may be introduced.
Continue May 17, 2013
I was at the store a few days ago, staring at a new flavor enhancer for bottled water. Five flavors in all and three I could do without. But out of the other two, I could not decide which to choose. So I went home with…both.
Earth shattering? No. People buy multiple varieties or brands within a category all the time. The problem is that if this experience had been a typical discrete choice exercise conducted for marketing research, I would have had to choose just one. The researcher would never have known that there was not merely a single product meeting my minimum requirements, but two products compelling enough to alter my expected purchase quantity.
This post is not about blasting the pick-one exercises available in choice modeling today. In fact, choice modeling remains a tremendous improvement over the ratings-scale-based conjoints and TURF batteries that preceded it. But the march is always onward toward better, more realistic, and more reliable research techniques.
Continue July 28, 2011
Last time we talked about pick-one and best/worst response patterns in scaling exercises for idea sorting. When we do an idea sort within a choice context, we are essentially conducting a single-feature choice-based conjoint. So the question becomes: can we take anything we have learned about best/worst vs. pick-one choice tasks and apply it to multi-feature discrete choice designs?
Certainly. As we learned from the discussion of idea-sorting exercises, the key question revolves around the importance of the middle- to lower-scaling feature benefits. If they are important to the objectives of the study, then a best/worst discrete choice design is worth considering.
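A minimal sketch can show what the "worst" picks buy you. The responses below are hypothetical, and best-minus-worst counting is only a quick approximation to the utilities a full choice model would estimate, but it illustrates the point: the worst picks are exactly what separates the lower-scaling items that pick-one tasks leave undifferentiated.

```python
from collections import Counter

# Hypothetical best/worst responses: each tuple is (best_item, worst_item)
# chosen from one set of four items A-D.
responses = [("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("A", "C")]

# Counting-based scoring: "best" picks minus "worst" picks per item.
best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)
items = sorted(set(best) | set(worst))
scores = {i: best[i] - worst[i] for i in items}
print(scores)  # -> {'A': 3, 'B': 2, 'C': -3, 'D': -2}
```

With pick-one data alone, C and D would both simply have zero wins; the worst picks are what rank the bottom of the list.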
Continue April 19, 2011
Last week, Marc wrote a post that gave a great overview of choice-based models. As choice models have spread through marketing research over the past several years, they have taken several formats: some choice tasks are simple "Pick One" experiments, some are "best/worst" or "MaxDiff" tasks, some are simple rankings, and some are more complex chip or wallet spend allocations.
Likewise, choice models have spread from traditional conjoint applications to list sorts as well. Flavors, benefits, features, and attitudes are all now frequently scaled with choice-inspired questionnaire designs. In a list sort, the list is randomized and then broken into sets of four or five. Respondents are asked to pick the idea they like most (a Pick One task), or to identify both the most and least liked (a MaxDiff design). The ideas are then re-randomized and the task repeated.
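The list-sort set construction just described can be sketched in a few lines. This is a simplified illustration with made-up idea names; production designs also balance how often each pair of ideas appears together, which this sketch skips.

```python
import random

def make_sort_sets(ideas, set_size=4, rounds=2, seed=7):
    """Randomize the idea list and break it into sets, once per round,
    so every idea is shown exactly once per round in a fresh mix."""
    rng = random.Random(seed)
    all_sets = []
    for _ in range(rounds):
        shuffled = list(ideas)
        rng.shuffle(shuffled)
        all_sets.extend(shuffled[i:i + set_size]
                        for i in range(0, len(shuffled), set_size))
    return all_sets

ideas = ["idea_%02d" % i for i in range(12)]
sets = make_sort_sets(ideas)
# 12 ideas in sets of 4 gives 3 sets per round, 6 sets over 2 rounds.
```

Each set of four would then be shown as one screen, with either a single "most liked" question or a most/least pair depending on the design.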
A question I sometimes get from clients who are doing a list sort is “When do I use simple pick one tasks and when do I use Max Diff tasks?”
Continue March 22, 2011