As marketers know, pricing is a key component of the marketing mix for any product. It also serves as the most tactical lever a marketer has available in-market. The most effective applications of pricing strategy depend on understanding the price sensitivities and elasticities of consumers in the market.
A large portion of the research in this area is based on studies of in-market scanner panel data. Models are fit across all available sales data for a product category to compute elasticities for the prices observed during the period evaluated. Results from this type of research are robust and reliable.
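To make the mechanics concrete, here is a minimal sketch of the kind of elasticity estimate this research produces, assuming a hypothetical weekly scanner file with illustrative column names (price, units). The log-log form is the standard choice because the slope can be read directly as the elasticity rather than a raw unit change.

```python
# Minimal sketch: estimating a price elasticity from weekly scanner data.
# The column names "price" and "units" are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def estimate_elasticity(panel: pd.DataFrame) -> float:
    """Fit log(units) = a + b*log(price); the slope b is the price elasticity."""
    X = sm.add_constant(np.log(panel["price"]))
    y = np.log(panel["units"])
    return sm.OLS(y, X).fit().params["price"]

# Toy example: higher prices, lower volumes
panel = pd.DataFrame({"price": [2.99, 3.49, 2.49, 3.99, 2.79],
                      "units": [1200, 950, 1500, 800, 1320]})
print(estimate_elasticity(panel))   # roughly -1.4: a 1% price cut lifts volume ~1.4%
```

Within the range of prices actually observed, an estimate like this is about as dependable as modeling gets; the trouble starts when you ask it about prices it has never seen.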
The shortcomings of this method set in when marketers attempt to reach beyond conditions that have previously existed in the market. For example, Brand Z wants to take price to a level not yet seen in-market, or Brand X was never observed at Price A while Brand Y was at Price B. Compounding the problem, price is often not evaluated in isolation: packages can change the volume delivered per SKU, or a new creative package with altered messaging is introduced.
Either the MMM is wrong, or you should be spending your marketing budget on Felix the cat.
In this article you will learn:
How to trick a marketing mix model (MMM) into giving you great results for doing just about anything
Where mix models break down
Emerging solutions to address the weaknesses of marketing mix modeling
Marketing mix modeling has had a great run as an “it can do it all” solution to marketers’ questions on marketing allocation. Marketers can input their entire marketing plan into the model and, voilà, they get an answer on how to reallocate everything using a quantifiable, rational approach.
In 1990 it worked pretty well, when we had only TV, print, and pricing and promotion to worry about. Since then this annoying marketing channel called digital has created a pain in the neck for the MMM. (How do I measure ROI for my display, search, video, mobile, app, social, email, website, tablet, gaming, and Captcha in an MMM?) And that pain in the neck may soon become a migraine.
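For readers who have never looked under the hood, the core of a classic MMM is just a regression of sales on transformed media inputs. The sketch below is a bare-bones illustration using assumed channel names, made-up data, and a simple geometric adstock; real models layer on saturation curves, seasonality, price, promotion, and the long tail of digital channels that causes the headache above.

```python
# Bare-bones MMM sketch: regress weekly sales on adstocked media spend.
# Channel names, decay rate, and data are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def adstock(spend: pd.Series, decay: float) -> pd.Series:
    """Geometric adstock: part of each week's spend carries over to the next."""
    carried = np.zeros(len(spend))
    for t, x in enumerate(spend):
        carried[t] = x + (decay * carried[t - 1] if t > 0 else 0.0)
    return pd.Series(carried, index=spend.index)

# Made-up weekly data for three assumed channels
rng = np.random.default_rng(0)
weeks = 52
media = pd.DataFrame({"tv": rng.uniform(0, 100, weeks),
                      "print": rng.uniform(0, 50, weeks),
                      "display": rng.uniform(0, 30, weeks)})
sales = (1000 + 3.0 * adstock(media["tv"], 0.5) + 1.5 * media["print"]
         + 2.0 * media["display"] + rng.normal(0, 50, weeks))

# Regress sales on the transformed media inputs
X = sm.add_constant(media.assign(tv=adstock(media["tv"], 0.5)))
fit = sm.OLS(sales, X).fit()
print(fit.params)   # per-unit-spend contribution estimates by channel
```

The fitted coefficients are then translated into contributions and ROI by channel, which is what drives the reallocation answer the model hands back.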
Today InsightExpress released research that indicates great promise for tablets as a powerful advertising channel. The InsightExpress analysis found that campaigns running on tablet devices are extremely effective at delivering their message and motivating purchase, and either match or outperform established mobile channel norms.
The findings detailed below were drawn from InsightExpress’ Tablet InsightNorms™, a normative database of results across 43 campaigns comprising 83 ad executions, which shows the branding effectiveness of advertising placed on tablet devices.
Thanks in great part to its involvement in a 14-month research study conducted by VivaKi’s The Pool, an ongoing initiative to uncover the advertising solutions of the future, InsightExpress is able to offer one of the most robust portraits of tablet advertising effectiveness in the industry. That study revealed best practices and key findings for ads placed on tablet devices.
What would be the perfect way to measure advertising effectiveness? Ideally we would be able to obtain responses from an individual in this world as well as in a parallel universe where the only difference is whether or not they were exposed to the ad. We could then compare the responses knowing that everything else was equal.
However, travel to a parallel universe to measure ad effectiveness is not currently an option. The next best alternative would be to randomly assign people to test and control groups, making sure that everyone in one group was exposed to the ad and nobody in the other group was. While this is possible, it is often impractical because of cost, time, and feasibility constraints.
When we measure ad effectiveness we are usually not in a position to randomize and force exposure to respondents. So instead we capture an exposed group and a control group, since we can make sure that exposed people were exposed to a particular ad and control people were not. But since we did not randomize the exposure, there may be group differences. That is, the exposed group may have been exposed to the ad because of some difference from the control group, be it behavioral or demographic. How do we account for these group differences? Enter the Propensity Model.
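As a rough illustration of the idea, the sketch below (with hypothetical covariate and outcome names) fits a logistic regression to predict exposure from observed behavioral and demographic variables, then uses the resulting propensity scores to reweight the control group so that it resembles the exposed group before comparing outcomes. Inverse-probability weighting is just one propensity technique; matching respondents on the score is another common option.

```python
# Propensity-score sketch (inverse-probability weighting of the control group).
# Covariate names (age, site_visits, is_female) and the outcome (purchase_intent)
# are hypothetical placeholders for whatever the survey/behavioral data holds.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_weighted_lift(df: pd.DataFrame, covariates: list[str]) -> float:
    """Estimate the ad effect after adjusting for exposed/control group differences."""
    # 1. Model the probability of exposure from the observed covariates.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["exposed"])
    score = model.predict_proba(df[covariates])[:, 1]   # propensity score

    exposed = df["exposed"].to_numpy() == 1
    control = ~exposed

    # 2. Weight controls by score/(1-score) so their covariate mix mirrors the exposed group.
    weights = score[control] / (1 - score[control])
    control_mean = np.average(df.loc[control, "purchase_intent"], weights=weights)
    exposed_mean = df.loc[exposed, "purchase_intent"].mean()

    # 3. The remaining gap is the estimated lift attributable to exposure.
    return exposed_mean - control_mean

# Hypothetical usage, assuming a respondent-level DataFrame named survey_df:
# lift = propensity_weighted_lift(survey_df, ["age", "site_visits", "is_female"])
```

The comparison is only as good as the covariates: the approach assumes the variables that drive exposure are observed and included in the model, which is the key judgment call in any propensity analysis.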