Tag: experimental design
When I talk to clients about the state of measurement in the digital advertising industry, one of my favorite topics is the elephant in the room: cookies. These so-called browser cookies are the currency of digital advertising reach, yet they clearly do not represent individuals. However, this was not always the case. Not so long ago, a cookie had a different meaning than it does today.
In 1996 the topic of debate was understanding individual usage. Imagine a family huddled around their one desktop computer, with those weird dial-up modem sounds. People in the measurement community were sword fighting over who had the better algorithm for determining which family member was in front of the computer. At the time, a “cookie” could have represented 4.5 people or, essentially, a household’s online footprint.
The world has really changed since then. Now, the question of the day is around what I call “cookie amplification.”
February 11, 2013
(This post was also featured on Adotas.com.)
As I explained in an earlier InsightfulAnalytics blog post, what makes online ad effectiveness measurement work is the use of an experimental design. I’ve also mentioned in earlier posts that while experimental design is a fantastic approach and one we recommend, for a variety of reasons clients prefer to run quasi-experimental studies. One of the important aspects of putting together a good quasi-experimental design is creating a control cell that is as equivalent to the test cell as possible. Unfortunately, and if you’ve read some of my other posts you’ll recognize this as a trend, that’s just not how things work online.
When I first started doing online ad effectiveness research in 1997 there was no such thing as ad-server-delivered tags. Everything we did for sampling a campaign was hard-coded to a page, including the advertising. This made for an extremely easy design. Since there was no complex ad server to worry about, I could randomly redirect visitors to either the page with the test ad or the page with the control ad. It doesn’t get much better than that – pure random assignment of the respondent pool. However, with the advances in ad serving, the survey sampling code moved into the ad server, and thus began the era of the pop-up and the dreaded bonus inventory.
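The hard-coded setup described above amounts to nothing more than a coin flip per visitor. A minimal sketch of that kind of random assignment (the page paths and 10,000-visitor loop are hypothetical, purely for illustration):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TEST_PAGE = "/landing_test.html"        # hypothetical page with the live ad
CONTROL_PAGE = "/landing_control.html"  # hypothetical page with the control ad

def assign_visitor():
    """Randomly redirect an incoming visitor to the test or control page."""
    return TEST_PAGE if random.random() < 0.5 else CONTROL_PAGE

# Over many visitors the split converges to roughly 50/50,
# which is what makes the two cells comparable.
counts = {TEST_PAGE: 0, CONTROL_PAGE: 0}
for _ in range(10_000):
    counts[assign_visitor()] += 1
```

The appeal of this design is exactly its simplicity: no ad server, no tags, no cookies are needed to form the cells, because assignment happens at the moment of the page request.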
April 12, 2012
Often misunderstood, always controversial, the ubiquitous cookie is the main mechanism for tracking ad exposure in ad effectiveness studies. True to its pedigree, the cookie enjoys quite a bit of notoriety, frequently showing up in Wall Street Journal headlines. Some consider it to be the Achilles heel of online ad measurement because it’s so susceptible to deletion. The idea that an individual could be exposed to an ad, delete the cookie associated with the ad, and subsequently be sampled for a control cell seems to be a deal breaker to some buyers of ad effectiveness research.
I understand this perspective, but the issue is certainly not so cut and dried. However, before we get into more detail, let’s look at how the cookie is deployed on the average study. The cookies used in online ad effectiveness studies operate in one of two basic ways: storage or identification.
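The difference between the two modes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual implementation; the field names are made up:

```python
import json
import uuid

# Storage pattern: the exposure data lives in the cookie itself,
# so the browser carries the full record around.
storage_cookie = json.dumps({"campaign": "spring_launch", "exposures": 3})

# Identification pattern: the cookie holds only an anonymous ID,
# and the exposure data lives server-side, keyed by that ID.
visitor_id = str(uuid.uuid4())
identification_cookie = visitor_id
server_side_log = {visitor_id: {"campaign": "spring_launch", "exposures": 3}}

# Either way, deleting the cookie severs the link: a storage cookie
# takes its data with it, and an identification cookie orphans the
# server-side record.
```

The practical consequence is the same in both modes: once the cookie is gone, the person looks like they were never exposed.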
January 17, 2012
Having a basic grounding in the world of experimental design is a necessity in buying online ad effectiveness research today. This is true even if we’re talking about evaluating concepts other than brand messages (i.e. clicks, conversions, etc.). Yet, as I’ve mentioned in my previous post, employing experimental designs in online advertising effectiveness research is hard. Why? It’s the technology. Yes, the same technology that is supposed to make the world that much more measurable also conspires to make experimental designs impossible.
Here are the problems we run into:
1) Money. To run a true experimental design study online today you typically need to take up to 20% of your client’s ad campaign dollars and use them to run a placebo ad that serves as the control. That’s a big deal breaker, as few clients are willing to throw 20% of their ad dollars into a PSA ad for the sake of measurement.
2) Complexity. So assuming we secure the appropriate dollars for a campaign, we now need to implement the campaign in a way in which we can create our randomly assigned test and control cells. This used to be extremely complex, but the good folks at Google have recently released the DFA Experiments platform to manage these requirements. Unfortunately, very few people know how to use the tool. The reality is that, as easy as setting up the experiment has become, it’s still an added layer of complexity in getting a campaign live.
3) Compliance. Now even if we manage to get the money, even if we manage to convince the agency to set up the experiment, we have a compliance issue. Most of the biggest sites and pages on the net will not allow you to deploy ad server tags on their pages. So even if you have a fancy DFA experiment set up to run, if your plan includes the home page of Yahoo or Facebook, then you can’t include those sites in your experiment. And by not including those sites in the experiment, you’re creating a bias in the data. Most clients include major portals in their media buys, so randomly assigning test and control in the ad server doesn’t work.
4) Deletion. Well, I’ve worked my butt off and secured the funds, set up the experiment, and am now running only on sites that accept my ad server tag. Am I golden? Far from it. In fact, the worst problem of all is that I’m relying on cookies to assign people to test and control cells. With 30-50% cookie deletion in a 30 day period, I could have major misattribution between my test and control cells, i.e. I could assume someone didn’t see my ad when in fact they saw it 30 times, but just deleted their cookie. (More on this in a later post).
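The scale of the deletion problem is easy to see with a back-of-the-envelope simulation. A minimal sketch, assuming a hypothetical 40% deletion rate (the midpoint of the 30-50% range above) and a made-up population size:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

N = 100_000          # hypothetical number of individuals exposed to the ad
DELETION_RATE = 0.4  # assume 40%, within the 30-50% range over 30 days

# Everyone in this population actually saw the ad. Those who delete
# their cookie look unexposed and can be wrongly sampled into the
# control cell.
deleted = sum(1 for _ in range(N) if random.random() < DELETION_RATE)

# Share of genuinely exposed people who would appear unexposed.
misattributed_share = deleted / N
```

With deletion rates that high, a large fraction of the “control” cell can consist of people who were actually exposed, which dilutes any measured test-versus-control lift toward zero.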
August 16, 2011