$#!* Happens – A dirty story about ad testing

It was about 11:00 a.m. when we started up the mountain outside of San Pedro Sula in the northwest corner of Honduras. The humid air lay heavy and still in the valley below, causing the fields of sugar cane to shimmer in the hot sun.

We were videotaping b-roll for a few TV spots one of my fundraising clients wanted to test. Our task that day was the same as it had been every day that week: to capture images of the devastating poverty these people suffer.

The camera crew donned their battery belts, cables, and assorted gear and we followed the narrow dirt path toward the shacks above. As we ascended a steep rise and veered to the right, we came across a young boy toting an armload of dry firewood. One of our videographers wanted to shoot this and positioned himself in the middle of the path.

That’s when it happened. And to understand what happened, you must understand the term “wrap-and-throw.”

Many of the people my client helps are so poor they live in makeshift shacks, some of mud or wood, others little more than plastic or cardboard nailed to sticks. These places often have no sanitary facilities. So the residents have developed a practical way to deal with their waste: They wrap it in a small bag and throw it.

Thus, we were walking in a “wrap-and-throw” community. And while the videographer set himself to shoot the kid with the wood, one of our guides trotted ahead to ask the child’s permission. The boy agreed, and the guide came running back toward the cameraman.

A wrap-and-throw lay silently in the path, aged and ripe. A group of unsuspecting, sunblock-smeared gringos stood stupidly smiling three feet away, anticipating nothing but the beautiful picture they were about to record. Our guide’s foot came down hard at ground zero … and the principles of ballistics did the rest.

It gave new meaning to the term “$#!* happens.”

Three people were hit, the brave videographer getting the worst of it, sprayed heel to cheek with the brown, foul-smelling slime. Fortunately, I had been walking ahead of the group, upwind and out of range. But when I heard the ruckus and walked back, it was like a scene from a war movie: Pale, shocked faces. Cries of disbelief. People running in all directions.

The videographer stood stock still, looking down at his body in disbelief, mumbling, “I can’t believe that just happened. I can’t believe that just happened.” Others in the group tried to act concerned as they cautiously inspected their own bodies for damage.

I must admit, I would have been equally disgusted if I had been a casualty of this incident, but I would not have been surprised. In fact, that’s what I found most interesting about it – the utter surprise that this had happened.

Surprise? We had been walking in wrap-and-throw for days. The only surprising thing was that it hadn’t happened before. We knew where we were going and what we were doing. And we knew the risks. In my mind, this was proof that we were out there doing what we had to do to get the job done. That wrap-and-throw was just part of the process.

Which brings me to ad testing. (How’s that for a segue?)

I am forever baffled by the unrealistic attitude so many advertisers have about testing. These delicate souls don’t really want to “test,” they merely want to “confirm.” They expect to rack up victory after victory with little or no effort. Failures are dreaded experiences, not learning experiences. Fear is the reigning emotion, and avoidance is the primary technique.

The first thing I tell everyone who calls me wanting to start a testing program is to establish a basic “control” ad or direct mail package, then start methodically testing to improve results. Testing means just that. Testing. Trying things out. Seeing what works and what doesn’t. And by definition, testing means experiencing failures along with successes.

But again and again, otherwise brave business people, who have risked great peril to start businesses, launch products, and develop markets, suddenly quiver in horror at the idea of “failing” in a test. So, what often happens is that good test ideas are talked about but never acted upon. Or new ideas are mauled by committees till they are little more than tweaked versions of previous promotions – safe but ultimately unproductive.

Testing means getting out there – getting your hands dirty and getting the job done. And occasional failures are just part of the process. In fact, the best way to increase your success rate is to increase your failure rate! In other words, the more you test, the more you learn.

The more you learn, the better your results will be in the long run.

And while I’m on a rant about testing, allow me to specifically catalog and comment on some of the most egregious mistakes I’ve seen advertisers make:

Testing haphazardly or running sloppy tests. Testing is a mathematical process. You have to test all the time. You have to test carefully. Otherwise, the numbers just won’t mean anything. If you don’t have the skill or patience for number crunching and analysis, get someone else involved.

Assuming that tests are error-free. Even if you run what you believe are careful, well-conceived tests, never assume that there is no room for error. You should actively seek out mistakes on every level. Whether your test comes out good or bad (but especially if it’s bad), think through the whole process to track down errors.

For a mailing you might ask: Were the mailing list numbers accurate? Were the addresses good? Did all our pieces get mailed? Was the bar code on the reply form correct? Are phone operators and mail handlers carefully tracking every response? Have I made mathematical blunders anywhere? Where else could a mistake be made?

Drawing the wrong conclusions. Too often, people look at test data and jump to a conclusion. “That self-mailer bombed. Self-mailers don’t work,” or “We tested a Christmas appeal, and we lost money. Christmas is a bad time to mail.” This is usually the result of a poorly designed test. Ideally, you should test with the express purpose of measuring one variable. And you must test against a proven control. Otherwise, you may conclude that a particular variable affects the results when it doesn’t.
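
If you keep your mailing list in a spreadsheet or a bit of code, a single-variable split looks roughly like the Python sketch below. This is only an illustration of the principle, not a procedure from the article: the donor names, the 50/50 split, and the “letter length” variable are all made-up assumptions.

    # Hypothetical sketch: split a mailing list into a control cell and one
    # test cell so that only a single variable (here, letter length) differs.
    import random

    def split_test_cells(mailing_list, seed=42):
        """Randomly assign each name to the control cell or the test cell."""
        rng = random.Random(seed)        # fixed seed keeps the split repeatable
        shuffled = mailing_list[:]
        rng.shuffle(shuffled)
        midpoint = len(shuffled) // 2
        return {
            "control (proven 4-page letter)": shuffled[:midpoint],
            "test (new 2-page letter)": shuffled[midpoint:],   # the one change
        }

    cells = split_test_cells([f"donor_{i}" for i in range(10_000)])
    for name, cell in cells.items():
        print(name, len(cell))

Everything else, from the list to the offer to the mail date, stays identical across the two cells; that is what lets you pin any difference in response on the letter length alone.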

Making decisions based on insignificant results. Every test must be statistically valid. That means you must reach enough of your audience to assure that you have accurately sampled that audience, and that you get enough responses to accurately calculate your results. When you fall below certain minimums, your results are worthless. Testing is expensive, but you can’t cut corners.
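
To make “statistically valid” concrete, here is a minimal Python sketch of one common way to check whether a test cell’s lift over the control is more than chance, using a two-proportion z-test. The mailing sizes and response counts are invented numbers, and the z-test is a standard statistical tool chosen for illustration, not a method prescribed above.

    # Did 1.8% on the test really beat 1.5% on the control, or is it noise?
    from math import erf, sqrt

    def two_proportion_z(resp_a, size_a, resp_b, size_b):
        """Return the z statistic and two-sided p-value for two response rates."""
        p_a, p_b = resp_a / size_a, resp_b / size_b
        pooled = (resp_a + resp_b) / (size_a + size_b)
        se = sqrt(pooled * (1 - pooled) * (1 / size_a + 1 / size_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
        return z, p_value

    # Test: 10,000 pieces, 180 responses. Control: 10,000 pieces, 150 responses.
    z, p = two_proportion_z(180, 10_000, 150, 10_000)
    print(f"z = {z:.2f}, p = {p:.3f}")   # roughly z = 1.67, p = 0.096

A p-value near 0.10 means that even a 20% lift across two 10,000-piece cells could still be luck, which is exactly why small or borderline tests should not drive big rollout decisions.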

Overlooking an important result. Numbers are data. You only have information once you analyze your numbers and draw conclusions. That’s why it is important to do more than just list your results; you should play around with them. See what might be hiding in all those digits. Is there a trend? Are your results seasonal? How do your results compare with industry averages?
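
As one small, hypothetical example of “playing around” with the numbers, the Python sketch below rolls invented monthly results into response rates so a seasonal pattern has a chance to show itself. None of the figures come from a real campaign.

    # Made-up monthly results: (month, pieces mailed, responses received)
    results = [
        ("Jan", 25_000, 310),
        ("Apr", 25_000, 270),
        ("Sep", 25_000, 340),
        ("Dec", 25_000, 480),
    ]

    # Convert raw counts into response rates, highest first
    rates = sorted(
        ((responses / mailed, month) for month, mailed, responses in results),
        reverse=True,
    )
    for rate, month in rates:
        print(f"{month}: {rate:.2%}")    # e.g. "Dec: 1.92%"

In this invented data, December nearly doubles April, and that is the kind of pattern that never jumps out of a raw list of response counts.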

Refusing to repeat a test to confirm results. An accurate test is repeatable. In other words, if your results are accurate, you’ll be able to test again and get similar results. If you don’t, there’s something wrong somewhere.

Filing away results instead of using them. Why test if you just calculate a response rate and throw your report into a filing cabinet? Those numbers are expensive to get, so use them. Analyze every test quantitatively and qualitatively. Show the numbers and write your thoughts and conclusions. Then share the test data with everyone involved. After every test you should know something useful, like “this 2-page letter works just as well as this 4-page letter” or “this offer increased inquiries from our TV spot by 35%.”

Failing to keep a running record of conclusions. Over time, as you see the results of test after test, you will start to see patterns emerge from the numbers. Organize and list this information as a guideline for future testing. Building on your hard-won knowledge will dramatically increase your success rate.

In all cases, testing is about acquiring knowledge, because you don’t “know” anything until you test. Knowledge is the gold for which you should be panning. And testing is not something you do once and forget. It is not something you do only when you have a little extra in your budget. Testing should be – must be – part of your routine, everyday business.

You should forever be in what I call the Testing Loop:

1. Run a test.

2. Analyze the results.

3. Act based on the results.

4. Repeat.

Remember, testing is not about confirming your savvy or proving your biases. All those little so-called mistakes are not mistakes at all. They are nuggets of precious knowledge. They are your most valuable asset.

Indeed, $#!* happens. But that odor is not failure; it’s the sweet smell of future success.
