In today’s world, we are constantly bombarded with e-mail. Some of it is important, relevant, and saved for future use, but the majority is deleted — the junk folder exists out of necessity. How many times have you clicked opt-out or unsubscribe, only to get more e-mails from the same company weeks later? Conversely, how often do you sign up for e-mail? What makes you decide to subscribe or unsubscribe? Is it a catchy subject line? A badly designed, confusing layout? Or is the content simply not interesting to you? Marketers must find the answers to these questions so that we can tailor our messages to our consumers and make them relevant and, more importantly, read.
A recent ClickZ article discusses the need for testing in e-mail marketing. Companies have huge databases of e-mail addresses and often send out mass mailings. Companies are getting better at segmenting their subscribers, but that’s not enough. According to the article, a survey of 368 U.S. marketing executives found that only 31% conduct even the simplest form of e-mail testing, the A/B test; the figures for more sophisticated testing methods were even lower. The article suggests the reason is that executives are guessing at what their consumers want instead of actually finding out.
The article acknowledges that testing adds production time and labor, and therefore costs somewhat more money. However, optimizing a company’s e-mail is well worth it. To that end, it offers the following suggestions on how to test your e-mail:
1. Test only one variable at a time. If you test more than one, it becomes impossible to determine which variable is causing which response.
2. Make sure e-mails are sent on the same day at the same time. Otherwise, differences in day and time become confounding variables that can account for the responses. The article does note, however, that a company can choose to test day and time as a variable in its own right.
3. Tests must be measurable and statistically significant. Many firms, afraid of sending a possible failure to too many subscribers, test on only 100 recipients or some other tiny fraction. If a firm has 1 million addresses and tests on 100, the results are insignificant. The article suggests testing on approximately 5% of the total list.
4. Maintain a control group. Throughout all of the testing, keep one group that is never tested on. This unaltered group provides a consistent baseline against which to contrast results over time.
5. Get help. The article urges companies to work with consulting firms, agencies, service providers, and the like, because they have more experience and can close the knowledge gap.
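The significance point in step 3 can be made concrete with a standard two-proportion z-test on open rates. This is not from the article — the function name and the open/send counts below are illustrative — but it shows why a 100-person test on a million-address list tells you almost nothing, while a larger sample can detect the same lift:

```python
import math

def ab_test_significant(opens_a, sent_a, opens_b, sent_b, z_crit=1.96):
    """Two-proportion z-test: is the difference in open rates between
    variants A and B statistically significant at roughly the 95% level?"""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the null hypothesis that A and B perform the same
    p = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p * (1 - p) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    return abs(z) > z_crit

# Hypothetical numbers: a 22% vs. 20.4% open rate measured on 25,000
# recipients per variant (5% of a 1M list, split in half) is significant...
print(ab_test_significant(5500, 25000, 5100, 25000))  # True

# ...but roughly the same lift measured on 50 recipients per variant is not.
print(ab_test_significant(22, 50, 20, 50))  # False
```

In other words, the article’s 5%-of-list guideline is about giving the test enough statistical power to separate a real difference from noise.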
Ultimately, the article concludes with a few remarks about marketing and how fortunate we are to be in a profession where failure and success go hand in hand. There is also a warning: “Better to test a small bit of your list and experience some small failure, than launch a mailing to your entire list and experience failure in a massive way.” Testing, as the article says, should become part of the production process.
All this said, I would like to pose a question to anyone willing to respond. How do you feel about e-mail marketing, and do you think it can continue to be successful in the future, or is it slowly becoming obsolete?