Data-driven this and that are all the rage these days. In the world of advocacy, it can be a great help — if you know what you’re doing.
Especially when it comes to digital public affairs, we can often find ourselves awash in available data.
Personally, I love data to help me make decisions. Sure, I use my gut feelings a lot, too, just like anyone else, but when I have access to good data and it makes sense, I want to use it as an aid.
Often, data conflicts with anecdotal evidence. It's not uncommon to find that what you thought was true based on casual observation and second-hand information doesn't hold up when confronted with quantitative evidence. Consider website visits, for example. Just about everyone in the advocacy space measures and respects web traffic data, and a falling number could be cause for alarm. But a survey might show that the vast majority of folks are getting your information successfully via email or some other channel, which undercuts the significance of web visits taken alone. Similarly, we all remember the big battles we won or lost, but focusing on those can obscure the broader picture that comes from looking at all advocacy efforts.
Bad data can be deadly. For most of us, bad data won't actually kill, unless you're involved in something like health care or safety. But erroneous data can lead to very bad decision-making. Most of my clients and businesses over the years have relied on websites to drive success to some degree. Frequently we have constructed custom analytical solutions, tailored to specific needs, to help measure the success or failure of different approaches. Occasionally, we discover that the data we based decisions on was faulty because of a bug in our logic or implementation. If the reports you use to make smart decisions are built on incomplete or inaccurate data, you may make the wrong call.
Good data can save or make money. At one point a number of years ago, it seemed like one of my businesses was doing very well with Google AdWords as a marketing tactic. And, in fact, it was, to a degree. The AdWords program generated a lot of leads. But it turned out that those leads ultimately converted to sales at a lower rate than the ones that came through organic search. At the time, our systems were not very effective at tracking the difference, because the lag between web lead and final sale was significant and not integrated with the AdWords metrics system. When we did discover the discrepancy, we adjusted our AdWords spending, a change that saved us money and, more importantly, freed up sales resources to focus on developing and converting leads with greater potential.
Human data contributions need to be consistent. While I work with a lot of data that can be generated automatically by computer processes, there are still plenty of occasions where we need to track data that comes from human interaction. The systems in place to analyze this data are only as useful as the consistency with which the data is entered into whatever tracking system exists. This means the data needs to be uniform in structure and captured for every appropriate activity. The human element introduces a measure of uncertainty and inconsistency into the data set, but frequently these data points end up being the most valuable, precisely because they are harder to obtain and track. The key to success is determining the narrowest but still useful set of information to collect, and then rigorously encouraging and enforcing compliance.
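If your tracking system is backed by software, the "uniform in structure, captured for every activity" rule can be enforced at the point of entry rather than cleaned up later. Here is a minimal Python sketch; the field names, date format, and allowed activity types are illustrative assumptions, not taken from any particular advocacy system.

```python
from datetime import datetime

# Hypothetical narrow-but-useful field set for an advocacy activity log.
REQUIRED_FIELDS = {"date", "contact", "activity_type", "outcome"}
ALLOWED_ACTIVITY_TYPES = {"meeting", "call", "email", "event"}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is consistent."""
    problems = []

    # Every appropriate activity must carry the same fields.
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")

    # Dates entered by humans drift in format; pin one down.
    if "date" in record:
        try:
            datetime.strptime(record["date"], "%Y-%m-%d")
        except (TypeError, ValueError):
            problems.append("date must be YYYY-MM-DD")

    # Free-text categories fragment quickly; restrict to a fixed vocabulary.
    if record.get("activity_type") not in ALLOWED_ACTIVITY_TYPES:
        problems.append(f"activity_type must be one of {sorted(ALLOWED_ACTIVITY_TYPES)}")

    return problems

record = {"date": "2024-03-15", "contact": "Rep. Smith's office",
          "activity_type": "meeting", "outcome": "agreed to follow up"}
print(validate_record(record))  # []
```

Rejecting or flagging a record at entry time, with a specific reason, is what makes "enforcing compliance" practical: people fix a field on the spot far more reliably than an analyst can reconstruct it months later.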
Data can be distracting. While I love data, it is important not to lose sight of instinct, experience, and gut feel. Data often tells only part of the story, and understanding its limitations is just as important as knowing its benefits. Moreover, data can become a real distraction for an organization that lives and dies by it alone. Human relationships matter. Efficiency can actually be diminished by excessive data collection and analysis. Balancing data with other decision-making tools fosters the greatest success.