The Push and Pull of Analytics

Dec 11, 2015 by Sameer al-Sakran

Most companies talk a big game about the importance of being data-driven. For companies starting out on their analytics journey, it’s sometimes confusing how this is supposed to work, or how to create a sane plan for rolling out a data-driven culture and the tools to support it. We’ll look at what this means in practice, pulling from our experience with providing analytical support for a few dozen companies. These companies span a range of sizes and industries, and include the Expa portfolio, other companies Metabase has worked closely with, and previous companies we’ve helped.

There are two main ways to get data into the hands of those making decisions — Pushing important data to them, or letting them choose when and what data to Pull on their own. Both are important and should be enabled to make the most of the data an organization has collected.

## Pulling Data

There are a couple of situations where it makes sense for a person or team to be able to pull their own data. For decisions that are localized to a team, it makes the most sense to allow the person or team making that decision to Pull information about the results of that decision. To make this a little more concrete, let’s use the following as an example:

Context: New accounts on a SaaS product are currently being on-boarded through a series of emails describing product features. However, the churn rate for new accounts is ~15% in their first month.

The decision: To reduce this churn, the customer success team decides to proactively on-board 10% of new customers, rather than waiting to reach out when they see an account has gone inactive a week later.

The criteria for success: A reduction in churn rate after a month.

So, with this example, what does the company (through its engineering team, IT department, embedded analysts or any other means) need to provide to allow this decision to be made and evaluated in a data-driven fashion?

Firstly, there needs to be a way for the customer success team to pull up a list of new customers for a given time period. While this sounds obvious, unless this requirement was anticipated during product development, you’d need to either ask an analyst for this list at the start of every week or add it to the set of reports available in your product’s administration tools. Now our heroic team can pull up something like the below whenever they need to:

Example Customers Table

Next, you need to pick the 10% sample that you’ll run the high-touch on-boarding experiment on. Resist the temptation to pick the first 10, or otherwise bias the sample. You should make it possible (and ideally friction free) to get a sampling that isn’t weighted towards the location, size, or other key attribute of your new accounts. If available, some help from an analyst or statistically minded person would be valuable here in generating this sampling process and populating a table for the customer success team to use. At this point, the table that can be pulled up will look more like:

Enriched Customers Table
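The sampling step described above can be sketched in a few lines. The `customers` records and their fields here are hypothetical stand-ins for whatever your new-accounts table actually looks like; the point is simply that a random sample avoids the bias of “just take the first 10.”

```python
import random

# Hypothetical list of this week's new accounts; in practice this would
# come from your database or an analyst-prepared table.
customers = [
    {"id": i, "industry": ind, "timezone": tz}
    for i, (ind, tz) in enumerate([
        ("finance", "EST"), ("education", "PST"), ("retail", "CST"),
    ] * 10)
]

# A simple random sample gives every account an equal chance of selection,
# regardless of when it signed up or what attributes it has.
random.seed(42)  # fixed seed so the experiment group is reproducible
sample_size = max(1, len(customers) // 10)  # ~10% of new accounts
experiment_group = random.sample(customers, sample_size)
```

If key attributes like location or size are badly skewed, an analyst might go further and stratify the sample, but a plain random sample is the right starting point.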

At this point, you have a list of customers that can be on-boarded. Let’s fast-forward a few months into the future, and examine what we’ll need to provide to allow the customer success team to understand how this decision played out.

First off, let’s see how well the overall program did. For this, you’ll need to calculate the churn in the sample that was aggressively on-boarded. To do this, you’ll need to define churn itself. We’ll assume the easiest scenario, where a user account has a distinct “unsubscribed” status. To make use of it, the team will need a way to tally up the number of accounts by status and on-boarding type.

Results of the experiment: Status by On-boarding
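The tally described above is just a grouped count turned into a rate. A minimal sketch in plain Python, with invented records and assumed field names:

```python
from collections import Counter

# Hypothetical account records at the end of the experiment period.
accounts = [
    {"onboarding": "high-touch", "status": "active"},
    {"onboarding": "high-touch", "status": "unsubscribed"},
    {"onboarding": "standard", "status": "active"},
    {"onboarding": "standard", "status": "unsubscribed"},
    {"onboarding": "standard", "status": "unsubscribed"},
]

# Count accounts in each (on-boarding type, status) bucket...
tally = Counter((a["onboarding"], a["status"]) for a in accounts)

# ...then turn the counts into a churn rate per on-boarding type.
def churn_rate(onboarding):
    churned = tally[(onboarding, "unsubscribed")]
    total = sum(n for (ob, _), n in tally.items() if ob == onboarding)
    return churned / total

print(churn_rate("high-touch"))
print(churn_rate("standard"))
```

In practice this would be a `GROUP BY` in your reporting tool rather than hand-rolled code, but the shape of the computation is the same.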

In an ideal world, you’d come back to find the program a resounding success: churn would have gone to 0%, and the entire management team could get back to more important business, like picking out the color of the hardwood floors on the yachts they’ll be buying now that the company has been saved.

However, in reality, most of the time the result of this experiment would be something like “well … it sorta worked.” Typically, on-boarding will have worked really well in some cases, but not others (e.g., in the example below you’ll see that aggressive on-boarding worked really well for accounts from the finance industry, but not at all for ones in education). Taking a look below, you’ll note that 30% of the education accounts churned after being aggressively on-boarded, while only 1% of finance accounts churned.

Cancellations by industry

What happens next is what determines whether your company is going to muddle along in business-as-usual, or bulldoze its way through the market and leave your competitors as roadkill. In a typical company this is the end of the process, with perhaps a few meetings about whether it is cost effective to provide aggressive on-boarding in response to a 15% drop in churn. In better resourced companies that realize that net churn is the make-or-break metric of a SaaS business, an analyst or engineer would dig a bit deeper. In our fully wired up Data-Driven Organization, the customer success team itself would be able to dig further and use their understanding of their accounts alongside the tools they are provided to drill into the core underlying dynamic.

Response to on-boarding: Status by Timezone

By drilling into the accounts, our plucky team member would notice that the core problem is that 90% of the education accounts were on the west coast, while 75% of the finance accounts were in NYC (and in the same timezone as the company). By setting up the on-boarding sessions too early in the morning (6 a.m. – 9 a.m. PST), they were forcing the poor school administrators to sit through an on-boarding session before their coffee had kicked in. They quickly saw the wisdom in sleeping in, and running all west coast on-boarding in the late afternoon. Churn overall plummeted, the decision (and follow-up revision) saved the day, and they were all promoted.
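A drill-down like this is the same grouped computation run over a different attribute. A toy sketch with invented data (field names and figures are hypothetical) shows how regrouping the very same rows can shift the story from “education churns” to “early-morning timezones churn”:

```python
from collections import defaultdict

# Hypothetical churn records from the experiment group.
accounts = [
    {"industry": "education", "timezone": "PST", "churned": True},
    {"industry": "education", "timezone": "PST", "churned": True},
    {"industry": "education", "timezone": "EST", "churned": False},
    {"industry": "finance", "timezone": "EST", "churned": False},
    {"industry": "finance", "timezone": "EST", "churned": False},
    {"industry": "finance", "timezone": "PST", "churned": True},
]

# Churn rate grouped by any attribute: the same computation that
# surfaced the industry gap also surfaces the timezone gap.
def churn_by(attr):
    groups = defaultdict(lambda: [0, 0])  # attr value -> [churned, total]
    for a in accounts:
        groups[a[attr]][0] += a["churned"]
        groups[a[attr]][1] += 1
    return {k: churned / total for k, (churned, total) in groups.items()}

print(churn_by("industry"))  # education looks like the problem...
print(churn_by("timezone"))  # ...but timezone is the cleaner split
```

In this toy data, grouping by industry shows a gap, but grouping by timezone separates the churned and retained accounts perfectly, which is the kind of underlying dynamic a drill-down exists to expose.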

Let’s take a pause here and recap what a good affordance for Pulling information in a company looks like. First off, the data that the customer success team has access to was massaged into a format that was meaningful to them. Practically, this means that the underlying structure of information in their database was designed with an analytical use in mind (and traded off with transactional requirements), or was rearranged and made available to them. While it’s tempting to try to get them to understand the underlying schema, or (for the truly masochistic) learn SQL, the more things look like the sort of information they’d collect in a spreadsheet, the more productive everyone is.

Second, the point of allowing for Pulls is to create a culture of bottom-up, data-driven decision making. If the people closest to a decision are able to ask and answer their own questions on their own operational cadence, this leads to a faster-responding organization. Not only is this faster than having the questions punted to a dedicated analyst pool, but the self-serve nature allows for data to permeate all the little nooks and crannies of a company, and its interactions with customers, partners, and vendors.

## Pushing Data

Having espoused the virtues of bottom-up decision making and data access, let’s flip back to the other end of the org chart and talk about how to Push data through a company.

The goal of Pushing data is to make sure that everyone in a team, or company-wide, is aware of a set of core numbers or data points. While we use the term “push” here, it doesn’t necessarily refer to the medium the information is delivered in, but rather to the fact that the company centrally decides on this set of core metrics, instead of a decentralized system where everyone looks up whatever numbers they want.

First off, let’s point out what should be obvious: the fewer distinct points of data you try to push on people, the more likely they are to digest them. Less is more here, as in many other situations in analytics. The less confusion there is about what’s important, the more everyone will have a shared awareness of what is actually important.

In many companies this centrally determined set of information is called Key Performance Indicators (KPIs), Core Metrics, or something else that sounds official and important. For many companies, these indicate the outcomes the business should be managed against. The more closely tied these are to desired business outcomes the more sanely the overall system behaves. These numbers should ideally be formulated such that they can be managed, meaning that everyone who gets the numbers can take action (or kick off action) to change these numbers. Additionally, there are often a set of metrics (internal or external) that represent the environment the company is operating in. Despite not being things the company can control, they do guide overall behavior and are prime candidates for Pushes. An example would be a financial firm getting a quick summary of overnight activity in other timezones to prepare for their day each morning.

One key here is to resist the temptation to include metrics that make you feel good in place of metrics that let you know how you are doing. While it’s easy to get caught up in the mythology of successful companies that never hit air pockets en route to greatness, the faster you realize some facet of the business isn’t working, the faster you’ll be able to correct it.

As a concrete example, let’s go back to our imaginary SaaS business. Since the overall success of the company is measured by the total number of accounts, it’s tempting to include that as a core metric that gets Pushed. However, while it’s nice to see big numbers that almost always increase, this makes it hard to fully understand how fast the user base is growing or shrinking. A better number would be the change in the number of accounts, or the percentage growth during the previous time period. This makes it crystal clear when growth jumps by 100% or drops by 50%, numbers that would be washed out in the larger total to all but the most eagle-eyed readers.
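The difference between pushing totals and pushing deltas is easy to see with numbers. A toy sketch (the weekly figures are invented for illustration):

```python
# Weekly total account counts (invented numbers for illustration).
totals = [1000, 1040, 1120, 1130]

# The totals look like smooth, steady growth. The deltas and growth
# rates tell a much sharper story: growth doubled, then collapsed.
deltas = [b - a for a, b in zip(totals, totals[1:])]
growth_pct = [round(100 * d / prev, 1) for d, prev in zip(deltas, totals)]

print(deltas)      # [40, 80, 10]
print(growth_pct)  # [4.0, 7.7, 0.9]
```

A reader scanning the totals would see an ever-larger number; a reader scanning the deltas sees the 100% jump and the subsequent drop immediately.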

## When to Push Data

It’s important to match the frequency of Pushes with the cadence of the decision making around a given metric. If you are managing a number through actions that take a week to plan and execute, getting a dopamine-inducing ping about it every hour is more likely to cause thrash and distraction than productive decision making. It’s also useful to take into account the natural period of the number in question.

So, if we’re talking about churn of accounts, where the effects of any actions by the customer success team will take days or weeks to be absorbed by the user base, and where there is a natural periodicity (people will mainly cancel or fail to renew at the end of a 30-day trial period), it makes the most sense to look at this on a weekly or even monthly basis. Sending out an hourly report on churned accounts will just create thrash, and if you do need to take immediate action on terminated accounts, you should think instead of making it a part of the termination workflow rather than a metric that gets Pushed.

## How to Push Data

There are three main ways to push information through an organization. The most old school, but still useful due to the gravity it imparts, is getting in front of people and reading numbers off a deck. All-hands, sales kickoffs, analyst calls, etc., all are places where a small set of information can be delivered in a low bandwidth but heavy fashion. While this is not a scalable way to wire up an organization, it should be remembered that this is a viable and appropriate option for critical audiences or metrics.

While it’s not strictly a “push,” dashboards are another key place to collect metrics. They are often a good thing to pull up when getting up to speed on what happened in the previous day, week, or other time period. While they’re often abused, and require some amount of upkeep, they provide a key place to get the current set of numbers.

Example of a central dashboard

Finally, a staple of Pushing is literally that: pushing information to people’s inboxes, Slack channels, Yammer, etc. An email everyone can check first thing in the morning or on Mondays is super useful for getting the most important numbers out and in front of people. This is also a place where you should be very judicious in what you include. It’s super easy to create an email that no one opens, which defeats the entire purpose of the exercise.

Example of a nightly email

Here more than elsewhere it’s vital to remember the golden law of Pushing data: only push data that changes behavior in some way. Attention is a scarce commodity, especially in Push channels, and you should focus on giving people starting points that prompt those with the ability to alter a number to take action.

## Putting it All Together

Now that we’ve discussed them separately, let’s talk about how the two means of getting data into teammates’ hands fit together.

Being sparing about what is Pushed is much easier if you have a truly open and useful means for teammates to Pull the rest of the information. You don’t need to overload their email with every possible sub-metric of a KPI if the tooling you provide makes it easy to drill into and dissect what arrives in their inbox. Similarly, for projects with a natural start and stop, making it easy for a team to take the useful information they’ve Pulled together and Push it to themselves for the duration, without requiring outside help, lets them keep tabs on how their decisions are faring.

## And Now for a Shameless Plug

Metabase dashboards and our new Pulse feature provide a great, free, open source way to Push data to everyone in your organization. Our easy-to-use non-SQL query interface allows the non-technical folks on your team to Pull information whenever they need it. Download it and get it up and running in five minutes for everyone on your team!