Great question. So: I will say there is a set of basics for data science you should know, and some basics for analyses. For data science, I think you have to start by learning Bayesian mathematics. Bayesian mathematics, at its core, is just the idea that given measurements and information, we can improve our hypotheses. It's saying that every time I observe something, I can add information to help me make a better decision. It's a mathematical way to do that. The other major thing about Bayesian mathematics is that it leans heavily on time series, so in Bayesian mathematics, time is often a control, not a feature. What that means is we assume that everything is happening in time. This is SUPER-different from a lot of other practices, where time is a feature: you look at tenure, or how long someone has been a member, instead of realizing that in time they are changing. And in a startup, and in today's age, everything is changing in time at such a rapid pace that if you don't take time into account, you are often blinded by the results.
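The "every observation improves the hypothesis" idea can be made concrete with a minimal sketch. This is not the speaker's method, just an illustrative conjugate beta-binomial update (a standard textbook example) applied to a hypothetical conversion rate; the numbers are made up:

```python
# Hypothetical beta-binomial update: each observed conversion (or
# non-conversion) refines our belief about the true conversion rate.
def update(alpha, beta, successes, failures):
    """Conjugate Beta-prior update for a Bernoulli process:
    posterior Beta(alpha + successes, beta + failures)."""
    return alpha + successes, beta + failures

# Start with a weak prior: Beta(1, 1) is uniform over [0, 1].
alpha, beta = 1, 1

# Observe 30 conversions out of 100 trials.
alpha, beta = update(alpha, beta, successes=30, failures=70)

# Posterior mean is our updated estimate of the conversion rate.
posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))  # prints 0.304
```

Each new batch of observations is just another call to `update`, which is the Bayesian point: the estimate is never final, it keeps moving as data arrives in time.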
The second set of basics is analyses. Analysis is about learning variance: how do you reason about uncertainty in the data? When we talk about the analyses we deliver, it's very easy to look at an aggregation and see a very big impact. As you dive deeper and deeper, remove variability and time, remove inconsistency, and look for causality, the picture often changes. Most analysts are trained to look at the top level and ignore the rest. And that's really dangerous.
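The "aggregate looks great, segments tell another story" trap is Simpson's paradox. Here is a small sketch using the classic illustrative numbers (relabeled as a hypothetical A/B conversion test, not the speaker's data), where the winner flips once you segment:

```python
# Hypothetical A/B conversion counts, (conversions, total) per segment.
# The numbers follow the classic Simpson's-paradox example: A wins in
# EVERY segment, yet B wins in the aggregate.
data = {
    "mobile":  {"A": (81, 87),   "B": (234, 270)},
    "desktop": {"A": (192, 263), "B": (55, 80)},
}

def rate(conversions, total):
    return conversions / total

# Per-segment view: A beats B in both segments.
for segment, groups in data.items():
    for group, (c, n) in groups.items():
        print(segment, group, round(rate(c, n), 2))

# Aggregate view: pooling the segments reverses the ranking.
for group in ("A", "B"):
    c = sum(data[s][group][0] for s in data)
    n = sum(data[s][group][1] for s in data)
    print("overall", group, round(rate(c, n), 2))
```

Mobile: A 0.93 vs B 0.87; desktop: A 0.73 vs B 0.69; overall: A 0.78 vs B 0.83. An analyst who only reads the overall row picks B, which is exactly the top-level blindness the speaker warns about.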
The final thing that I would say is: if you cannot explain your analysis, and how you got to the conclusion you made, to a fifth grader, then it's not worth it. So often you see a lot of AI tools that say "men convert better than women," and you're like, "Cool, how did you get that? Why? Am I going to change my whole business? How did you come up with that conclusion?" And they say, "Trust us, it's AI." That's BAD. You can't work with that. You can't use it! It's never right. To give you a really good example of the dichotomy of our approach versus black-box approaches: I was working with a company, and we were trying out a big AI product by one of the Fortune 500 companies. And the vendor said, "Our AI solves your churn problem!" And we're like, "What do you mean, solves our churn problem? How did you figure that out? It's a complicated problem." We go into a meeting, and they show us the results of what the AI decided. The answer it came up with was: "when someone presses the cancellation button, that's the biggest indicator that they are leaving." This is a real story. The computer can't reason: that's the dilemma with black-box approaches. So learning to explain and analyze is key. And only use approaches that can be explained.