Analysis

Analysis is the final module of our framework and at the same time the beginning of major iterative work to transform data into actions that bring us closer to our desired goals.

Let me start with a small digression: you've probably heard of the "10,000-hour rule" for achieving mastery. There's another perspective on it: the path to mastery isn't about the number of hours but about the number of iterations, where each new iteration is an improved approach based on past experience.

What if someone repeats the same actions for ten thousand hours? We'd hardly be talking about mastery evolution, but rather about perfect automation. Learning is a process of correcting mistakes. It's directly connected to the ability to receive and integrate feedback to update knowledge and improve future predictions. Accordingly, our task is to improve our knowledge and do it regularly, preferably quickly.

Entrepreneurs and marketers are like modern agnostic philosophers who understand that the world isn't 100% knowable and that each piece of knowledge is relatively true until we acquire even more perfect knowledge.

In this module, we'll talk about:

  • how data transforms into hypotheses, knowledge and actions
  • when to test hypotheses and when to make bets
  • which metrics to measure
  • how to prioritize hypotheses
  • how to organize regular work with hypotheses

How Data Transforms Into Hypotheses, Knowledge and Actions

To see the hidden logic of hypothesis generation, I suggest a diagram that shows the sequence of processes and their interconnection.

The chain of reasoning

Data

It all starts with data. Data is a huge array of neutral information about some event. Data is just data; only a specific person's interpretation turns it into observations, then assumptions and hypotheses.

Observations

Observations are data that a specific person notices, for example: Andrew says that clicks from posts increased by 20% since April 1st.

Assumption

An assumption is a personal interpretation of one's own observations, for example: Andrew says that engagement from posts increased by 50% since April 1st thanks to the new storytelling format.

Hypothesis

A hypothesis is an assumption about the present that we want to confirm or refute. A hypothesis is primarily aimed at understanding the world and accumulating knowledge.

The structure of a hypothesis is "If we do X, then we expect Y, because Z." For example: "If we switch to a new post format, then we'll get a 30% increase in registrations, because stories engage and create desire to repeat."

Testing hypotheses isn't limited to running experiments; additional research counts too. The main thing is that the chosen testing method should let you gain knowledge in the most economical way. Often at this stage you can limit yourself to adopting best practices; in such cases the hypothesis-testing stage can be skipped, because you're already using proven knowledge.

Knowledge

After formulating a hypothesis, we act to test it and gain new knowledge from our observations. For example, from the observation "Registration numbers increased by only 10%, although engagement increased by 50%" we conclude that "The new format engages people but doesn't necessarily convert them to registration."

After gaining knowledge, two situations naturally arise:

  • Our beliefs are strengthened if the hypothesis was confirmed
  • Ideas begin to form on how the current situation can be changed

Beliefs

Beliefs are stable opinions based on experience (at best) or on other people's opinions. The worst thing you can do in marketing is form beliefs immediately, without testing hypotheses. Beliefs prevent a person from making neutral observations, since we tend to confirm our own point of view. Beware of this trap.

Bets

Here we return to our original situation and think about how to change it. To make a decision thoughtfully, we move to formulating it as a bet: how much are we willing to spend to get the desired result?

A bet is a decision that assumes we know enough to invest resources and accept risks.

Example: "I'm betting on developing a new pricing model that will increase purchases of the more expensive plan by 40%, and I'm investing 5 months of development by one middle specialist for this."

Good bets consider not only the ratio of resources and results (expected utility) but also risks. Mainly, the risk of spending resources but not getting the desired result. To minimize such risk, we need accurate knowledge that helps us predict what will happen after our actions.
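The trade-off described above — expected utility against the risk of losing invested resources — can be sketched as a tiny calculation. The function and the numbers below are purely illustrative, not from the text: a minimal probability-weighted model of a bet.

```python
def bet_expected_value(p_success: float, payoff: float, cost: float) -> float:
    """Expected utility of a bet: probability-weighted payoff minus the
    resources invested. A simplified model; real bets can have partial
    outcomes and non-linear utility."""
    return p_success * payoff - cost

# Illustrative numbers: a bet costing 50 units of effort
# with a 40% chance of a 200-unit payoff.
ev = bet_expected_value(0.4, 200, 50)
print(ev)  # 30.0
```

The point of the sketch: more accurate knowledge raises your estimate of `p_success`, which is exactly why accumulating knowledge before betting reduces risk.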

Why Am I Telling You This?

  • The chain of reasoning helps you understand your own logic and choose when to test hypotheses and accumulate knowledge, and when to make bets
  • It makes your process of gathering knowledge, and the way collective decisions are made, clear to others
  • It lets you explore others' thinking processes through active questioning

When to Test Hypotheses and When to Make Bets?

Make bets when you have strong vision based on accurate knowledge. A bet is an entrepreneurial and bold way of making decisions under uncertainty. If you don't feel confident enough and don't have access to resources to make a bet, accumulate knowledge through hypothesis testing, and over time you'll gain clarity on what to bet on.

Which Metrics to Measure?

Measure what helps validate hypotheses or confirm bet success, and also general business indicators. Often smart people fall into paralysis when they don't have enough data and don't dare to develop the product without it. There's another extreme — moving only from personal beliefs, without testing hypotheses and measurable bets.

Simple logic for working with metrics:

  1. Determine which metrics you need to measure regularly to understand healthy business dynamics
  2. Formulate a hypothesis
  3. Check if you have means to calculate metrics that the hypothesis targets, and this calculation won't take you 2 weeks
  4. Assign someone responsible for calculation, record point A
  5. Design the experiment: how long it will run and on what sample
  6. Record data at the end of the experiment

Here I probably sound like a complete amateur to analysts. The thing is, a proper A/B testing culture is much more complex than it seems at first glance. More formally, A/B testing is a process in which two or more variants are compared to determine the most effective one. Effectiveness is judged through the probabilities of false positive and false negative decisions, and the required accuracy of the experiment determines how far you want to reduce those error probabilities. The world of probabilities is large and not as obvious as it seems.
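To make the error-probability point concrete, here is a textbook two-proportion z-test, the kind of calculation an analyst would run on conversion data. This is a minimal stdlib-only sketch with made-up numbers, not a substitute for a properly designed experiment.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates.
    Returns (z, p_value); a small p-value means the difference is
    unlikely to be random noise."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative data: variant B converts 5.5% vs. A's 5.0%, 10k users each.
z, p = two_proportion_z(500, 10_000, 550, 10_000)
print(round(z, 2), round(p, 3))
```

With these numbers the p-value lands above the conventional 0.05 threshold: a visible lift can still be statistically inconclusive, which is exactly why sample size and error probabilities matter.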

It's better to consult an analyst to run an experiment properly. If you don't have such a resource, that shouldn't stop you from taking an experimental approach. It's not scary to look like an amateur; it's scary not to try and not to make mistakes along the way. Our task is to improve our knowledge through iterations, regularly and preferably quickly.

Which Hypotheses to Test?

Test the component parts of the buyer-product exchange process that we covered in previous modules:

Segment

Example: If we offer our product to professional marketers instead of novice freelancers, then we'll see a 25% increase in sales conversion due to choosing a more solvent segment.

Value

Test your key offer and its components: conflict, benefits, belief in success, overcoming old habits.

Example: If we offer strategy with a quiz using smart AI assistant instead of regular service registration, then we'll get a more engaged audience and see a 10% increase in conversion to paying customers.

Questions

Test the questions in your quiz or in your sales scripts.

Example: If we ask about purchase urgency in the quiz, then we can qualify the segment into subsegments by engagement criteria, thus prioritizing leads for the sales department and increasing response speed by 20%.

Activation

Example: If we offer people a 30-minute video review and a curated comparison table of developer reliability as the first step instead of a PDF catalog of new buildings, then we'll increase conversion to calls by 50%.

Traffic

Example: If we spend $15k on paying 7 micro-influencers on Instagram, then we'll get 2x more website visits than when buying ads from a major influencer.

Collect hypothesis results in a common database so that over time you accumulate knowledge and hypothesis quality grows. A refuted hypothesis is also good — it perfects your knowledge.

How to Prioritize Hypotheses?

There are ready-made frameworks for this, for example: ICE (Impact, Confidence, Ease).

Essence: Quick evaluation of each hypothesis by three criteria:

  • Impact — expected effect on metrics
  • Confidence — how confident you are in the result
  • Ease — how easy and quick the hypothesis is to implement

The disadvantage of ICE is that it is a subjective evaluation where the team ultimately gets the sum of participants' opinions. If the team can't evaluate rationally and systematically, the framework won't be useful.

There's RICE (Reach, Impact, Confidence, Effort), where the Reach parameter is added — how many people will actually see/feel the effect (for example, 100 clients per month).

However, there are also downsides: parameter evaluations require data and experience. If evaluations are made "by eye", the result will be distorted.

There's also PIE (Potential, Importance, Ease); the principle is the same as in the previous two frameworks: you evaluate hypotheses by the parameters you consider important.
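Mechanically, all three frameworks reduce to scoring each hypothesis on a few 1-10 ratings and sorting. A minimal ICE sketch, with hypothesis names and ratings invented for illustration:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """ICE score: the product of three 1-10 ratings; higher = test sooner.
    Some teams sum instead of multiplying; the product punishes a weak
    spot on any single criterion harder."""
    return impact * confidence * ease

# Illustrative hypotheses and ratings, not real data.
hypotheses = {
    "new pricing model": ice_score(9, 4, 3),
    "quiz on landing page": ice_score(6, 7, 9),
    "micro-influencer test": ice_score(5, 5, 8),
}
for name, score in sorted(hypotheses.items(), key=lambda kv: -kv[1]):
    print(f"{score:4d}  {name}")
```

Note how the mechanics expose the framework's weakness: the output is only as good as the subjective ratings fed in.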

Prioritization frameworks are appealing because they give a sense of order to team work, but without a strategy the output is still just a sorted list of hypotheses. Before implementing a prioritization framework, you need at least an approximate understanding of your key constraint: what to focus on to unlock growth. Without that understanding, the simplest and most obvious hypotheses start pulling priority toward themselves.

To form such understanding, I suggest dedicating time to visually developing your own growth model (as I wrote about this in the traffic module), collecting metrics and together with the team diagnosing which area needs correction first.

Working with Segments

Segmentation is the foundation. If hypotheses are running out, don't give results, and metrics are stagnant — return to the foundation, because the foundation is value for a narrow segment of people. The seed of growth is a satisfied customer. Without happy customers, marketing becomes a heavy battlefield where weather and landscape play against you and you'll need huge resources to turn the situation around.

At Marquiz, we increasingly see how quizzes become the default tool for hypothesis testing. Major companies often start checking their promotions with quizzes instead of developing landing pages. The convenience lies primarily in speed and the ability to segment the audience through questions.

How to Organize Regular Work with Hypotheses

Hold regular meetings with the key roles on your team; an optimal frequency at the start is once a week. First, check the state of your regular metrics. Second, analyze last week's hypotheses: record indicators and observations and share ideas for future hypotheses. Third, take hypotheses for the next week into work. Over time, hypotheses will get more complex and require more development time; then it makes sense to switch to two-week cycles.

This concludes our lead generation guide in our own interpretation, and this is just the beginning. The framework will likely stay the same, but the details and depth of the material will gradually evolve. Thank you for choosing to invest your time ❤

If you found our guide helpful, leave a review and follow us on social media.


Author — Cojocaru Maxim, designer and Marquiz co-founder

Editor — Olga Argysheva