Start making more money with your online experiments
Learn approaches that work, pitfalls you'll encounter, and smart solutions to get more value out of your online experiments
This course will teach you how to get more wins, bigger wins and more insights from your experimentation efforts.
In just 8 sessions, you'll learn how to:
- Get more wins through a long-term impact optimization process.
- Calculate where to run experiments, and with what impact.
- Formulate hypotheses based on data and psychology.
- Prioritize test hypotheses and experiments.
- Design, start, and stop experiments at the right time.
- Draw proper insights from the experiments you run.
- Build a knowledge library of validated user insights.
- Scale your digital experimentation efforts.
Here’s why most of your tests are flat (or losing)
With today’s tools, anyone can run A/B tests – but it’s not the tool that determines the outcome of the test. The most powerful tool is still your brain.
The outcomes of your experiments are determined mostly by what you test and which treatments you choose. Most people who run tests but don't get uplifts are simply doing it wrong: they're doing spaghetti testing (“maybe this will work!”), building tests on gut feelings, or tinkering with the small stuff (button colors, font sizes) instead of solving real problems.
This course is designed to teach you optimal testing strategies, so you can start winning
The “secret sauce” to getting more wins – and bigger wins – is a process for identifying the biggest opportunities and coming up with optimal solutions. It's also about statistics: knowing when you have a false negative on your hands, and when to stop a test in the first place.
Sometimes it’s the post-test analysis that will give you the insights you need to turn a failing test into a win. You need to know what to look for when you crunch the data.
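To make the false-negative point concrete, here is a rough sketch (not course material) of the standard two-proportion sample-size calculation; the `sample_size_per_variant` helper and the 5% baseline conversion rate are illustrative assumptions:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base, p_variant, alpha=0.05, power=0.80):
    """Normal-approximation sample size per variant for a two-proportion z-test.

    Illustrative helper, not from the course. An underpowered test (fewer
    visitors than this) will frequently miss real uplifts: a false negative.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    delta = p_variant - p_base
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Detecting a 10% relative uplift on a 5% baseline conversion rate
n = sample_size_per_variant(0.05, 0.055)
print(n)  # roughly 31,000 visitors per variant
```

Numbers like these are why low-traffic sites struggle to run conclusive experiments, and why stopping a test early usually just trades a possible win for a false negative.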
This course is right for you if…
- You’re responsible (even partially) for the conversion rate of your digital channels.
- You’re part of a team that runs – or should be running – online experiments.
- Your company or client has at least 1,000 conversions coming in per month.
This course is probably not for you if…
- You're an advanced statistics guru who knows everything about frequentist and Bayesian statistics, including the fact that false negatives are a far bigger problem than false positives.
- You're part of an organization with the highest testing maturity level: evidence-based optimization is in the DNA of how the company operates.
- You're working at a company (or for clients) with fewer than 1,000 conversions coming in per month (the first four lessons are still valuable, but the last four won't apply to your situation).
Skills you should have before taking this course
- Basic digital analytics know-how
- Basic user research knowledge
- Some experience with running online experiments (or you’re going to join a team soon that is doing this)
About your instructor, Ton Wesseling
Ton is one of the most respected practitioners in the conversion optimization space. He’s a sought-after digital optimization expert (over 20 years of experience), and recognized worldwide as an influential thinker, writer, and public speaker on conversion optimization and A/B testing.
He founded Online Dialogue – one of the global thought leaders in evidence-based growth. With his team, he helps and trains companies throughout the world to be effective at data-informed growth.
Your full course curriculum
A/B testing mastery
FACT & ACT process
In this class, Ton shares the real value of running online experiments.
ROAR & testing bandwidth calculations
This class covers where you can run experiments.
Customer behavior study
From data to insights to hypotheses, this class is a how-to on running customer-base studies.
Hypothesis & experiment prioritization
Ton shares best practices for determining how to prioritize hypotheses and experiments.
Experiment design & stopping rules
How should you set up, design, build, and measure your experiments? Ton answers these questions (including stopping rules).
Analyzing & presenting findings
Ton explains the best ways to analyze and present experiment findings.
Building up your knowledge database
What kind of insights should you be collecting? How can you best scale your optimization efforts?
Comparing different tests
In this class, Ton compares A/B testing vs. multivariate testing vs. bandits vs. AI, and explains when each should be used.
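As a rough illustration of the bandit approach mentioned above (not course material), here is a minimal epsilon-greedy sketch; the simulated conversion rates and the `epsilon_greedy` helper are assumptions for the example:

```python
import random

def epsilon_greedy(rates, epsilon=0.1, visitors=100_000, seed=42):
    """Minimal epsilon-greedy bandit over simulated variants.

    rates: the true (hidden) conversion rate per variant, used only to
    simulate visitor behavior. Unlike a fixed-split A/B test, the bandit
    gradually shifts traffic toward the variant that performs best so far.
    """
    rng = random.Random(seed)
    counts = [0] * len(rates)  # visitors sent to each variant
    wins = [0] * len(rates)    # conversions per variant
    for _ in range(visitors):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(len(rates))  # explore: random variant
        else:
            # exploit: variant with the best observed conversion rate
            arm = max(range(len(rates)), key=lambda i: wins[i] / counts[i])
        counts[arm] += 1
        wins[arm] += rng.random() < rates[arm]
    return counts

counts = epsilon_greedy([0.05, 0.06])
print(counts)  # traffic allocation per variant after 100,000 visitors
```

The trade-off this illustrates: a bandit minimizes the cost of showing a losing variant, while a classic A/B test keeps the split fixed to reach a clean statistical verdict, which is part of why the choice between them depends on your goal.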
Read more: http://archive.is/YBpyt