Conversion rate optimization (CRO) best practices

Learn the basics of CRO to come up with a CRO strategy for your website.

Conversion rate optimization (CRO) is the process of improving a website so that more visitors take a desired action, like making a purchase or signing up for a newsletter. Creating a CRO strategy can help you understand why people don't complete desired actions and provides you with structure for implementing website changes that drive visitors to take those actions.

This guide discusses what to look at on your site and how to think about approaching a CRO strategy.

Gather data to make informed decisions

Teams that experiment with uninformed ideas tend to find winning experiments roughly 10% of the time.

Teams that use data to inform their experiments win roughly 20-30% of the time. If this were an experiment, we would want to be skeptical of a 100-200% increase in conversion rate and check it for issues. But that’s what you get when you base your experimentation on areas of opportunity: a 100-200% increase in the probability of finding winning ideas.
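As a quick sanity check on that claim, the relative lift from a 10% win rate to a 20-30% win rate works out as follows (a simple arithmetic sketch; the rates come from the figures above):

```python
# Relative increase in win rate when moving from uninformed to
# data-informed experimentation (figures from the text above).
def relative_lift(new_rate: float, base_rate: float) -> float:
    """Return the relative increase as a fraction (1.0 == +100%)."""
    return (new_rate - base_rate) / base_rate

uninformed_win_rate = 0.10
low = relative_lift(0.20, uninformed_win_rate)   # low end of the range
high = relative_lift(0.30, uninformed_win_rate)  # high end of the range
print(f"Data-informed teams win {low:+.0%} to {high:+.0%} more often")
```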

Identify conversion blockers (areas of opportunity)

Conversion blockers (i.e., friction points, areas of confusion, distractions) are areas of opportunity to focus on with your CRO strategy. Look for anything that hinders visitors from quickly getting where they want to go, which results in fewer conversions:

  • Lack of transparency (e.g., hard to find return policy)
  • Distractions — visitors’ attention is pulled away from the primary conversion path
  • Unclear labeling — visitors don’t understand the value you provide, or how you effectively solve their problem
  • Superficial copy — clever, funny, or platitude-filled copy doesn't convey value or recognition of the visitor's problem
  • Not enough info to make a decision — visitors are uncertain whether the product or solution will work for their use case

Identify pages with high traffic and non-trivial drop-off

Webpages with high traffic and high bounce or exit rates are the best targets for your first experiments.

  • These criteria suggest that conversion blockers exist on the webpage
  • High traffic equates to getting results faster — shortening experiment run times, letting you learn and iterate faster

An added benefit of starting on pages with high visibility is that you can show results to your key stakeholders quickly. Use those first experiments to show the power of experimentation and get stakeholders excited about and invested in the process.
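One simple way to shortlist candidate pages is to rank them by the number of visits lost to exits, so pages with both high traffic and high drop-off rise to the top. A minimal sketch (the page names and numbers are illustrative, not from this guide):

```python
# Rank pages by visits lost to exits (traffic x exit rate).
# All page names and numbers below are illustrative.
pages = [
    {"page": "/pricing", "visits": 50_000, "exit_rate": 0.62},
    {"page": "/demo", "visits": 80_000, "exit_rate": 0.55},
    {"page": "/blog/post-1", "visits": 120_000, "exit_rate": 0.30},
    {"page": "/about", "visits": 5_000, "exit_rate": 0.70},
]

# Score each page by how many visits end in an exit.
for p in pages:
    p["lost_visits"] = round(p["visits"] * p["exit_rate"])

# Highest-loss pages first: high traffic AND high drop-off lead the list.
candidates = sorted(pages, key=lambda p: p["lost_visits"], reverse=True)
for p in candidates[:3]:
    print(f'{p["page"]}: ~{p["lost_visits"]:,} exiting visits')
```

A page like a blog post may lose many visits in absolute terms yet be a weaker candidate than a pricing page, so treat a score like this as a starting shortlist, not a verdict.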

Review data your company already collected

Most companies have more data than each team realizes. The data is just not socialized well, so you’ll have to ask around for it. Look into marketing surveys, chat logs, customer feedback from customer service or sales, analytics data, and heatmaps.

Focus on searching for problems, not necessarily how to solve them:

  • Quantitative data (i.e., numerical data) helps find where problems exist
  • Qualitative data (i.e., voice of the customer) adds context to explain the problem

Build problem statements and hypotheses

A problem statement describes a problem you think you've identified. Use data to support the statement and keep it concise. For example, "Analytics shows the CTA click rate is low, and the exit rate is high on the demo page."

The next step can be to do targeted research (e.g., a survey) or analyze the data to look for patterns (e.g., a traffic source produces higher exit rates). Update the problem statement as needed. For example, "The CTA click rate is low, and the exit rate is high on the demo page because most visitors only scroll 20% of the page and miss the product value propositions."

Once you have one or more firm problem statements, develop an if-then-because hypothesis. For example, "If we change the top headline to highlight how much time we save our customers, then we will see increased CTA click rates because our prospects will recognize that we understand their problem (limited bandwidth)."

Brainstorm ideas

With one or more hypotheses to reference, brainstorm ideas that solve the problem(s). Consider including cross-functional team members (e.g., engineering, sales, customer service) in your brainstorming sessions — with diverse backgrounds, you can often generate a greater diversity of ideas. Limit each session to 5-8 people.

  • Bring a list of problem statements and hypotheses
  • Do not criticize ideas — the point is to encourage ideation
  • Encourage building on ideas — try an exercise of sharing objectively bad ideas and ask what can be done to improve them
  • Come up with multiple ideas per hypothesis — even if traffic volume only allows for one experiment at a time

Start with simple variations

Start with prominent copy (e.g., headlines, sub-headlines, section headings), which often provides a high return on investment. Strive to use the language of your customers or prospects, not internal language, to increase the probability of visitors understanding the messaging. Your goal is to be direct with maximum clarity. Consider these themes:

  • Value propositions — highlight all the ways your products provide value
  • Reassurance — express why you're a good choice and/or highlight social proof
  • Incentivize — motivate prospects, create urgency, or provide additional value to get them over a friction point
  • Differentiation — showcase how you differ from competitors and what makes you uniquely suited for prospects
  • Simplification — shorten phrasing, hide less critical copy to let key messaging stand out, rephrase for easier consumption

Create a roadmap

An experimentation roadmap is a forward-looking overview of your experiment plans, often covering the next month or quarter to align with monthly or quarterly goals. A roadmap helps you organize and share your optimization plans.

Common tools to make a roadmap

  • Trello, JIRA, or similar project management tools with kanban views
  • Excel or Google Sheets files that double as a prioritization tracker and roadmap
  • Gantt charts noting when processes associated with each experiment take place

Roadmaps should be flexible

Review your roadmap weekly to see if you need to make any adjustments based on new data, changing priorities, or other factors. Make adjustments deliberately. For example, if the results of an experiment indicate great opportunity in a previously unidentified area, adjust your roadmap to act on it.

Execute experiments effectively

Aim to go from idea generation to a live experiment in about 5 business days. There are a few considerations to keep in mind as you plan and create your experiments to set yourself up for success.

Pick the right optimization type for your planned experiment

Traditional test

  • Runs unaltered until statistical significance is reached
  • Intention: find a singular winner and update the base site
  • Example use case: you're introducing new functionality (e.g., hiding a navbar option)

AI-optimized

  • Leverages AI to continuously show the variations that yield the best results; you can add additional variations into the mix at any time
  • Intention: maximize conversions
  • Example use case: you're not sure which audiences will like which variation idea

Manual personalization

  • Uses rules to lock in which audience sees which variation; you can add new variations for different audiences
  • Intention: personalize a specific audience's experience
  • Example use case: you're showing a localized phone number to a specific region (e.g., Canada)

Define metrics for a successful and unsuccessful conclusion

All experiments should include primary success metrics — in other words, define when an experiment should be considered a success and when it should be considered unsuccessful.

  • For a traditional test (A/B-style test), the experiment concludes when statistical significance is reached and a winner is declared
  • For AI-optimized experiments, use the Strength metric — >70% is a success and <30% is an unsuccessful experiment
  • For manual personalization, you'll have to decide what your success metrics are to determine when the experiment has concluded (e.g., Variation A has a 10% higher conversion rate than the Base (no change) variation)
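For a traditional A/B-style test, "statistical significance" is commonly checked with a two-proportion z-test on the base and variation conversion rates. A minimal stdlib-only sketch (the sample counts and the 0.05 alpha are illustrative assumptions, not values prescribed by this guide):

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value comparing conversion rates of A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative counts: Base converted 400/10,000; Variation 480/10,000.
p_value = two_proportion_z_test(400, 10_000, 480, 10_000)
print(f"p-value: {p_value:.4f}")  # declare a winner if below your alpha (e.g., 0.05)
```

Most experimentation platforms run this kind of check for you; the sketch is only meant to show what "reaching significance" means mechanically.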

QA the variations in your experiment

Make sure your variations look and function as you'd expect across desktop, mobile, and tablet resolutions. Perform QA at least once before a variation goes live and again after it goes live.

Share your results with stakeholders

Share updates before an experiment launches and after it reaches its defined conclusion. Always share updates with leaders and key stakeholders, but you can also notify your entire organization when you post new write-ups. If sharing by email, consider aggregating and summarizing on a monthly basis to limit the number of emails sent.

Provide an option for feedback. Anonymous feedback often yields better comments. Encourage folks to share their thoughts on the results, the next steps of your roadmap, and any ideas they might have. This approach can help with ideation and get folks invested in the process, which builds excitement.

What to share before an experiment launches

  • The hypothesis and what led you to its creation
  • Screenshots of the treatment(s) and the base page content
  • What metrics will be measured and why
  • (If A/B style test) estimate of expected duration
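For an A/B-style test, the expected duration can be estimated from a standard sample-size approximation before launch. A sketch using the normal-approximation formula for comparing two proportions (the baseline rate, detectable lift, and traffic figures are illustrative assumptions):

```python
import math
from statistics import NormalDist

def required_sample_per_variation(base_rate: float, min_lift: float,
                                  alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per variation for a two-sided test.

    min_lift is the absolute difference in conversion rate you want to detect.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_avg = base_rate + min_lift / 2
    n = 2 * ((z_alpha + z_beta) ** 2) * p_avg * (1 - p_avg) / (min_lift ** 2)
    return math.ceil(n)

# Illustrative: 4% baseline, detect a 1-point absolute lift, 2,000 visitors/day.
n = required_sample_per_variation(0.04, 0.01)
days = math.ceil(2 * n / 2_000)  # two variations split the daily traffic
print(f"~{n:,} visitors per variation, roughly {days} days")
```

This is a planning estimate, not a stopping rule — the experiment still runs until significance is actually reached.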

What to share after an experiment reaches its defined conclusion

  • The hypothesis and what led you to its creation
  • Screenshots of the treatment(s) and the base page content
  • Metrics measured and why
  • The results and business impact
  • Learnings from analysis
  • What your next steps will be and why (including if it was a dead end and there are no direct next steps)
  • Kudos to teammates involved, including those who contributed to the idea, designed the treatment(s), built the experiment, analyzed the results, etc. 

Do not shy away from unsuccessful experiments. What you learn can, in some cases, be even more valuable than a successful experiment. Folks often have strong feelings about hypotheses and react strongly when data disproves them. Evangelizing these learnings encourages reaction and participation from the organization, and helps to foster a "test and learn" mindset.