Introduction
Madelyn: Thank you so much for joining us for our Conversion Rate Optimization webinar.
My name is Madelyn Newman, and I’m a Product Marketing Manager here at CallRail. I’ve been here for about two years; I actually started out as a Customer Success Manager, talking to a lot of awesome marketers who are using CallRail today, and now I’m on our marketing team.
Joining me today is my colleague, David Schroeder.
David: Hi, everyone. Thanks so much for joining us.
Madelyn: David runs all of our conversion rate optimization programs here at CallRail. So he’s going to be doing a lot of the talking today, but we’re really excited you guys are all here.
David: Great, thank you so much. My name is David Schroeder. A little bit about me: I came right out of college and went straight into graduate school at NYU, where I got a Master of Fine Arts degree, finishing in 2008. That eventually led to me working in the film and video production world. Eventually, I got a full-time position at an international disaster relief organization as a video producer.
I got a lot of experience in video production as well as content and content strategy. It was around that time that I really started to get interested in the world of analytics and conversion rate optimization. I really started to wonder, you know, what kind of video or what kind of content piece really drives to action?
That led me to pursue a Master of Science degree in integrated marketing at Northwestern University, which I finished about a year ago. I’ve been with CallRail since November as the Conversion Optimization Manager. So let’s get going. A couple of things: conversion rate optimization is one of my favorite topics, obviously, and that is because it combines a lot of different things.
It combines data, analytics, statistical modeling, design, messaging, persuasion marketing, customer journeys, and it brings all of that art and science to this beautiful intersection right there in this one discipline. So a lot of people think that conversion rate optimization is running A/B tests on different button colors.
We’re all familiar with examples of someone who, say, changes their button from red to green and sees a 231% increase. I would encourage you to disabuse yourself of that notion. So we’re going to cover a number of things today.
Today’s agenda
- What is Conversion Rate Optimization?
- How do I implement a culture of experimentation in my company or agency?
- How does CallRail bring experimentation to non-digital channels?
We’re going to cover, what is it and what is conversion optimization not? What does a program of optimization look like in your organization? What are some of the tools out there? Now, we’re going to go into a couple of special topics, things like branding, SEO, personalization, AI and the future.
Then we’re going to get into some really specific examples of how you can be doing this kind of testing in your campaigns or your organization, how you can test for offline experiments, and how you can use CallRail in those experiments. So let’s get in here a little bit more. I really want you to start to embed this phrase, and that is, “a culture of experimentation” when you think about conversion optimization.
And that’s another way of saying that optimizing your customers’ experience for your bottom line is an ongoing process of testing the effectiveness of all of the elements that make up someone’s experience, whether it’s images, copy, email headlines, page layout, timing of the message delivery, audience segmentation, even products and pricing models.
For example, if you think that your principal value proposition on your product or service is speed and ease of use, and you run a test on different value propositions, and find that your customer base responds really strongly to messaging about price, well, that could inform you as to how you will continue to develop products and services knowing that your customer base is very interested in pricing.
Now, in a culture of experimentation, as a marketer you are running experiments on everything that you can, all of those elements that I mentioned already. And you want to run experiments so that you can measure and test and refine what is working, you want to discover hidden value.
For example, if you have an email nurture campaign set up with an existing email list, those are two assets that you have. If you run an A/B test on different messaging in that nurture and discover that one version is much more effective than you ever expected, you’ve revealed a hidden value that your company already had and was already sitting on.
Then the third, of course, is to evolve your business model, and that’s something that we just discussed. One natural outcome of all of this experimentation is that campaign performance improves, and typically your campaign costs will decrease. So generally, as you step back and think about this in a broad picture, it follows that the more experiments you run, the more uplift in your conversion rate you’re going to discover, and the greater impact you’re going to have on your ROI.
Master optimizers often evaluate their programs on metrics like volume, velocity, and win rate. You want to be running experiments in as many different places as you can. If it’s in campaigns, it’s copy and image, it’s audience segmentation, it’s timing, it is how you are addressing remarketing if we’re talking about a website.
A lot of the examples I’m going to be giving have to do with websites specifically, but we are going to be talking about these broadly, to say that this is applicable for mobile apps and offline channels, which of course we’re getting to. But to give you an idea of scale, and where this goes to, really large and mature programs and large organizations are going to be running hundreds or even thousands of experiments on their website every week.
Of course, we need to mention that the number of experiments you can run is limited by the complexity of your site, but also by the volume of traffic. Two good examples of things I want to point you to: check out the podcast Louder Than Words by John Bonini, and the episode with Lindsay Pettingill of Airbnb.
You can also check out the Harvard Business Review article by Wyatt Jenkins, A/B Testing and the Benefits of an Experimentation Culture. These are both great resources to check out if you’d like to dig deeper into this. But at this point, we’d like to take a little bit of a break so that we can take an audience poll, and then we’re going to move forward talking about how you can build a culture of experimentation in your organization.
We want to know how many of you guys are already currently using optimization as part of your core strategy, if you’re kind of already running experiments, if you’re new to experimentation, you’ve never tried it before, so let’s go ahead and see.
Only about 15% of you are experts, you’re running some ongoing experimentation as a part of your strategy, but there’s definitely still a lot to be learned I would say. So, we’re glad you guys are with us.
That’s great and good to know. Thanks everyone for pitching in there. So, as you step back and ask, “Okay, how am I going to develop a culture of experimentation in my organization?”, there are some specific stages to work through: identifying problems, developing ideas, prioritizing tests, running experiments, and then finally reporting.
As we’re talking about identifying problems, that’s the first place you’re going to start. You may already have a good idea about what is and is not working on your site or in your campaigns. But if you don’t, Google Analytics, that’s a great place to start.
You want to begin tracking conversions on pages or site wide. You want to examine your user flow. You want to conduct a bounce rate analysis. And you really want to start digging into your segmentation performance. This is really important, because understanding, is my conversion rate for this specific page being affected strongly by one segment of traffic over another?
Is it paid traffic versus organic? Is it new versus returning? And how can I customize what’s on that page for a unique audience? And if there is a really underperforming audience, where did they come from, and should they be going somewhere else? Do I need to create a different route for them to flow in? So, for example, say you’ve got a webinar signup page, and you’re comparing the number of users who land on the page with how many actually completed a signup.
This is a good way to start identifying leaks. Now, on some pages, of course, the conversion rate is going to be really small; we’re used to conversion rates of 1% or 2% in some places. But if 60% of participants on a page click “sign up for the webinar,” and then of those people only 1% complete an actual signup, that is an indication to you that something is going wrong with that content.
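To make that leak-hunting concrete, here’s a minimal sketch of the kind of step-by-step drop-off calculation you’d run on funnel counts pulled from Google Analytics. The numbers here are entirely made up for illustration:

```python
# Hypothetical counts for a webinar signup funnel
funnel = [
    ("landed on the page", 5_000),
    ("clicked 'Sign up for the webinar'", 3_000),
    ("completed the signup form", 50),
]

# Print each step as a share of the step before it; a sudden
# collapse between two steps is where the leak lives.
prev_count = funnel[0][1]
for step, count in funnel:
    print(f"{step}: {count:>5} ({count / prev_count:.0%} of previous step)")
    prev_count = count
```

Here 60% of visitors click through, but under 2% of clickers finish the form, so the form itself (not the call-to-action) is the step worth testing first.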
Now, there’s a number of other tools there for you to look at. Heat maps and click maps, these are tools that show you which elements on a page users interact with. You can look at scroll maps to see what percent of visitors scroll down to what depth on your page. Session recordings, if you’d like to actually watch users interacting with your site.
Pop up surveys, to glean more data. And then finally, usability studies, and this sort of falls out of the range of data. This is qualitative data, and it’s really valuable, but we’re going to come back around to this.
So once you’ve got a whole bunch of these options… you know, hold on, let me back up, there are a couple of other points I want to bring up here, other techniques. This really gets into the psychology of persuasion marketing, and it has to do with things like value proposition design, creating personas, incentivizing, and price anchoring.
These are a lot of different ways for you to generate these ideas. But once you’ve got all of these ideas, now you really need to stand back and you need to prioritize what exactly are you going to run first? And this is probably one of the most crucial stages of the process, because experimentation is limited by the volume of traffic that you receive on your site.
You can really only run one experiment at a time in each location on your site, and in some cases you can only run a single experiment on your site at a time. So deciding what you’re going to be running first and next is crucially important in terms of when you are going to start seeing uplift.
When you’re considering how to prioritize, there’s three factors that you really want to look at. There is impact. If the hypothesis is correct, how large of a potential uplift could I see on my KPIs? And is this one of my principal user flow pages? Is this my homepage?
Is this a pricing page? Or is this a low traffic page? Is this something like an About Us page or a Careers page? Another factor to consider is the difficulty in creating and running this experiment. So what is the time that it’ll take to run? What is the time that it will take to produce a variant? Because it may be that you just need to switch out a short line of copy.
Like that can be done in 5 to 10 minutes. But it may be that you need a complete redesign on the page, and you’re going to have to pull in the design team, and you’re going to have to pull in the development team, and it could take a couple weeks. So that’s really important to consider. The third element to consider is what I would call strategic fit.
How does this align with not only my marketing goals, but also the goals of the entire organization? And where does that fit in that priority? So, a couple of helpful hints from me: I find it really helpful to do this on a kanban board. It helps me to prioritize different ideas and hypotheses by location.
I’ll have the homepage and a list of experiments I want to run there, and then there’s the pricing page and a list of experiments there, and then I know my paid traffic landing pages. So that’s been very helpful. Another thing to discuss here is experiment design.
We can’t go too deep into this because it goes beyond the scope of this webinar. But it’s important to plan your study so that you’re gathering the right type of data, you’re gathering a sufficient sample size, and you have sufficient statistical power.
You need to be sure that you aren’t unintentionally biasing your experiment in one way or another, which is known as confounding. And this makes for a really great transition into actually running experiments and testing. All of this, this process so far, is what I would call a cyclical process. It’s ongoing, it continues to happen year round, and you’re always looking for improvements, you’re always observing variations in traffic flow, you’re always looking at the results of previous experiments, and you’re trying to dig in and figure out, “How can I continue to iterate and continue to improve?”
That’s less the case if you’re working specifically in campaign management, then I think optimization takes more of a specific stage. But, in general, if you’re building a conversion optimization program, that part is an ongoing process. And then we get to what I would consider more of a conveyor belt, which is the testing and reporting.
Here you or a team of people is going to be grabbing experiments off of that prioritization board, and getting them out the door to see how long they’re going to run. And this is a really important thing that we need to bring up. And I would say, proceed with caution as you begin to look at your test results and your experiments and begin to interpret them, because the test results can be very easily misinterpreted.
Now, any tool that you’re going to be using to run these tests is going to have a lot of statistical models built in, and they’re likely going to be a blended model of both frequentist and Bayesian statistics. Frequentist statistics is what we traditionally think of as statistics: normal distribution curves, medians, interquartile ranges, standard deviations, etc.
It’s this type of inference that draws conclusions based on the frequency and proportion of the data. Bayesian statistics, which comes from Bayes’ Theorem, deals a lot more with probabilities, and with type one and type two errors, which are false positives and false negatives.
Again, I don’t want to go into too much depth in the statistics because it is a very deep topic, and it’s not really the goal of this webinar, but I do want to point you in the right direction so that you can hopefully avoid some of the stumbling blocks and the potholes. So, a couple of things that I just want to mention up front is statistical significance.
Do not use this as a way of determining when your test is over. Statistical Significance is really a way of determining if the difference that you see in performance is real, or whether it is due to random chance. Number two, you really need to understand statistical power.
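For readers who want to see what a significance check actually computes, here’s a sketch of a standard two-proportion z-test, the kind of frequentist calculation most testing tools run under the hood. This is a generic textbook formula, not any particular vendor’s implementation, and the visitor counts are invented:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is the difference in conversion rates real,
    or plausibly just random chance?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 3.0% vs 3.9% conversion on 4,000 visitors per variant
z, p = two_proportion_z_test(120, 4000, 156, 4000)
print(z, p)
```

A p-value below 0.05 says the gap is unlikely to be chance; it says nothing about whether you’ve collected enough data, which is what power is for.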
I’m going to give you a tool here in just a second that’s going to help with this. But you want to shoot for around 80% on statistical power. If your results are underpowered, if you don’t get enough traffic to the page, your test is just doomed to fail before you even begin. Number three, you want to determine sample size on your test.
A lot of times the advice is to determine the sample size before you begin the test, and there are sample size calculators out there all over online, Optimizely has a really solid one that I use quite a bit. So the caveat I would say to this is the sample size calculators are going to want you to input certain values, like how much traffic are you going to get to the page?
And what is the current conversion rate? And what is the uplift that you want to be able to detect? Now, if you’re running a new test or creating a new landing page, you may not know these specific inputs. It may be wise to just get the test up and running, but be sure that you do calculate that sample size to ensure that you are running a properly powered experiment.
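As a rough illustration of what those sample size calculators are doing with your inputs, here’s a sketch using the standard normal-approximation formula for a two-sided test at 80% power. The baseline rates and uplifts below are hypothetical; for a real test, use a purpose-built calculator like Optimizely’s:

```python
import math

def sample_size_per_variant(baseline_rate, relative_uplift):
    """Approximate visitors needed per variant to detect a relative
    uplift, at alpha = 0.05 (two-sided) and 80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    z_alpha, z_beta = 1.96, 0.84   # critical values for those settings
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p2 - p1) ** 2)

# 3% baseline: a 20% relative lift is far cheaper to detect than a 5% one
print(sample_size_per_variant(0.03, 0.20))  # roughly 14,000 per variant
print(sample_size_per_variant(0.03, 0.05))  # well over 100,000 per variant
```

Notice how quickly the requirement explodes as the uplift you want to detect shrinks; that is exactly why underpowered tests on low-traffic pages are doomed before they start.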
Some other advice: a lot of these tools will give you the option of running multivariate tests. In general, I would advise against that practice; unless you have a huge amount of traffic, you probably want to be running an A/B test.
Also, if you’re testing more than one variable, say a headline change and a hero image change, right there with those two variables you’ve got four different combinations that you need to test. So if you want to be able to understand what is actually driving the change that you may or may not see in your data, you really need to be testing one variable at a time.
The one caveat and exception to that rule, I would say, is if you are trying to do a radical redesign, or you seem to be really stuck on a page and you just want to see, “What can we do? Let’s just throw something out there, something completely different.” I think that is a good chance to run a multivariate test, but in general you want to stick to A/B tests.
The other thing I would say is, don’t be obsessed with statistical significance. There are places where you absolutely have to run that test all the way through so that you are confident in your results. But your role as a conversion optimizer is not simply to run experiments.
These statistical methods are tools; they are not destinations in and of themselves. A lot of times, if you don’t have the traffic, you can use a Bayesian probability to determine a winner.
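To show what “a Bayesian probability” means in practice, here’s a small Monte Carlo sketch that estimates the chance that variant B truly beats variant A, given observed conversions. It assumes simple uniform Beta(1, 1) priors, and the counts are invented for illustration:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=0):
    """Monte Carlo estimate of P(rate_B > rate_A), with uniform
    Beta(1, 1) priors on each variant's true conversion rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Sample a plausible true rate for each variant from its posterior
        a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += b > a
    return wins / draws

# 40/1000 vs 55/1000: not significant by a classical z-test,
# but B is still probably the better page
print(prob_b_beats_a(40, 1000, 55, 1000))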
A lot of times if you don’t have enough traffic to really run a good data-driven test, it may be wise to lean into usability studies or surveys, get people’s reaction, what stands out on the page, and redesign based on those comments. I want to point you to a tool that I think is really great, and that is abtestguide.com.
Again, that’s abtestguide.com. And that’s going to have both a Frequentist and Bayesian calculator in it so that you can enter your data and get a full breakdown on all of the information that you need, from power, to standard error, to uplift, it’s a really fantastic tool, I highly recommend that you look into that.
The other thing that you can look into is an article called Understanding A/B: Testing Statistics to get REAL Lift in Conversions, and that’s on a site called optimizesmart.com.
– And we’re going to be sending out all of these resources after the fact as well, so if there’s anything else that David’s mentioned that you guys specifically are looking for, just send it in the chat box as well.
– That’s great. Thank you. So, one last thing I would add: you may need to be mindful of calculating the down-funnel implications and revenue implications of your tests. It depends, again, on the test and what exactly you are calculating for.
If it’s just a click-through rate on a campaign, probably not as crucial. But if you are looking at a page that has multiple consumer journeys on it, you need to be able to calculate the impact you’re going to have on all of these different journeys.
There have been results where we ran a hypothesis and found that, okay, yes, we did get a higher click-through rate to, say, our pricing page, but that resulted in a drop in demo requests, and our ultimate revenue outcome was actually lower. So even though the test looks like a success, it’s actually not.
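That kind of trap falls out naturally if you score variants on revenue per visitor rather than on the first click. Here’s a sketch with hypothetical numbers, where the variant wins on click-through but loses where it counts:

```python
# Hypothetical down-funnel numbers: B wins on clicks but loses on revenue
variants = {
    #              visitors, pricing clicks, demo requests, revenue ($)
    "A (control)": (10_000, 800, 120, 60_000),
    "B (variant)": (10_000, 1_100, 90, 45_000),
}

for name, (visitors, clicks, demos, revenue) in variants.items():
    print(f"{name}: CTR {clicks / visitors:.1%}, "
          f"demos per visitor {demos / visitors:.2%}, "
          f"revenue per visitor ${revenue / visitors:.2f}")
```

Judged on click-through rate alone, B looks like the winner; judged on demos and revenue per visitor, A clearly is.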
That’s very important to consider. Finally, this brings us to reporting. So once you’ve gone through and you’ve really analyzed and pulled out your results, and you’re confident, you’ve got to bring those results back to your key stakeholders, to people that are going to be implementing these results.
But the really important step that sometimes gets missed here is that these results can often show insights that can be extrapolated to other marketing initiatives. So for example, if you think that your value is how quickly you deliver your product, and your customer base seems really to respond to messaging about quality, well, that could be very informative for competitive analysis.
That could be very informative for other messaging and for campaign development. So I think those are the real gems that you get out of running experiments that are very important to share. And that’s why it’s important that you’re not just quietly running experiments in a bubble, you’re helping to build a culture of experimentation.
And there’s a strong argument to say that you should be making the results of these tests public, put them up on a TV screen, share them in company meetings, so that everyone begins to think about iterating in this way. Successful companies iterate, and we’re all very familiar with that. Other good ideas is to set up a system of sourcing test ideas from the entire community, from the whole team, and reward people that submit successful ideas, and give them a little recognition, a pat on the back so that people start to think about ways that they can be optimizing their processes.
So, moving on, let’s briefly go over some of the tools that are out there. Your A/B testing platforms: you’ve of course got Google Optimize, a great place to get started. Optimizely is an enterprise-grade tool. It’s a great tool, but it carries around a $50,000 price tag, and they actually value their web tool at $30,000, so you may want to think twice about that.
Same with Adobe Target, it’s an excellent tool, but it’s an enterprise grade tool. We’re currently using VWO, Visual Website Optimizer, and I think this is a great tool that can scale with your organization. It’s good for small to medium-sized businesses. And of course, there’s a number of other platforms, I’d recommend you go to G2 Crowd to take a look at some of the other options out there.
Some of the research tools out there that provide things like surveys, heat maps, and session recordings: Hotjar, Crazy Egg, FullStory (a really robust tool), SurveyMonkey. Some other great places to look: Unbounce or Landing Lion give you a great way to iterate on landing pages, and ConversionXL and WiderFunnel are really great.
Quickly, let’s cover just a couple of special topics before we dig into our examples. One, is branding. So, because of I think where conversion optimization originated, a lot of people think that branding and conversion optimization are really antithetical to each other.
But I don’t think that’s the case at all, it’s really important to remember that for a lot of companies and organizations brand equity is a huge part of their valuation, and it’s not something that you want to compromise. In other situations, all that matters is just driving clicks. So there is a spectrum of how you evaluate tests in terms of branding.
You think about organizations like Apple or Microsoft, Apple obviously has incredibly strict branding guidelines. But that’s not to say that they aren’t running any experiments, or they aren’t doing any testing.
Now compare that with, say, overstock.com or Amazon, who are likely testing everything from logos to brand colors. So there’s a spectrum when it comes to branding. Second, I would like to touch on SEO, search engine optimization. This is very important because as you’re running tests with different content on different pages and serving different pages, it is possible to penalize yourself according to the algorithms, so it’s something to think about very, very carefully.
Now, one thing I would mention is that Google has said that they are in favor of testing and optimization, obviously since they released a product in that category. But it is wise to think about. So for example, we work in the call tracking space, so if we were to test various headlines on our homepage we are very careful when we are iterating those different versions of that headline to make sure that every version we create has the keyword “call-tracking” in it.
If organic traffic is a big part of your business, make sure that you’re mindful about that. Third, personalization. This is really where we start to get into the future of conversion optimization and where it’s headed. This is the idea that different content performs better for different segments of your audience traffic, so new versus returning, or geographic, or firmographic.
The idea is now that you can send different traffic to different landing pages or different versions of landing pages so that they get the experience that will lead to the highest conversion rates for them. And this is really, like I said, where the industry is going.
And if you look at some tools that are coming out now, I’ve seen recently a tool called Intellimize. FunnelEnvy has developed a product, also OneSpot. And these are tools that run somewhere between $5,000 and $10,000 per month. And these tools drive traffic based on specific data cues, whether it’s firmographic data enrichment, whether it’s reverse IP, whether it’s geographic, and drives different traffic to unique landing pages.
These tools, using machine learning algorithms and AI, basically determine the winners and the flow of experiments without your having to be involved at all. And this is one reason that I didn’t spend more time on the statistics and math section of this webinar, is because these processes and techniques are starting to be automated in the same way that campaign management is being automated.
In the future traffic is just going to be driven by these algorithms to the unique content that the algorithm believes will be highest converting for those users. And presumably as we are able to get closer and closer to that one-to-one personalization, more and more versions of that content is going to be created.
I think there’s also huge room for expansion, to say that developing these tests will start to be automated itself. There’s a really fascinating company out of the UK called Phrasee, and their technology creates and optimizes email headlines. That’s all it does, but it outperforms human writers something like 99% of the time.
As you start to integrate these content creation tools with experimentation, machine learning driven capabilities, we’re going to see the way that websites function and the way that optimization takes place, change. So, that’s kind of a very high-level overview.
But let’s start to dig in to some use cases, some studies, and get into how this specifically relates to CallRail. We know that CallRail is a fantastic tool for bringing powerful analytics to your call activity, with seamless integrations like Google AdWords, Analytics, Salesforce, HubSpot, a lot of other platforms.
We know that there’s a lot of great features like dynamic number insertion and visitor tracking, so that you can see what web activity is leading to a phone call. We also know that CallRail is fantastic for tracking offline channels and phone calls. So you might be saying to yourself, “Okay, this optimization and experimentation is great, but my business runs on phone calls, and my chief call-to-action is people viewing my website and then calling me, so I guess I’m out of luck here.”
Well, CallRail has these great integrations with platforms like VWO or Optimizely so that you can see which variants are generating the most phone calls. So, one version of your page is going to automatically get one phone number, and another page, another variant will get a different number.
So then you can dig in and say, “Oh, okay, so which of these was more successful, pricing page A or the original?” So that’s a way that that we bring, we connect the dots between that web experience and those phone calls.
But CallRail also opens up a whole world of offline analytics and measurement, and that means that all of a sudden now you can start to be running tests with CallRail and your offline channels. So whether we are talking about events, or billboards, or radio spots, broadcast TV, print ads.
My wife used to work in an organization where she was handling the logistics for 50 events each year, and she was just one person on the team. So imagine an environment where you create three different event materials, send them to three equally-sized events and see which of those events produces the most phone calls.
Another variation is to say, okay, I want to go to different events, I’m going to send the same materials to both, but we’re going to put different phone numbers on them, so I know that this conference actually performs at a much higher conversion rate than conference X. Same with billboards: if you’re going to be releasing a huge number of billboards, it makes sense to test at least three or four different designs, so you can see which one performs better.
Same with radio spots, say you’re testing different content, does a radio spot with music work better than one that does not? Is it important to mention a phone number three times versus four times? What’s the difference there?
Now typically, when people start running experiments like this, you see an increase of around 20% to 30%, and that’s very broad, obviously, it changes with every campaign and every industry. But when you are testing and examining what is going to work better, and you make this a part of your routine workflow, you will inevitably start finding out what works better.
You know, I love a great case study. INSIDE THE CAVE was about the Obama campaign in 2012, when these kinds of analytics and techniques were really brought into the world of political campaigns, and they were running tests on various email headlines and having huge results. When they went back to the key stakeholders, the political advisors, the quote they had was, “Look, basically, we found that our guts were worthless. Our instinct never went with where the actual results landed.”
Making this a part of your routine flow, testing and experimentation is really critical, and it’s why most savvy and sophisticated marketing departments out there consider conversion optimization to be part of their core strategy and core functionality.
Madelyn: I think we did want to ask, since David just went over some great ways that you can start using CallRail to test online and offline, we kind of wanted to see who out there was interested in learning a little bit more or maybe if you’re already using CallRail to prove ROI with some conversion rate optimization, which is awesome.
– Awesome. So, we’ve got a few different questions and comments throughout asking David if you’d be willing to share any examples. Like, I know you mentioned the kanban board that you use to prioritize different projects, and a few of the different websites and tools that people can use. Would you be willing to share those after the fact, when we follow up with everyone via email?
– Yeah, there’s definitely some very specific examples, I can send screenshots. So we’ve got a lot of great materials and great resources that we can definitely connect you with.
– Awesome. So, Robert asked this question: at what call volume does this level of tracking start to show ROI? So basically, you know, doing A/B testing in CallRail, at what point does that make sense based on the spend that you’re doing for call tracking?
– You know, that really comes down to a question of, how are you going to be able to determine statistical significance or probability of success? And that’s why it’s really important to determine sample size ahead of time before running experiments if you can, because you can achieve statistical significance in an experiment with just 100 people if the difference is big enough.
If you divide that into 2 segments of 50 people each, and one segment gives you one phone call, and the second gives you 25 phone calls, obviously, you’re going to see a big impact, and it is definitely worth it. But if you are looking at a conversion rate of 3%, and you want to be able to detect a 5% uplift, and keep in mind that is 5% of 3%, so you’re going to need a very, very large sample size, and I believe that’s over 100,000.
So, understand what your sample size is and go to a sample size calculator, like the one we mentioned that Optimizely has; there's a whole bunch of them out there. Start running some numbers in there to see where you're at, and to see what difference it would make.
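As a rough sketch of the math those calculators run, here is a standard two-proportion sample size formula at roughly 95% confidence and 80% power, applied to the hypothetical numbers from the example above (3% baseline, 5% relative uplift); this is an illustration, not the exact formula any particular calculator uses:

```python
from math import sqrt

def sample_size_per_variant(baseline, relative_uplift,
                            z_alpha=1.96, z_beta=0.8416):
    """Visitors needed per variant for a two-proportion test.

    baseline:        control conversion rate (e.g. 0.03 for 3%)
    relative_uplift: minimum relative lift to detect (0.05 = 5% of baseline)
    z_alpha / z_beta: normal quantiles for ~95% confidence and ~80% power
    """
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)  # 3% lifted by 5% -> 3.15%
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p2 - p1) ** 2

# The webinar's example: detecting 5% of 3% takes a very large sample,
# well over 100,000 visitors per variant.
print(round(sample_size_per_variant(0.03, 0.05)))
```

Note how quickly the requirement shrinks as the effect grows: detecting a 50% relative uplift on the same baseline needs only a couple of thousand visitors per variant.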
I've seen a number of studies suggesting that every campaign benefits from testing, and that you should dedicate anywhere from 18% to 20% of your budget to it. But I would say, look at the channel overall, whatever you're considering running a test on. Determine what income or revenue you get from this channel, and then say, "Okay, what kind of an increase would I need to justify that spend?"
So if I got a 25% increase in performance, how much would it cost to create this iteration and set up the test, and how much time? Is there an ROI there? That's how you really want to approach determining whether or not you're ready for testing.
– Awesome. And that kind of segues into the next question that we have: you mentioned Bayesian statistical models earlier. So why use that specific model to test statistical significance?
– Yeah, so this gets back to the idea that the more tests you run, the more likely you are to produce good results, to find winners, to implement winners, and to actually increase your end results, your bottom line. The advantage of leaning into Bayesian statistics is that you can say, "This result is maybe not directly tied to revenue, but I'm pretty sure this variant is going to perform better." If the Bayesian statistics say there's a 90% probability that this variant is going to outperform the other, even before you reach classical statistical significance, then you can declare a winner.
So a lot of the times you’re going to see messaging out there from various testing organizations saying, “Marketers that use our tool run 10 times the number of experiments.” Well, how can that be? If you’ve got exactly the same amount of traffic, how am I going to be able to run experiments faster?
Well, it's by leaning into tools like Bayesian statistics that you can cull experiments early, before they ordinarily would have run their entire cycle.
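A minimal sketch of the "probability to beat control" idea David describes, using Beta posteriors with a uniform prior and Monte Carlo sampling; the visitor and call counts below are hypothetical, purely for illustration:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=42):
    """Monte Carlo estimate of P(variant B's true rate > variant A's).

    Each variant's conversion rate gets a Beta(1 + conversions,
    1 + non-conversions) posterior; we sample both and count how often
    B's sampled rate exceeds A's.
    """
    random.seed(seed)
    wins = 0
    for _ in range(draws):
        rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# Hypothetical call-tracking numbers: 50 calls from 5,000 visitors on the
# control vs. 65 calls from 5,000 visitors on the variant.
p = prob_b_beats_a(50, 5000, 65, 5000)
print(f"P(variant beats control) ~ {p:.2f}")
```

This is the kind of number that lets you stop a test early: if the probability is already high enough for your risk tolerance, you ship the winner rather than waiting out the full sample.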
– Awesome. And then one other question that we have, and guys, please feel free to use the chat box and send in any more questions you have. So you mentioned that CallRail uses VWO. For those who aren’t familiar, that’s Visual Website Optimizer. And someone wants to know if CallRail integrates with VWO, and what kind of tests you have currently run on the CallRail side of things, so for us, what are some cool things you’re testing?
– That's a great question. And yes, we integrate directly with VWO, which makes it so easy for us. When you go in to look at your dashboard in Visual Website Optimizer and it shows you the results from variant A and variant B, two goals are automatically added in there to see how many calls this version generated versus that one, and how many text messages.
So it’s incredibly easy, and the integration is so simple, I love it. And what was the second part?
– The second part was, what are some tests that you’ve recently or that you’re currently running for us here at CallRail?
– Yeah, that’s great. So we’re running a test right now, and this is really helpful. So the homepage on our site used to have two call-to-action buttons, one going to pricing, and then one going to request the demo page. And we’re running an experiment now to say, what if we added the free trial button right there on the homepage?
So for those that are already familiar with CallRail, they've already checked it out, and they know they just want to get right into a trial, let's skip those additional steps in the process. Let's reduce friction, and let's get them right into a trial. So let's add a button that says, "Start a free trial right now." So we're adding another consumer path, another decision journey, onto our homepage: they can go to pricing, they can request a demo, they can call, they can chat, and they can get right into the free trial.
So that's a great test, in that it gives us a really clear result: one test variant gets a whole bunch of clicks on the free trial button, so obviously that variant is outperforming along that metric. But I really need to go back in and calculate, okay, of all of those people that clicked free trial, how many of them did not click on Pricing, and did not click on Request Demo?
And then, what is the conversion rate of people that go to the pricing page and then click free trial? Do those numbers diminish to a significant degree, where the additional free trials are not balancing out and we're actually losing customers? Does it change if I add in the phone calls?
Because we know that phone conversations are a really high-converting channel. So if someone is, say, going to a request demo page versus just picking up the phone and calling, how does that impact the bottom line and the total number of customers we eventually have? Because adding in additional steps adds additional places for website users to fall off.
So that’s one example of a test that we’re running right now that requires down-funnel calculation, that requires call tracking integration, and that is going to have a significant impact on our bottom line.
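The down-funnel calculation described above could be sketched like this; every traffic number and close rate below is hypothetical, chosen only to show why a variant can win on free-trial clicks yet still produce fewer customers overall:

```python
def expected_customers(paths):
    """paths: {path_name: (conversions, assumed_close_rate)} -> expected
    total customers, weighting each conversion path by its close rate."""
    return sum(conv * rate for conv, rate in paths.values())

# Original homepage: two CTAs plus phone calls (hypothetical numbers).
control = {
    "pricing -> trial": (120, 0.10),
    "request demo":     (80,  0.25),
    "phone call":       (60,  0.30),  # calls assumed to close at a higher rate
}

# Variant adds a direct free-trial button. Trial starts go up (150 + 60 vs.
# 120), but it cannibalizes the higher-closing demo and phone paths.
variant = {
    "direct free trial": (150, 0.08),
    "pricing -> trial":  (60,  0.10),
    "request demo":      (55,  0.25),
    "phone call":        (50,  0.30),
}

print(expected_customers(control), expected_customers(variant))
```

With these assumed close rates the variant generates more trials but fewer expected customers, which is exactly why the top-of-funnel click metric alone can be misleading without the call-tracking data folded in.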
– Awesome. And then we’ve got one more question, it looks like that’s coming from Polson [SP]. And so Polson wants to know, have you noticed any trends in your results specific to companies with offline measurements compared to online?
– I have not specifically noticed any trends, or I wouldn't say that I have the data to back that up. In general, though, I think the implications of online and offline testing are the same: those marketers that commit to rigorous and continuous testing see improvements, and the metrics that get thrown around are usually around 20% to 30%.
So, I don’t have anything specific I can point you to, but I would say that the theory holds, and that if you begin and continue to run experiments in your offline channels, you will see uplift.
– I think also, Polson, something that might help you: some examples that we've heard from our current customers, who, if they're already using CallRail, their main form of leads is phone calls, which for us isn't necessarily the case.
We definitely get phone calls, and we want more of them, but unfortunately we're not internally getting as many phone calls as we know a ton of our customers are. But some specific results that we've heard are, you know, on a specific landing page, actually testing the conversions: how many people in general are filling out this form, and how many people are picking up the phone and calling? Okay, it looks like more people are calling.
Or what happens if I make the phone number bigger? What happens if I put the phone number inside of a red click-to-call button? Certain things like that. Then they're able to effectively test those results inside of CallRail and see how those small tweaks on web pages can actually help improve the offline conversions over the online conversions, which is maybe more powerful and meaningful for your business if the phone is where you're actually going to close the sale. And you know, the data does show that phone calls are better, more involved leads, more ready to buy.
So there are definitely certain tests we've heard our customers running that comparatively show that the offline, you know, maybe does do better than the online, depending on your business model. But it's always great to test it for whatever it is that you guys are doing. Awesome.
Well, it looks like that is it, and we don’t have any more questions. So thank you guys so much for joining us today. Thank you, David for doing an awesome job, being so informative, and we’ll definitely follow up here with an email, so be on the lookout. And as I mentioned earlier, we’re also going to be posting the recording of this on our website as well if you’d like to share it.
Awesome, great. Thanks, guys.
– Thanks so much, everyone.