
David Blinov: You Can’t Hack Your Way to Doubling Sales

It’s been around five years since we first heard about growth marketing in a big way. There was a lot of talk around town, but few seemed to grasp the full potential behind this seemingly new discipline. What about growth hacking? Sounds even more efficient, especially for marketers hungry for results. In reality, growth marketing can be defined as systematic marketing experimentation.


Although understood by few, growth marketing has sneaked its way into traditional marketing departments, both B2C and B2B. David Blinov is a man who can tell you everything about growth marketing. Running his own B2B growth marketing agency, The F Company, David has had the chance to work with prestigious B2B businesses in the Nordics and can walk you through the full spectrum of phases and terms one should cover when implementing growth marketing.


David will be speaking at our upcoming B2B marketing and sales seminar in Vilnius.


As a practitioner, how would you define growth marketing?


As a growth marketer, I feel that the term has been thrown around quite a lot these past couple of years. I’m the CEO of a B2B growth marketing agency and even I regard it as a useless buzzword.


Growth marketing by definition is systematic marketing experimentation. What growth marketing means for us at The F Company is practicing marketing that relies on rapid testing and learning. It’s an experimentation cycle: instead of putting all your eggs in one basket, coming up with one or two campaign ideas and then crossing your fingers for them to work, you test many different approaches in short sprints and then make your decisions based on data, not opinions. You quickly kill what doesn’t work and focus your energy on the things that work much better.


Do you think the same approach could be used in other disciplines? For instance, if I were writing a book and interested in finding the optimal narrative for a given chapter.


Many different things can be tested, things unrelated to business or marketing altogether. A lot of people seem to think that growth marketing is a new concept. One of the oldest books ever written about advertising, Scientific Advertising by Claude C. Hopkins, dates from 1923. Hopkins says that you can send a thousand ads to one target audience, change them a little, send another thousand to a different audience, and then see who responds better. This concept is not new. Modern marketers have rediscovered it and are now applying it via A/B testing, for example.


From a sociological point of view, critics claim that in order for A/B testing to work, the sample size needs to be quite big, which is rather rare in B2B. Hence, marketers end up making important decisions based on marginal differences.


Yes, our B2C marketing colleagues are luckier when it comes to sample sizes. In a B2B environment, for a lot of our clients the entire addressable audience globally is 10,000 people. If they target billion-euro companies and narrow it down to C-level executives only, you can’t physically come up with more people. If they rely on A/B testing and experimentation, you are absolutely right: in order for us to make decisions, the data has to be significant. Here, without getting all scientific, I would recommend using one of the many A/B testing sample size calculators, which will tell you exactly how many emails you need in order to be able to draw valid conclusions.
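
For readers curious what such a calculator computes under the hood, here is a minimal sketch in Python using the standard two-proportion z-test approximation; the baseline reply rate, the lift and the function name are illustrative assumptions, not figures from the interview:

```python
from math import ceil
from statistics import NormalDist

def ab_sample_size(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Emails needed per variant to detect a lift from rate p1 to rate p2
    (two-sided z-test on two proportions, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detecting a lift from a 3% to a 4.5% reply rate
print(ab_sample_size(0.03, 0.045))  # ~2,515 emails per variant
```

At roughly 2,500 emails per variant, an addressable audience of 10,000 people leaves room for only a couple of clean tests, which is exactly the constraint David describes.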


That validity is something we call statistical significance. For marketers, the magic number is 95%: email newsletter variation 1 is 95% likely to be better than email variation 2. So when our clients ask us the same question, what do we advise? What we say is that, of course, you want to get as close to statistical significance as possible: the higher that number, the more likely you are to be right. But then there is an interesting quote from Jeff Bezos, who claims that most decisions should be made with around 70% of the information you wish you had, because the impact of quick action can be much bigger. At the end of the day, it’s an opportunity cost: there is a cost to not acting on the information you have while waiting for 95%.


In very practical terms, most of the time when we reach 70-80% confidence with our campaigns, we pause the experiments and take the opportunity. We could be wrong, but most of the time we are right.
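
As a rough illustration of where a number like “70-80% confidence” can come from, here is a minimal sketch assuming a simple normal approximation to the difference of two conversion rates; the counts and the function name are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def confidence_b_beats_a(clicks_a: int, sends_a: int,
                         clicks_b: int, sends_b: int) -> float:
    """Approximate probability that variant B's true rate exceeds variant A's."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    # Standard error of the difference between the two observed rates
    se = sqrt(p_a * (1 - p_a) / sends_a + p_b * (1 - p_b) / sends_b)
    return NormalDist().cdf((p_b - p_a) / se)

# Hypothetical example: 30/1000 clicks for variant A vs. 36/1000 for variant B
print(f"{confidence_b_beats_a(30, 1000, 36, 1000):.0%}")  # ~77%: pause and act?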


Another aspect about marketing experiments is the mindset. Corporate culture can swing both ways: the Apple way or the Google way. The first tweaks its product to near perfection and the second keeps testing, failing and sometimes winning. How does that fit with the overall growth marketing theory?


With the example you gave, one might assume that the Apple way is the right way because they are so successful. In reality, Apple is in a very unique position because they have the resources to perfect the product.


If you want to be competitive, you need to be able to test, iterate, and release new versions of both your products and marketing campaigns. Collect data, then improve based on it. From my experience interacting with hundreds of B2B companies, they all come back to the same issue: a company culture that doesn’t sit well with failure. They would love to test, but they are not allowed to fail. Statistically, only about 10% of marketing experiments are successful. If your culture is focused on winning all the time, then those experiments just seem like a waste of time and money.


Companies like Amazon and Booking.com run thousands of experiments. It’s ingrained in their culture. They test, collect data, improve upon the things that work and arrive at the perfect version of the product much quicker than you and I would. Booking.com has made experimentation available to everyone in the company, so people in different roles, from customer success to marketing to sales, can run tests independently and learn from those tests without approval from their management. Statistically, 75% of Booking.com employees run their own experiments, testing different versions of the checkout pages, messaging, ads and images, simply to see what resonates best.

Interested in learning more about B2B growth marketing? Sign up for our upcoming seminar in Vilnius.


Another interesting part of growth marketing is prototyping. Companies release prototypes and sometimes even launch sales before the product is ready. There is a delicate line here, because one might easily damage their reputation if they don’t live up to the promise.


At the end of the day, it comes down to the promise. I have an interesting example from my own career, from a couple of years ago when I was involved in a crowdfunding campaign for a smartwatch. We needed to raise a couple of hundred thousand euros in order to secure more funding and actually build the product. We didn’t even have a prototype when we started the campaign, just a 3D render. We understood that we needed to build a product that would cater to the needs of our audience, but we had no idea what those needs were.

We ended up building 50 different landing pages, each telling a slightly different story about what this product could do, what it could look like and who it could appeal to. So we told 50 different stories to 50 different categories of people. We ran paid campaigns to drive traffic to those pages and asked people to leave their contact information, so that when the product eventually went into production we could contact them. By collecting this data we realised that there was a very clear buyer persona based on the interest people were showing: a certain category of middle-aged men. We avoided legal trouble because we didn’t promise anything. When we launched the crowdfunding campaign, we focused on that particular audience and even used the emails collected earlier. The campaign was exceptionally successful.


How willing are people to respond to such tests? Isn’t there a risk of people losing interest?


Right now we’re talking about a very narrow use case. As long as it’s clear that this is hypothetical and you are trying to measure interest, it should be just fine.


There are of course many different ways for people to indicate their interest. With some current clients we measure content consumption: we tweak the content and see which version gets consumed the most.


Then the statistical part comes into play: whenever you test a product, an iteration or an ad campaign, you need a critical mass of data to make those decisions. In business-to-business, we might be talking about a very specific product with a potential audience of a few thousand people. How would you use the same strategy in that case?


The narrower the audience gets, the more value should be placed on qualitative data. We have a case from a Swedish company called GetAccept, a sales enablement platform. They sell to decision makers in certain industries and wanted to figure out which direction to take with their marketing communications. First, they interviewed 400 people in their target markets, from 400 companies they would love to close in the next few years. They did this via an online survey. Second, they picked up the phone and recorded 16 hours of conversations with their existing customers - the people you want to talk to because they have already trusted you with their money. They gathered data across various topics and got an idea of where they should go with their marcoms. That’s the qualitative route.


How can you convince 400 decision makers to respond to your questionnaire?


There is indeed a very high chance that you won’t get all 400 to respond. Sometimes incentives can help, or there are research companies you can hire. Getting that information is going to cost you some money. They didn’t pick up the phone and call all 400 people, but they did call their existing customers, because the barrier is much lower and the trust is already there. This data was combined with internal interviews they conducted within their own team - the people who interact with clients on a daily basis.


With more traditional companies, the product tends to be less exciting. Say they produce a certain component or industrial machinery and make good money, but their marketing is very dated. How could these companies implement growth marketing?


One third of our clients are those traditional businesses, where marketing has always played the role of sales support. And all of a sudden, especially after corona, marketers are expected to be the revenue driver. Growth marketing can help with that, but it can be very hard to convince management that things are going to be done differently now.


The first thing we recommend is to start with a pilot project. Choose one area or market and then, for 6-12 months depending on your sales cycles, run marketing experiments. Collect the data and use it as ammunition when you walk into that management meeting: tell them that if we follow this formula, we are going to improve our performance.


Now, when it comes to the things we can test with traditional companies, it’s very similar to our more modern customers: value propositions, website pages, landing pages, different marketing channels, different buyer personas, and so on. You would be surprised by how many customers don’t have a clear idea of who they should be targeting. It could be that they defined their buyer personas 10-15 years ago and are still targeting the same people with the same messages, without stopping to think that maybe the people or their needs have changed. The space for experimentation is unlimited, even for very traditional B2B companies.


Based on your experience, what are the biggest mistakes or false expectations for beginners?


The biggest false expectation is that growth marketing is some kind of shortcut to success. I especially have a problem with the term growth hacking, because it sounds as if you can hack your way to doubling your sales. The reality is quite the opposite: it takes a lot of time and work.


In terms of mistakes, I can easily name one, which I keep seeing with 80% of our Nordic customers. They keep coming to me with the same problem: although they run experiments, they feel like they are not getting anywhere - it feels like starting from scratch every time. That is because they are not running tests systematically. You need to build a process that allows you to plan, record and analyse experiment results in a way you actually learn from.

Use a simple spreadsheet where you define the sample size, goal and timeline. Once you have done that, list all your experiments and prioritise them using the ICE model (impact, confidence, ease): you assign points from 1 to 10 for each of the three factors and start with the experiments that are easiest to do and that you think will make the most impact. Finally, you record the results. The next time you start an experiment with the same audience, market and product, you go through that database in the spreadsheet, see what you have already tested and build the next set of experiments on the things that actually worked. You will end up with a collection of learnings that employees across different markets and departments can tap into: they will know exactly which value proposition works for a particular audience, how to talk to them and which channels to use.
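
Here is a minimal sketch of the spreadsheet-backed process David describes, in Python; the experiment names and scores are made up for illustration, and the ICE score is averaged here (multiplying the three factors is an equally common variant):

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    impact: int      # 1-10: how big the win would be if it works
    confidence: int  # 1-10: how sure you are it will work
    ease: int        # 1-10: how cheap and fast it is to run
    result: str = "not run yet"  # filled in after the sprint

    @property
    def ice(self) -> float:
        # Averaged here; some teams multiply the factors instead
        return (self.impact + self.confidence + self.ease) / 3

# Hypothetical backlog
backlog = [
    Experiment("New value proposition on pricing page", 8, 5, 7),
    Experiment("LinkedIn ads aimed at CFO persona", 6, 4, 9),
    Experiment("Gated vs. ungated whitepaper", 5, 6, 8),
]

# Run the highest-scoring experiments first, then record the results
for exp in sorted(backlog, key=lambda e: e.ice, reverse=True):
    print(f"ICE {exp.ice:.1f}  {exp.name}  [{exp.result}]")
```

Kept in a shared sheet or a small script like this, the result column becomes the database of learnings that the next round of experiments builds on.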
