What are experts doing to generate traffic and grow an audience around their content? What’s working, what’s not, and how is that changing?
It’s clear that what works is changing
At each point in the funnel (traffic, subscription, sales), things are changing.
On the one hand, traffic is harder and more expensive to come by, users’ inboxes are inundated with offers, and market competition on paid solutions is only growing.
On the other hand, technology and monetization approaches are evolving, with patron subscriptions, pay-what-you-can pricing, bundles, influencer sponsorships, niche ad monetization platforms, and better tech-enabled workflows.
Right now I’m taking a three-pronged approach: surveys, interviews, and website data analysis.
Series of Five-Question Interviews
Each five-question interview was a roughly 20-minute call loosely structured around these questions:
- Can you tell me what last week looked like for you? I’m looking for an idea of how you spend your time.
- What’s your biggest focus this year from a marketing standpoint?
- Compared to last year, what about your approach to generating traffic, subscriptions, and sales has stayed the same, and what’s changed?
- Is SEO something you care about?
- If you could wave a magic wand and be able to do anything that you can’t do today, what would it be?
The goal was to better understand sentiment and challenges around generating consistent traffic.
Goal: Inform hypotheses, create a resource to pull from for coding dimensions of marketing efforts, and gauge willingness for deeper interviews.
Disseminated how? Email outreach, Twitter, relevant Facebook groups, LinkedIn.
Email: I’m doing some research to better understand how experts serving large audiences online approach increasing website traffic. I’d love to get your input. Would you be willing to take a 5-minute survey?
Optional opt-in: Would it be okay if I reach out to you for an interview? It would be 100% confidential.
Website audits at scale
Graph-based approach with ScreamingFrog + Neo4j
I’ve spent a fair amount of time developing a workflow to semi-automate crawling websites, extract relevant data about content, technologies used, and navigation items, and combine it with other relevant information, like third-party data from ahrefs.com and semrush.com.
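To make the crawl-to-graph step concrete, here’s a minimal sketch of turning a link export (ScreamingFrog can export outlinks as CSV) into Cypher `MERGE` statements for loading into Neo4j. The `Source`/`Destination` column names, the `Page` label, and the `LINKS_TO` relationship type are my assumptions for illustration, not a required schema.

```python
import csv
import io

def outlinks_to_cypher(csv_text):
    """Convert an outlinks CSV (Source, Destination columns) into
    Cypher MERGE statements that build a page-link graph in Neo4j.
    MERGE is idempotent, so re-running a crawl won't duplicate nodes."""
    statements = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        src, dst = row["Source"], row["Destination"]
        statements.append(
            f'MERGE (a:Page {{url: "{src}"}}) '
            f'MERGE (b:Page {{url: "{dst}"}}) '
            f'MERGE (a)-[:LINKS_TO]->(b)'
        )
    return statements

# Hypothetical two-row export for demonstration.
sample = """Source,Destination
https://example.com/,https://example.com/about
https://example.com/,https://example.com/blog
"""

statements = outlinks_to_cypher(sample)
for stmt in statements:
    print(stmt)
```

In practice you’d run these statements through the official Neo4j driver (or use `LOAD CSV` directly in Neo4j), and attach the ahrefs/semrush metrics as properties on the `Page` nodes.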
Layering in additional data
Real-life relationships are discoverable just by crawling the web. This is essentially what Google does: it maps relationships between entities, assigns value to those relationships based on a host of factors, and then uses them to determine which sites should rank for queries around which topics.
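The idea of assigning value to link relationships can be sketched with a toy PageRank, the classic link-analysis algorithm behind early Google ranking. This is illustrative only; real ranking uses many more signals than link structure.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Tiny power-iteration PageRank over a dict mapping each page
    to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for dst in targets:
                    new[dst] += share
            else:
                # Dangling node: spread its rank evenly across all pages.
                for p in pages:
                    new[p] += damping * rank[src] / n
        rank = new
    return rank

# Hypothetical three-page site: both inner pages link back home.
graph = {"home": ["about", "blog"], "about": ["home"], "blog": ["home"]}
scores = pagerank(graph)
```

Since every page links back to "home", it accumulates the highest score, which is the intuition behind valuing pages by their inbound links.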
I am hoping to combine correlational data (like rankings, website characteristics, and link-based relationships) with self-report data from people intentionally working to generate traffic to their sites. In the future, I’ll look at a more formalized progression toward experimental designs (changing sites and tracking the effects).
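At its simplest, combining the two data sources is a join keyed by domain. A minimal sketch, where the field names (`referring_domains`, `activities`) are hypothetical placeholders for whatever the crawl and the surveys actually capture:

```python
def combine_sources(crawl_metrics, self_reports):
    """Join per-domain crawl/ranking metrics with self-reported
    marketing activities, so correlational and self-report data
    line up in one record per domain."""
    combined = {}
    for domain, metrics in crawl_metrics.items():
        combined[domain] = {
            **metrics,
            "activities": self_reports.get(domain, {}).get("activities", []),
        }
    return combined

# Hypothetical inputs for demonstration.
crawl = {"example.com": {"referring_domains": 120}}
reports = {"example.com": {"activities": ["newsletter", "SEO"]}}
merged = combine_sources(crawl, reports)
```

Domains with no survey response simply get an empty activity list, keeping the correlational records intact for analysis.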