When you add a link to your website in a tweet and then check your web analytics, you’ll be excited to see the sudden rush of traffic. All those people checking out your stuff. Hey, let’s try that again…
If you are promoting a product or testing out a new content marketing campaign, you quickly become disappointed that so few of those apparent ‘clicks’ convert to a download, a share, an inquiry or some other post-social action. You start to ask yourself questions like “is there something wrong with the page content?” or “is the website misbehaving?”. What’s up? Why is all this initial interest evaporating into invisible nothingness? And what can I do about it?
We do the legwork to find out what, and why…
A couple of days ago we ran an experiment to see what the deal is. We tweeted a self-promotional tweet [view the original tweet here] with a link to get a free landing page, guessing that due to the timing and content we were unlikely to get much interest from our regular followers.
We shortened the link with Bitly so we could get their count of clicks. At the end of the expanded link there really was a genuine offer, plus a small addition to capture the browser “user-agent” string that most browsers send when they visit a page. The user-agent string is anonymous: it is just a technical jumble of characters from which you can guess the operating system, the browser, the version and so on. It is by no means foolproof, let alone accurate. But it does give some clues, and it gave us a quick way to count the page hits.
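If you want to replicate the trick, here’s a minimal sketch of the idea in Python. This is an illustration, not our actual setup: the filename, port and page content are all made up. It serves the landing page and logs the user-agent of every hit.

```python
# Minimal sketch: serve the landing page and log each visitor's
# User-Agent header. Filename, port and page body are placeholders.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="hits.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class LandingPageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Most clients send a User-Agent header; some send none at all.
        ua = self.headers.get("User-Agent", "<none>")
        logging.info(ua)
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>The offer page goes here.</body></html>")

if __name__ == "__main__":
    HTTPServer(("", 8000), LandingPageHandler).serve_forever()
```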
With all that in place, we had Bitly’s count of clicks and our web server’s count of hits. The hits are shown on an hourly basis in the following chart.
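(If you’re following along at home, bucketing the logged hits by hour is straightforward. A sketch, assuming the hits.log format written by the handler above:)

```python
# Sketch: count hits per hour from the log written above, assuming
# logging's default timestamp format ("2013-01-15 14:02:31,456 ...").
from collections import Counter
from datetime import datetime

hourly = Counter()
with open("hits.log") as log:
    for line in log:
        stamp = line.split(",")[0]  # drop milliseconds and the message
        hit_time = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
        hourly[hit_time.replace(minute=0, second=0)] += 1

for hour, count in sorted(hourly.items()):
    print(f"{hour:%Y-%m-%d %H}:00  {count}")
```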
Wow! We’re amazing, right? Wrong.
In fact, of the 106 hits shown here, Bitly registered only 3 as real clicks. You read that right, THREE. Which means that 103 were other random hits on our landing page that were thoroughly messing with the analysis.
What were these rogue hits? Well, here are the browser user-agent strings we received:
If your screen is large, or you have great eyesight (or probably both), you’ll see what a jumble of information the browser user-agent gives. In some, like the first, we see:
RebelMouse/0.1 Mozilla/5.0 (compatible; http://rebelmouse.com) Gecko/20100101 Firefox/7.0.1
What does all this mean? Well, it tells us fairly clearly that rebelmouse.com is the provider of the technology hitting the page (14 times in this case), and that to ‘help’ the page present nicely (rather than mangle itself so RebelMouse can’t read it), it also pretends to be a version of Mozilla Firefox running the Gecko layout engine (the part of the browser that draws your nice page in a recognizable way). Whatever the naming, RebelMouse is basically an aggregator: it reads pages with a robot so they can be re-presented on your own RebelMouse site. These are not human clicks.
Then there are more of these tech ‘browsers’ hitting the page. Java/1.6.0_35 is the second, and I know for sure that no commercial browser presents itself under the name of just a programming language version (the marketers would have intervened first!). So we can rule out a quarter of our hits in the top two lines alone.
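Screening out these obvious robots is easy to automate. A rough sketch, again assuming the hits.log from earlier; the signature list is ours and far from exhaustive:

```python
# Rough sketch: flag user-agents that match known robot signatures.
# The signature list is illustrative only; real bot detection needs
# a much longer list (or a maintained library).
BOT_SIGNATURES = ("rebelmouse", "java/", "phantomjs", "bot", "spider", "crawler")

def looks_like_robot(user_agent):
    ua = user_agent.lower()
    return any(sig in ua for sig in BOT_SIGNATURES)

with open("hits.log") as log:
    # Each line is "<date> <time> <user-agent>", as written by the logger.
    hits = [line.split(" ", 2)[2].strip() for line in log]

robots = [ua for ua in hits if looks_like_robot(ua)]
print(f"{len(robots)} of {len(hits)} hits match a known robot signature")
```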
Now, the next two are interesting:
Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.4; en-US; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2
Mozilla/14.0 (compatible; MSIE 9.0; Windows NT 7.2; .NET CLR 4.0.3705;)
These look like regular human browsers, but Bitly didn’t register them, and extra digging was required to find out why (I won’t bore you with the details). By now, though, we’ve ruled out 40 of our 106 so-called clicks. To robots.
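To give a flavour of that digging (our illustration, not necessarily what Bitly’s filters look at): genuine browsers only ever announce themselves with a small set of version tokens, so implausible values give the game away. The second string above fails on two counts.

```python
# Sketch: sanity-check claimed version tokens against values that
# actually shipped (sets are illustrative, roughly right for 2012-era UAs).
import re

REAL_MOZILLA_TOKENS = {"4.0", "5.0"}  # every genuine browser uses one of these
REAL_WINDOWS_NT = {"5.0", "5.1", "5.2", "6.0", "6.1", "6.2"}  # up to Windows 8

def plausibility_problems(user_agent):
    problems = []
    m = re.match(r"Mozilla/(\d+\.\d+)", user_agent)
    if m and m.group(1) not in REAL_MOZILLA_TOKENS:
        problems.append(f"Mozilla/{m.group(1)} never shipped")
    m = re.search(r"Windows NT (\d+\.\d+)", user_agent)
    if m and m.group(1) not in REAL_WINDOWS_NT:
        problems.append(f"Windows NT {m.group(1)} does not exist")
    return problems

print(plausibility_problems(
    "Mozilla/14.0 (compatible; MSIE 9.0; Windows NT 7.2; .NET CLR 4.0.3705;)"))
# -> ['Mozilla/14.0 never shipped', 'Windows NT 7.2 does not exist']
```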
So we’ve wasted all that effort on a bunch of dull tech that can’t even do anything with our page? Keep reading, the fun part is still to come…
This got confusing - let’s back up a bit
We could dig in further, but even without forensic work you can see that in just this quick test we also received visits from these robots:
- (something using) PhantomJS
That’s a whole lot of robots. And a whole lot of non-clicks.
Why do these robots ‘click’ links? Why not just read the tweets?
Partly, the robots want to see the content behind a tweet so they can present a tidy summary of the linked page to users. Does that mean that users are actually consuming some of your content, so the robot ‘click’ could be considered valid?
The answer here is probably no. A human reader of this robot-mangled data is probably just seeing a brief link preview, a short description and no more (just like what you see when a page is shared on Facebook). That is enough for a human to decide whether they do or don’t want to read your stuff.
What is the takeaway?
You can get super-excited by your web analytics, or you can look carefully at what is going on. The real reason for a landing page is to elicit a response from a user. Only through some form of human interaction can you really tell if your campaign is working; and with so many robot visits in the numbers, it is equally hard to know whether your landing page is scaring people away or whether those abandoned visits were never human in the first place. Take care in how you read your web visits.
The big deal, though, is to pay attention to the meta tags at the top of the page HTML. Or, for pages built in a visual editor, when you are asked for a description of the page, spend more than a few seconds writing it. Robots extract this information to show in link previews, and these few words may be all that much of your audience (reading through an aggregator, or a Facebook or LinkedIn share) ever sees of your page. A poorly thought-out description could completely put people off ever clicking through, even if the tweet sounded great.
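To see your page the way the robots do, you can pull out exactly what they pull out. A sketch (the URL is a placeholder, not a real page): it fetches a page and extracts the description meta tag, which is roughly what ends up in a link preview.

```python
# Sketch: extract what a link-preview robot roughly extracts, using only
# the standard library. The URL below is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen

class DescriptionFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Preview robots typically read <meta name="description"> or the
        # Open Graph equivalent <meta property="og:description">.
        if tag == "meta" and (a.get("name") == "description"
                              or a.get("property") == "og:description"):
            self.description = a.get("content")

html = urlopen("https://example.com/landing-page").read().decode("utf-8", "replace")
finder = DescriptionFinder()
finder.feed(html)
print(finder.description)  # these few words may be all your audience ever sees
```

If what prints out doesn’t sell the page, the tweet never stood a chance.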