A Google research team has developed and tested a system that ranks content by the accuracy of its information rather than by incoming links. The new approach hasn’t left the test lab yet, but a report (PDF) from the tech giant describes a method that could shape the future of search – for users and marketers alike.

Sadly, the paper doesn’t make the experiment sound as exciting as it could, so it only seems right we summarise the key points and highlight what it could mean for business and website owners everywhere.

The quest to measure quality content
All we ever hear and talk about as marketers is the importance of quality content, but search engines still don’t know how to measure it. It’s no easy task either: quality is far too subjective to measure by numbers alone. Google’s best shot so far is a complex combination of ranking signals, including inbound links, social activity and a range of other data, used to rank pages for every search.

It’s a flawed system though, as shown by the string of algorithm updates Google has released in its fight to regain control of its own results pages. It’s been an aggressive backlash from the US firm, but a lot of progress has been made and the company clearly has no intention of stopping now.

Accuracy could be the next big signal
All eyes in marketing are on Google’s future algorithms, and this report from one of its research teams offers a real insight into what could lie ahead. It describes a new system modelled on the current method of ranking pages by the number of “quality” inbound links. Except this tweaked version counts the number of accurate facts in any given piece of content, using Google’s vast Knowledge Vault as a reference.

It uses this number to calculate a Knowledge-Based Trust Score, which helps measure the trustworthiness of a page and its content. And while the system hasn’t gone live yet, tests have been very positive – at least according to the team behind it.
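To make the idea concrete, here’s a deliberately simplified sketch of how such a score could work in principle: extract the facts a page asserts, check them against a reference knowledge base, and score the page by the share that hold up. The paper’s actual model is probabilistic and far more sophisticated (it also accounts for extraction errors), and every name and triple below is purely illustrative, not Google’s implementation.

```python
# Illustrative sketch only. Facts are represented as (subject, predicate, object)
# triples; the reference set stands in for Google's Knowledge Vault.
KNOWLEDGE_VAULT = {
    ("eiffel tower", "located_in", "paris"),
    ("barack obama", "nationality", "usa"),
}

def knowledge_based_trust(page_facts):
    """Return the fraction of a page's extracted facts that match the
    reference knowledge base (a stand-in for a Knowledge-Based Trust Score)."""
    if not page_facts:
        return 0.0  # no extractable facts, nothing to score
    correct = sum(1 for fact in page_facts if fact in KNOWLEDGE_VAULT)
    return correct / len(page_facts)

# Example: a page asserting one correct and one incorrect fact
page = [
    ("eiffel tower", "located_in", "paris"),
    ("barack obama", "nationality", "kenya"),
]
print(knowledge_based_trust(page))  # 0.5
```

In this toy version, a page stating mostly verifiable facts scores close to 1, while a page full of claims that contradict the reference data scores close to 0.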

Does accurate info always mean quality content?
Well no, of course it doesn’t. But the report does make a point of explaining that a Knowledge-Based Trust Score isn’t designed to replace the existing algorithm, but rather to become an integral part of it.

After all, not all websites are designed to report accurate information, and they shouldn’t be penalised for sticking to their purpose or forced into changing it. It’s also worth asking how qualified Google is to decide what counts as accurate information. Its Knowledge Vault is massive, but how does it plan to deal with breaking news or discoveries, when there’s no existing information in the vault – or debatable subjects where facts and stats are so easily manipulated?
Maybe fact stuffing will be the next black hat technique if Google isn’t careful about how it builds this into its algorithm (assuming it ever does). But let’s cut the research team some slack for now. At least the concept of rewarding accurate info sounds like a step in the right direction. Few people will disagree that false information is a sign of poor quality, and if Google doesn’t get too authoritarian with it all, this could be a valuable addition for business owners and users alike.