
Google’s Love For Newspapers & How Little They Appreciate It

April 6, 2009 by Danny Sullivan

It was a hostile audience. It was June 2007, at a conference center in London, where newspaper and magazine publishers were hearing how a new industry-backed search engine rights standard called ACAP was coming along. The day ended with an “issues”-oriented panel. The audience didn’t seem that pleased when I told them they were full of it, both about how important they thought they were and about how awful they thought they had it from Google in particular.

I didn’t phrase it like that, but that was the essence of my attitude. I’d rarely encountered so many people in one place with such a sense of entitlement. Worse, these were supposedly my own people. Newspaper folks, where I got my start in journalism. What an embarrassment.

I’m not talking about the rank-and-file of newspapers, however — the reporters and editors doing the grunt work. This crowd was full of publishers and editors of a different type, focused not on wordsmithing and story assignments but on looking out for the business issues.

ACAP — the Automated Content Access Protocol — was a convoluted system being developed at the time to “solve” the problems that newspapers and some other publishers felt they had with search engines. In particular, they felt they should be able to selectively decide which pictures could be printed, how long stories could be listed and a number of other things, nearly all of which could already be controlled through existing systems (my past post, Search Engines, Permissions & Moving Forward In Copyright Battles, goes into this in more depth).
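For a sense of what those existing systems already offered, here’s a rough sketch; it’s illustrative rather than exhaustive, and support varies by engine (unavailable_after, for instance, is a Google-specific directive). In robots.txt, two lines keep your pictures out of Google Images:

User-agent: Googlebot-Image
Disallow: /

And per-page meta tags handle caching, snippets and how long a story stays listed:

<meta name="robots" content="noarchive, nosnippet">
<meta name="googlebot" content="unavailable_after: 31-Dec-2009 23:59:59 EST">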

ACAP’s real goal, of course, was to establish a way that newspapers could demand Googlegeld, their own version of Danegeld, a tribute tax they felt entitled to get just for being listed in Google. The panel started with a progress report on how ACAP was going, with the audience then asking the panel questions or simply making statements.

Over and over, people kept using the phrase “quality publishers” and how they hoped ACAP would protect these publishers and how “many” publishers were behind it.

I’d had enough. I can’t recall my exact spiel, but it went something like this. I explained to that group that ACAP was far from backed by most publishers. That on the internet, there were millions of publishers, while the newspaper groups backing ACAP amounted to a few hundred, if that. That these millions of publishers have a diverse set of concerns about search engines that ACAP was far from addressing, since it was so newspaper-centric. That an online shopping site is also a publisher, as is a small blog, as is a social media site, as is a vertical news site — and none of these groups had been invited to participate in the hallowed discussion of a supposed new robots.txt 2.0 system.

I also explained that unlike virtually all other publishers on the internet, newspapers were given extraordinary special status with Google. They were among the very select few to be admitted into Google News and receive the huge amounts of traffic it could send their way. That many small blogs with excellent content struggle for the admittance these other publishers just got handed to them on a silver platter.

I then got very personal. I explained that I was also a journalist, publishing what I considered to be quality content as well. Indeed, I’ve published content on my topic (search engines) that I know has been of far superior quality to that published by many supposedly “quality” publications. So for them to argue they were somehow “quality publications” deserving special treatment was arrogant, not to mention simply incorrect.

And now I’m hearing the same old crap again, and I’m feeling the same way I did back then. Some samples from the past few days. First from Robert Thomson, editor-in-chief of the Wall Street Journal:

Meantime Thomson said it was “amusing” to read media blogs and comment sites, all of which traded on other people’s information.

“They are basically editorial echo chambers rather than centres of creation, and the cynicism they have about so-called traditional media is only matched by their opportunism in exploiting the quality of traditional media,” he said.

Robert, I’ve been creating original content on the internet for about 12 years longer than you’ve been editor of the WSJ. Shut up. Seriously, shut up. To say something like that simply indicates you really do not understand that not all blogs are echo chambers.

I mean, echo chamber? Sorry, that’s the mainstream media, too. I cannot tell you how many times I’ve seen stories emerge on the internet only to later appear in a mainstream publication. The mainstream papers read what the web publishes, then write their own stories, and then all the mainstream pubs do their own versions of echoing each other.

I like getting quoted in the Wall St. Journal and all. It’s nice for the profile, and on the odd occasion I might get surprised with a link back to my site. But is this Wall St. Journal story about getting traffic from search engines (I got quoted in it) echo or original content? What was the Wall St. Journal putting out in 2007 that dozens of independent blogs about SEO hadn’t said already, in more depth and with more quality?

And what the hell is AllThingsD? Why are you running that and the Wall St. Journal? Are you just echoing the WSJ there? No, of course not. But why don’t you have it within the Wall St. Journal, since that’s supposedly the hub of your traditional quality?

But let’s not stop with Thomson. Let’s go on up to Rupert Murdoch, who, in a recent Forbes article, says Google’s stealing his copyright:

“Should we be allowing Google to steal all our copyrights?” asked the News Corp. chief at a cable industry confab in Washington, D.C., Thursday. The answer, said Murdoch, should be, “‘Thanks, but no thanks.’”

Let me help you with that, Rupert. I’m going to save you all those potential legal fees plus needing to even speak further about the evil of the Big G with two simple lines. Get your tech person to change your robots.txt file to say this:

User-agent: *
Disallow: /

Done. Do that, and you’re outta Google. All your pages will be removed, and you needn’t worry about Google listing the Wall St. Journal at all.
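One caveat, because robots.txt doesn’t play favorites: those two lines boot you out of every search engine that honors the standard, not just Google. If the grievance is with Google alone, a narrower version does the trick; Googlebot is Google’s crawler, and everyone else keeps listing you:

User-agent: Googlebot
Disallow: /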

Oh, but you won’t do that. You want the traffic, but you also want to be like the AP and hope you can scare Google into paying you. Maybe that will work. Or maybe you’ll be like all those Belgian papers that tried the same thing and watched their traffic sadly dry up.

Perhaps all the papers should get together like Anthony Moor of the Dallas Morning News suggests in the same article:

“I wish newspapers could act together to negotiate better terms with companies like Google. Better yet, what would happen if we all turned our sites off to search engines for a week? By creating scarcity, we might finally get fair value for the work we do.”

Please do this, Anthony. Please get all your newspaper colleagues to agree to a national “Just say no to Google” week. I beg you, please do it. Then I can see if these things I think will happen do happen:

  • Papers go “oh crap,” we really get a lot of traffic from Google for free, and we actually do earn something off those page views
  • Papers go “oh crap,” turns out people can find news from other sources
  • Papers go “oh crap,” being out of Google didn’t magically solve all our other problems overnight, but now we have no one else to blame

Look, I jumped out of newspapers back in the early 90s because it was clear they didn’t know what to do about online. I will never forget being in a conference room at the Orange County Register when it was being debated whether the paper should go to CompuServe, AOL, MSN or freaking Prodigy. Prodigy! And I had SEEN THE WEB, and I knew that’s where things were going — so I got out. And since then, I’ve watched the papers fumble along.

The papers can’t get coordinated on anything. Anyone remember Pathfinder, which was supposed to be the Time-backed portal for news? Yeah, that did well. What, a decade of the web, and none of the papers could put together their own version of Hulu? The only thing you can all agree on is that you hate Google News for “stealing” so much from you — despite Yahoo News still being the larger news site. But Google makes a better target, plus I suspect some papers might have favorable placements with Yahoo that make them not want to yell about the Big Y.

Stop yapping. If the papers think they’re such hot stuff, make your own Hulu. Get on with it. But spare me this whining. The AP is at it again, vowing to protect news content from “misappropriation,” whatever that is — and if that means I can’t link to an AP story with a short summary, bring it on.

To the AP, I ask again that someone read my open letter from last year, Hey AP! How About Running A Real News Web Site?. Fix your problems; don’t look for scapegoats.

As for legality, let’s talk now about the dirty secret of how newspapers operate. They misappropriate content all the time.

Look, I was in a newsroom for years. A newspaper graphic needed doing? You found a book with a drawing and used that without asking the author for explicit permission, because shoving in a mention on the “source” line was good enough. Following on a story that a rival paper wrote? You damn well read that other story, which got you up to speed, but heaven forbid you ever mentioned that the other publication came out with the news first. If you did, it was only because you could do a story suggesting you had the “real” scoop and the other publication had it wrong.

I was particularly bemused by the Daily Telegraph’s editor going off on Google about two years ago, given that, as I’ve covered before, I twice had material from my web site outright stolen by the Daily Telegraph. In my years of reading the Daily Telegraph, I was also bemused at how they treated private photos on Facebook as if they were their own exclusive picture library. Some woman died? Well, she’s dead — let’s just use whatever photos from her Facebook profile we can get. No need to ask permission.

Geez, people are blocking Google Street View cars in England, but did any of those people ask the news photographers shooting the scene whether they got releases for taking their pictures? Newspapers “rip off” people all the time, shooting their pictures without permission because “it’s news.” (They actually have protections allowing them to do this, but I think you get the bigger point.)

Yeah, AP, when you’re questioning the legality of search engines, let’s open up the big can of worms that is your own business model. That’s productive. Rather than fix your problems, keep thrashing through those dinosaur death throes.

[Postscript: See Larry Dignan’s AP eyes news aggregators; Risks exposing its lack of value add for some specific examples of AP stories that aren’t exactly original content.]

What about the Guardian? As it tells [PDF] the UK government (and see this Telegraph article and this from PC Pro):

  • Search engines and aggregators have things “skewed heavily” in their favor
  • Since search engines get the “lion’s share” of news-related revenues (though the Guardian doesn’t back this up), news publishers are in jeopardy
  • Search engines actually generate too much traffic, which means the Guardian has too much inventory and can’t make as much money
  • There’s no way for the Guardian to take money directly from consumers (apparently charging for subscriptions, like the Guardian does offline, hasn’t been thought of as a solution for online)
  • Blocking search engines isn’t a solution because there’s then “no alternative route to market.” (Amazing — Google sends too much traffic, but pulling out and reducing the traffic flow means they won’t make more money — instead, they apparently won’t get found at all. So much for their content being so compelling that people might just go to them directly)

Gosh, it was about a year ago that I sat on a panel at the Guardian, designed for its reporters, and talked about ways they could (and wanted to) generate traffic from search engines. Doing keyword research, looking for trends, all that. And Google was by far — by far — the biggest referrer of traffic the Guardian got. If I recall, it sent something like 3 million visitors to the Guardian per day.

Poor babies. See my memo to Murdoch above on how to install a robots.txt file. And don’t whine that people won’t be able to find you. If you’re that good, they’ll seek you out.

Seriously, the Tribune and the New York Times saddled themselves with debt, and that problem is somehow Google’s fault? The Guardian’s had a decade to figure out how to earn off the internet, and it complains to the UK government that it can’t succeed? And Murdoch complains about Google at the same time his own company works to draw more traffic from Google through SEO efforts — just like every other major newspaper out there? WTF?

My suggestion is simple. Stop looking to blame Google for your failings. Figure out a better business model rather than blowing hot air about the privileged positions you occupy.

One example of this is First Click Free. That’s a program from Google that allows you to put your registration-only or pay-to-view content directly into Google. Newspapers have been able to use that program for about three years, if not longer. In contrast, “ordinary” web sites only got the go-ahead late last year.

First Click Free is also a huge solution to the supposed problem the Guardian and others put out there about charging. It’s an express license from Google to charge people for content and yet still have your content get traffic from Google. The Wall Street Journal especially knows this well — see my Reading The Wall Street Journal For Free Despite Its Google News Cloaking post for more about what it does. And despite that article explaining how to bypass the Journal’s pay wall, I highly doubt most people will. For most of the Journal’s readers, it’s probably more convenient just to buy a subscription in the same way I’ll just buy a DVD or MP3 file rather than hunt for a free version online.
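To make the mechanics concrete, here’s a minimal sketch of the general First Click Free idea: the crawler gets the full article so it can be indexed, a reader clicking through from a Google result gets that first view free, and everyone else hits the wall. To be clear, this is my own illustration in Python, not Google’s specification or the Journal’s code; the function name and return values are invented for the example:

def choose_response(user_agent: str, referrer: str) -> str:
    """Return 'full-article' or 'paywall' for an incoming request."""
    # Googlebot gets the complete text, so the story can be indexed
    # even though it normally sits behind the subscription wall.
    if "Googlebot" in user_agent:
        return "full-article"
    # A reader arriving from a Google search result gets the first
    # view free; that is the "first click."
    if referrer.startswith(("http://www.google.", "https://www.google.")):
        return "full-article"
    # Direct visits, and that same reader's follow-on clicks, get the wall.
    return "paywall"

# A click from a Google result gets through; a direct visit does not.
print(choose_response("Mozilla/5.0", "http://www.google.com/search?q=wsj"))
print(choose_response("Mozilla/5.0", ""))

A real implementation would also have to follow the program’s actual rules; the whole point of First Click Free was that showing Googlebot a full story you hide from searchers would otherwise count as cloaking.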

Newspapers get special treatment, both with First Click Free and with the extraordinary amount of traffic they get from Google. And while their top managers go off on renewed Google rampages, they still continue to work to get even more traffic. It is stunning hypocrisy, and certainly not what you’d expect from smart business people. But given how badly their papers seem to be going, I suppose they aren’t so smart.

Finally, don’t diss the blogs. My past post Blogs & Mainstream Media: We Can & Do Get Along gets more into this.

Filed Under: Newspapers
