Facebook “likes” on your pages? Don’t count on them.

If your site includes Facebook “like” buttons to encourage people to share your content, be careful about how you use those numbers — or how seriously you take them.

Clint Watson writes in Facebook Like Button Count Inaccuracies:

The Facebook “like” buttons you see embedded on websites incorrectly report the number of “people” who “like” something. Specifically, the button can inflate the displayed count of people.  While this is fine when all you want to do is track some general level of “engagement” with a particular item, it was not accurate for the use I needed – counting each “like” as a vote in our BoldBrush Online painting competition.

What I needed is a way to get the number of actual people who “like” something. And there is a way to retrieve that information from Facebook, but it is often a different number from what is shown on the “like” button itself.

If you are a geek – here’s the bottom line of this post:

If you’re using the Facebook “Like” Button Social Plugin and you need an accurate count of the actual number of people who have clicked the “like” button, you can’t rely on the number reported by the button itself.  You need to retrieve your URL’s “fan count” number via Facebook’s Open Graph API.
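For the geeks, here is roughly what that lookup looks like in code. It is a minimal sketch of the approach Watson describes (query Facebook’s Graph API for your URL and read back the count it reports). The exact endpoint parameters and the “fan_count” field name are my assumptions, not something spelled out in Watson’s post, and Facebook has renamed and restricted these fields over the years, so treat this purely as illustration.

```python
# Minimal sketch: fetch the count Facebook's Graph API reports for a URL.
# The "fan_count" field name is an assumption; older responses exposed
# "shares" or similar fields depending on the object type, and current
# versions of the Graph API may require an access token.
import json
import urllib.parse
import urllib.request


def get_fan_count(page_url):
    """Return the count Facebook reports for page_url, or None if absent."""
    query = urllib.parse.urlencode({"id": page_url})
    with urllib.request.urlopen("https://graph.facebook.com/?" + query) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return data.get("fan_count")  # hypothetical field name


if __name__ == "__main__":
    # Example: check the count for a hypothetical contest entry page.
    print(get_fan_count("http://example.com/contest-entry"))
```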

Hat tip to Zach Seward for bringing this to my attention.

Everyblock’s New Geocoding Fixes

Adrian Holovaty. (Image by Additive Theory via Flickr)

Recently I wrote about how a Los Angeles Police Dept. geocoding data glitch yielded inaccurate crime maps at LAPDcrimemaps.org and the database-powered network of hyperlocal sites, Everyblock.

On Apr. 8, Everyblock founder Adrian Holovaty blogged about the two ways his company is addressing the problem of inaccurate geodata.

  1. Latitude/longitude crosschecking. “From now on, rather than relying blindly on our data sources’ longitude/latitude points, we cross-check those points with our own geocoding of the address provided. If the LAPD’s geocoding for a particular crime is significantly off from our own geocoder’s results, then we won’t geocode that crime at all, and we publish a note on the crime page that explains why a map isn’t available. (If you’re curious, we’re using 375 meters as our threshold. That is, if our own geocoder comes up with a point more than 375 meters away from the point that LAPD provides, then we won’t place the crime on a map, or on block/neighborhood pages.)”
  2. Surfacing ungeocoded data. “Starting today, wherever we have aggregate charts by neighborhood, ZIP or other boundary, we include the number, and percentage, of records that couldn’t be geocoded. Each location chart has a new “Unknown” row that provides these figures. Note that technically this figure includes more than nongeocodable records — it also includes any records that were successfully geocoded but don’t lie in any neighborhood. For example, in our Philadelphia crime section, you can see that one percent of crime reports in the last 30 days are in an ‘unknown’ neighborhood; this means those 35 records either couldn’t be geocoded or lie outside any of the Philadelphia neighborhood boundaries that we’ve compiled.”
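To make the first strategy concrete, here is a minimal sketch of that distance cross-check. The geocode() helper is hypothetical, a stand-in for whatever geocoder a site already uses; Holovaty’s post describes the logic and the 375-meter threshold but doesn’t show Everyblock’s actual code.

```python
# Minimal sketch of the cross-check described above: compare the source's
# point against our own geocoder's point and discard any record where the
# two disagree by more than 375 meters.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, in meters
THRESHOLD_M = 375           # Everyblock's stated cutoff


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def crosscheck(source_point, address, geocode):
    """Return source_point if it agrees with our own geocoding, else None."""
    our_point = geocode(address)  # hypothetical geocoder call
    if our_point is None:
        return None               # can't verify; treat as ungeocoded
    distance = haversine_m(*source_point, *our_point)
    return source_point if distance <= THRESHOLD_M else None
```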

These strategies could — and probably should — be employed by any organization publishing online maps that rely on government or third-party geodata.

Holovaty’s post also includes a great plain-language explanation of what geodata really is and how it works in practical terms. This is the kind of information that constitutes journalism 101 in the online age.

(NOTE: I originally published this post in Poynter’s E-Media Tidbits.)


WSJ & the Kindle: Puzzling Relationship

What might a larger-screen e-reader look like? Here's what Plastic Logic plans to release later this year. Whether Amazon will follow suit remains to be seen.


Over the weekend, while I was reading the Wall Street Journal on my Kindle e-reader (I pay $10/month for that subscription), I noticed this headline: Amazon Is Developing Bigger-Screen Kindle. I found the article interesting for several reasons — including that the sole source for the headline’s claim is an unnamed group of “people who said they have seen a version of the device.” I was even more surprised to read that “the new Kindle could debut before the 2009 holiday shopping season, they said.” That’s pretty damn ambitious.

…WSJ.com also noted that an Amazon spokesman “declined to comment on what he called ‘rumors or speculation.'”

Hmmm… could this be a replay of the rumors of an Apple tablet computer that have been recurring for years? (Thanks for the reminder of that, Ron Miller.)

A larger-format Kindle would indeed be an attractive product to many consumers. It would be even more appealing to news organizations that are already selling (or are considering selling) Kindle subscriptions to their content. The Kindle’s current screen size significantly constrains formatting and excludes advertising — and thus news revenue potential for this device.

Given this story’s conspicuously scanty sourcing, it’s notable that the article did not acknowledge that the Wall Street Journal — and every other news org selling Kindle subscriptions — stands to benefit financially from the availability of a larger-size Kindle. In other words, the Journal used a definitively worded headline to amplify an unconfirmed rumor that, if true, might eventually boost its own e-reader revenue stream. And that claim has been widely repeated.

Of course, Amazon’s alleged forthcoming Kindle is not the only emerging larger e-reader option…

Continue reading

HuffPost’s citizen journalism standards: links required (News orgs, take a hint)

Last week the Huffington Post posted its standards for citizen journalism. It’s a pretty short, basic list — just six requirements — that reads like journalism 101.

However, many news organizations still could take a lesson from the second item on HuffPost‘s list:

2. “Do research and include links to back it up. Whether you are referencing a quote, statistic, or specific event, you should include a link that supports your statement. If you’re not sure, it’s better to lean on the cautious side. More links enhance the piece and let readers know where you’re coming from.”

It amazes me how often I still see mainstream news stories which completely lack links, or which ghettoize links in a box in a sidebar or at the bottom of the story…

Continue reading

Los Angeles Police Geocoding Error Skews Crime Maps

LAPDcrimemaps.org has some recently revealed geodata flaws.


Crime maps are one of the most popular and (in urban areas) ubiquitous types of geo-enabled local news — and they’re a staple of the Knight News Challenge-funded project Everyblock. The underlying data comes from local police departments — but how reliable is it?

On Sunday, the Los Angeles Times reported a problem with the Los Angeles Police Department’s online crime map, launched three years ago…

LAPDcrimemaps.org is offered to the public as a way to track crimes near specific addresses in the city of Los Angeles. Most of the time that process worked fine. But when it failed, crimes were often shown miles from where they actually occurred.

“Unable to parse the intersection of Paloma Street and Adams Boulevard, for instance, the computer used a default point for Los Angeles, roughly 1st and Spring streets. Mistakes could have the effect of masking real crime spikes as well as creating false ones.”

Apparently the LAPD was not aware of the error until alerted by the Times…

Continue reading

Tracking a Rumor: Indian Government, Twitter, and Common Sense

This morning, as I checked in on the still-unfolding news about yesterday’s terrorist attacks in Mumbai, I noticed a widely repeated rumor: allegedly, the Indian government asked Twitter users to stop tweeting info about the location and activities of police and military, out of concern that this could aid the terrorists.

For example, see Inquisitr.com: Indian Government trying to block Twitter as Terrorists may be reading it.

Rumors — even fairly innocuous ones — really bug me. Mainly because they’re so easy to prevent!

I’m trying to track this particular rumor down, but haven’t been able to confirm anything yet. At this point I’m skeptical of this claim. Here’s what I’ve found so far…

Continue reading

Fixing Old News: How About a Corrections Wiki?

[Screenshots: the NYTimes.com corrections page (any news org should be able to do more with corrections than this) and the Denver Post of 8/30/2007, p. 2B, where the corrections are tucked way down in a corner of the page.]

Even the best journalists and editors sometimes make mistakes. Or sometimes new information surfaces that proves old stories — even very old stories — wrong, or at least casts them in a vastly different light. What’s a responsible news organization to do, especially when those old stories become more findable online?

On Aug. 28, Salon.com co-founder Scott Rosenberg posted a thoughtful response to an Aug. 26 column by New York Times public editor Clark Hoyt: When Bad News Follows You.

In a nutshell, the Times recently implemented a search optimization strategy that increased traffic to its site — especially to its voluminous archives. This meant that stories from decades past suddenly appeared quite prominently in current search-engine results. The Times charges non-subscribers to access archived stories.

Hoyt wrote: “People are coming forward at the rate of roughly one a day to complain that they are being embarrassed, are worried about losing or not getting jobs, or may be losing customers because of the sudden prominence of old news articles that contain errors or were never followed up.”

“…Most people who complain want the articles removed from the archive. Until recently, The Times’s response has always been the same: There’s nothing we can do. Removing anything from the historical record would be, in the words of Craig Whitney, the assistant managing editor in charge of maintaining Times standards, ‘like airbrushing Trotsky out of the Kremlin picture.'”

Hoyt’s column offered no options for redress. He didn’t suggest that the Times might start researching more disputed stories or posting more follow-up stories. Nor did he suggest that the Times might directly link archived stories to follow-ups.

Rosenberg asserts that the Times has an obligation to offer redress. Personally, I agree. Plus, I’ve got an idea of how they (or any news org) could do it — and maybe even make some money in the process…

Continue reading