Local, mobile, paywalls, Google, more: My latest KDMC news for digital journalists posts

Over the last month I’ve fallen behind on noting here what I’ve been writing at the News for Digital Journalists blog on the web site of the Knight Digital Media Center. Here’s a quick roundup of what I’ve covered there since late February…

Continue reading

Sunshine Week, March 13-19: Acceptable advocacy for journalists

For several years, I’ve loved Sunshine Week — a campaign by the American Society of News Editors to call for more government transparency. It’s one of the few times that journalists and news orgs are willing to engage in direct activism, which makes for a lot of amusing verbal gymnastics.

Today at the Knight Digital Media Center, I wrote about a new advocacy/awareness tool from Sunshine Week: a model proclamation that news orgs and other activists/advocates can customize and publish, and then challenge specific government officials and agencies to adopt. It gets into specifics, at least to some extent.

See: Sunshine Week shows how to call for open government

It’s a good start, but here’s what else I’d love to see from Sunshine Week…

Continue reading

Video will dominate mobile data traffic by 2015, and why that will probably cost you more

My new CNN Tech mobile blog post is about Cisco’s prediction that video will comprise 2/3 of mobile data traffic by 2015.

See: Video will dominate mobile data traffic by 2015, forecast says

The catch: Thanks to lax net neutrality rules passed by the FCC last December, wireless carriers are free to charge users extra for any kind of mobile content they choose — even if it’s available for free via wired connections.

US Census upgrades American FactFinder tool, new data coming soon

For journalists and others who use Census data, the American FactFinder is a key research tool. It just got a pretty major upgrade — although the 2010 data isn’t included yet. Apparently that will happen “in the coming months.”

I wrote more about this for the Knight Digital Media Center at USC site: US Census upgrades American FactFinder tool, new data coming soon | Knight Digital Media Center.

Everyblock’s New Geocoding Fixes

Adrian Holovaty. (Image by Additive Theory via Flickr)

Recently I wrote about how a Los Angeles Police Dept. geocoding data glitch yielded inaccurate crime maps at LAPDcrimemaps.org and the database-powered network of hyperlocal sites, Everyblock.

On Apr. 8, Everyblock founder Adrian Holovaty blogged about the two ways his company is addressing the problem of inaccurate geodata.

  1. Latitude/longitude crosschecking. “From now on, rather than relying blindly on our data sources’ longitude/latitude points, we cross-check those points with our own geocoding of the address provided. If the LAPD’s geocoding for a particular crime is significantly off from our own geocoder’s results, then we won’t geocode that crime at all, and we publish a note on the crime page that explains why a map isn’t available. (If you’re curious, we’re using 375 meters as our threshold. That is, if our own geocoder comes up with a point more than 375 meters away from the point that LAPD provides, then we won’t place the crime on a map, or on block/neighborhood pages.)”
  2. Surfacing ungeocoded data. “Starting today, wherever we have aggregate charts by neighborhood, ZIP or other boundary, we include the number, and percentage, of records that couldn’t be geocoded. Each location chart has a new “Unknown” row that provides these figures. Note that technically this figure includes more than nongeocodable records — it also includes any records that were successfully geocoded but don’t lie in any neighborhood. For example, in our Philadelphia crime section, you can see that one percent of crime reports in the last 30 days are in an ‘unknown’ neighborhood; this means those 35 records either couldn’t be geocoded or lie outside any of the Philadelphia neighborhood boundaries that we’ve compiled.”

These strategies could — and probably should — be employed by any organization publishing online maps that rely on government or third-party geodata.
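The cross-checking approach Holovaty describes boils down to a distance threshold between two geocoded points. Here’s a minimal sketch of that idea in Python; the function names and the use of the haversine formula are my own assumptions for illustration, not Everyblock’s actual code.

```python
from math import radians, sin, cos, asin, sqrt

THRESHOLD_METERS = 375  # Everyblock's published cutoff

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    R = 6_371_000  # mean Earth radius in meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def accept_point(source_pt, our_pt, threshold=THRESHOLD_METERS):
    """Accept the source's point only if it agrees with our own geocoding.

    source_pt, our_pt: (latitude, longitude) tuples.
    Returns False when the two geocoders disagree by more than the
    threshold -- in which case, per Everyblock's policy, the record
    would be published unmapped with an explanatory note.
    """
    return haversine_m(*source_pt, *our_pt) <= threshold
```

A record whose two geocodings fall within 375 meters of each other gets mapped; anything farther apart is treated as suspect and left off the map rather than shown in the wrong place.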

Holovaty’s post also includes a great plain-language explanation of what geodata really is and how it works in practical terms. This is the kind of information that constitutes journalism 101 in the online age.

(NOTE: I originally published this post in Poynter’s E-Media Tidbits.)


Media Cloud: Tracking How Stories Spread

Last week, Harvard’s Berkman Center for Internet & Society launched Media Cloud, an intriguing tool that could help researchers and others understand how stories spread through mainstream media and blogs.

According to Nieman Lab, “Media Cloud is a massive data set of news — compiled from newspapers, other established news organizations, and blogs — and a set of tools for analyzing those data.”

Here’s what Berkman’s Ethan Zuckerman had to say about Media Cloud:


Ethan Zuckerman on Media Cloud from Nieman Journalism Lab on Vimeo.

Some of the kinds of questions Media Cloud could eventually help answer:

  • How do specific stories evolve over time? What path do they take when they travel among blogs, newspapers, cable TV, or other sources?
  • What specific story topics won’t you hear about in [News Source X], at least compared to its competitors?
  • When [News Source Y] writes about Sarah Palin [or Pakistan, or school vouchers], what’s the context of their discussion? What are the words and phrases they surround that topic with?

The obvious use of this project is to compare coverage by different types of media. But I think a deeper purpose may be served here: By tracking patterns of words used in news stories and blog posts, Media Cloud may illuminate how context and influence shape public understanding — in other words, how media and news affect people and communities.

This is important, because news and media do not exist for their own sake. It seems to me that the more we learn about how people are affected by — and affect — media, the better we’ll be able to craft effective media for the future.

(NOTE: I originally published this article in Poynter’s E-Media Tidbits.)


Many Eyes: Turning data into pictures

Data is a key part of many stories. IBM’s Many Eyes is a free online library of tools that give you options for visually exploring all kinds of data — even for analyzing text documents. It also lets you share and embed your visualizations.

You can upload your dataset to Many Eyes and apply various visualization types to that data — kind of like applying filters to images in Photoshop — and you can customize the display.

Many Eyes is a useful tool not just for publishing information, but also for analyzing information to see what the story might be, or where the anomalies are.


Earlier on Poynter’s E-Media Tidbits I wrote about how you can use some Many Eyes tools like word tree for document analysis.

Many Eyes meets the New York Times: On Oct. 27 NYTimes.com launched its Visualization Lab, where anyone can create and share visual representations of selected datasets and information used by Times reporters.

Many Eyes is just one of the projects from IBM’s Visual Communication Lab.