At the intersection of data and journalism, a lot can go wrong. Your story can be true and still be wrong. New York Times data journalist Robert Gebeloff shares his tactics for avoiding that trap: how never to publish a true-but-wrong story again.
Just because you can scrape it doesn’t mean you should. As a data journalist, when is web scraping the right choice? And, more importantly, when is it right for you? Sophie Chou explores the technical and ethical challenges of scraping data.
At Visualized.io, data artist Stefanie Posavec talked about her preference for analogue methods of making visualizations, praising the shaping force of gathering data by hand and discussing visualizations as performances.
The work of David McCandless is 80% data wrangling and research and only 20% design. Why? Because he designs his understanding of the data – and understanding is difficult. It takes time and effort to turn data into information, and information into knowledge.
Data not available? No excuse. The team behind the WAMU and NPR co-production ‘Deals for Developers’ faced exactly that problem. Julie Patel shares their success story.
Nicolas Kayser-Brill shares how the Journalism++ team built their Data Journalism Award-nominated app Detective.io, where investigative journalists can structure their data.
A techie journalist and a journalistic techie made the interactive map In Flight for The Guardian – and got nominated for a Data Journalism Award.
Alejandro Sanabria shares the making-of story of a Data Journalism Award-nominated web app: “The deal is converting data into information and knowledge.”
Journalists share their data horror stories, because learning from the data errors others have made is the fastest way to become a good data journalist.
For ‘Riot rumours: how misinformation spread on Twitter during a time of crisis’, The Guardian teamed up with the Manchester eResearch Centre – and got nominated for a Data Journalism Award. An interview with Professor Rob Procter.