Vox Media and its tech site, The Verge, have launched a gadget blog called Circuit Breaker. In a slight twist, Circuit Breaker will publish mainly on Facebook.
Paul Miller, who previously served as a senior writer for The Verge, will edit Circuit Breaker.
In a post announcing the new site, Verge co-founder and editor Nilay Patel said Circuit Breaker will cover a new wave of gadgets.
“The first age of gadget blogging was defined by the mobile revolution, and we think a similar revolution is about to happen in new categories like drones, VR, and the Internet of Things,” wrote Patel.
IBT Media’s Newsweek has named Ken Li managing editor. Li is succeeding Kira Bindrim, who is taking a role with Quartz.
Li most recently served as a founding editor of Re/code. He has previously worked for Reuters, The Financial Times, The Industry Standard and The New York Daily News.
“As Kira Bindrim heads to Quartz, we are thrilled to welcome Ken into the Newsweek family and eager to have his sharp news sense and digital savvy guiding our operations,” said Newsweek editor Jim Impoco, in a statement. “Ken’s background includes a great mix of writing, editing and strong newsroom leadership that are critical for the managing editor role.”
In other Newsweek news, the magazine has made the following changes: Kevin Dolak has been named national editor; Margarita Noriega has been named executive editor of digital; and Iva Dixit has been promoted to social media editor.
Quartz has named Kira Bindrim editor of its talent lab. Bindrim comes to Quartz from Newsweek, where she served as managing editor since 2013.
Prior to Newsweek, Bindrim worked for Reuters and Crain’s New York Business.
“Some of us have had the pleasure of working with Kira in the past, and have learned to appreciate her energy, humor, and astonishing ability to get things done,” wrote Xana Antunes, editor of new initiatives at Quartz, in a memo.
Bindrim begins her role at Quartz May 9.
Believe it or not, 10 separate parties want to buy Yahoo.
According to Bloomberg, Yahoo received 10 first-round bids for its core businesses, ranging from $4 billion to $8 billion. Let’s assume the party offering $8 billion for Yahoo, which recently announced a net loss of $99 million, was drinking when it made that move.
Companies remaining in the bidding process could find out whether they’ve advanced as soon as next week, as Yahoo looks to narrow the field. Once the field is winnowed, Yahoo will give the remaining bidders more access to internal documents and high-level staffers.
New Republic CEO Guy Vidra is stepping down. According to a memo from Vidra that was obtained by Politico, Vidra is leaving the magazine at the end of April.
“I want to let you know that I have decided to transition to an advisory role at TNR,” wrote Vidra. “I will continue in this capacity through the end of the month and will then make myself available to help as the magazine moves forward to a promising future under [publisher] Hamilton Fish and [owner] Win McCormack.”
Vidra’s resignation marks the second big name New Republic departure in the last few weeks. On April 14, editor Gabriel Snyder said he was leaving the magazine.
In a solid case of “newspaper said, tabloid said,” the New York Times is disputing a New York Post story about massive layoffs coming to the Times.
The Post reported that the Times will cut “hundreds” of jobs between August and November. The Post also noted that Times execs are currently in talks with the paper’s union about reduced severance packages.
Times executive editor Dean Baquet was not happy about the Post’s report, calling it “totally made up.” NPR media correspondent David Folkenflik tweeted that Baquet acknowledged the newsroom will shrink, but also called the Post’s story “cheap guess work.” Folkenflik added that the Times is indeed talking with its unions, and that more will become clear later in the year.
In other words, bookmark this report. Because while Baquet is denying the Post’s account, it’s certainly possible that eventually we’ll see the Post was correct all along.
Gannett Company—owner of more than 100 newspapers including USA Today—wants to expand. The company has offered to buy Tribune Publishing for $815 million. The deal, proposed on April 12, includes Gannett assuming $390 million of Tribune’s debt.
Gannett wants to make this move because—despite taking on debt and more brands—the deal would save the company roughly $50 million through “synergies.”
“We believe Gannett is uniquely willing and able to propel Tribune into the position of strength that will allow its beloved and historic publications and other assets to survive and thrive in this challenging environment,” wrote Gannett CEO Robert Dickey, in a note to Tribune CEO Justin Dearborn. “Given the opportunity to benefit from the significant premium and near-term liquidity, we are confident that Tribune’s stockholders will embrace our offer.”
Tribune Publishing, which owns The Chicago Tribune, The Los Angeles Times and nine other papers, is reviewing Gannett’s offer.
“On receiving the April 12 proposal, the Company [Tribune] communicated by telephone to Gannett that the Board of Directors would engage financial and legal advisors to assist it in reviewing the proposal,” said Tribune, in a statement. “The board is now engaged, with the assistance of its advisors, in a thorough review. The board is committed to acting in the best interests of shareholders and will respond to Gannett as quickly as feasible.”
People regularly ask what tools to use or what programming language to learn for data-driven journalism (ddj). There is no single right answer, especially since the technology and tools available in the field are evolving quickly.
Nathan Yau from FlowingData recently described how he works in data visualization. His post applies perfectly to data-driven journalism tools:
“What tool should I learn? What’s the best?” I hesitate to answer, because I use what works best for me, which isn’t necessarily the best for someone else or the “best” overall.
If you’re familiar with a software set already, it might be better to work off of what you know, because if you can draw shapes based on numbers, you can visualize data.
Before I dive into my typical workflow and tools for 2016 so far, I should mention that I work as the sole data journalist in my newsroom. It is more common for news outlets to have data/visual journalism teams, with people specializing in specific sub-areas of data-driven journalism. My workflow is pretty much data journalism on a shoestring.
Also, by ideology and because I am a nerd, I use (nearly) exclusively free, open-source tools. Again, this is just because they are what I am most familiar with. If there were a proprietary framework that let me do things faster and better, I would switch in a heartbeat.

Data Acquisition, Cleaning, Formatting
Tabula: Sometimes you have to deal with a data journalist’s worst enemy: data trapped in a PDF. This simple tool, which requires no coding, makes the process of getting a data table out of a PDF less painful.
OpenRefine: I usually work with raw data directly from R. But if your data are too messy, cleaning them by scripting or manually in a spreadsheet can get tedious. OpenRefine makes data cleaning interactive and reproducible, bringing together the best of both worlds: scripting and manual cleaning.
LibreOffice/Google Sheets/MS Excel: The less I use spreadsheet software, the happier I am. Excel is unfortunately still a standard format for distributing data. I typically use a spreadsheet only to inspect data and for basic cleaning or reshaping.
R: I will come back later to my beloved Swiss-Army-knife language, R, a free, open-source statistical computing language. Does a statistical framework sound like overkill for publishing stories for the masses? Just think of it as one of the most popular programming languages for dealing with data. There are heaps of packages to extend its functionality, and it has a large, helpful user community.
You can scrape data with R (with rvest, for instance, much as you would with Python’s Beautiful Soup) or pull data directly from open-data portals’ APIs (World Bank, Eurostat, …). But R really shines at shaping your data (merging, subsetting, aggregating, etc.) with packages such as tidyr and dplyr.

Analysis
In data-driven journalism, it is critical to explore your data rapidly. This means querying the data with the questions you have, or looking for patterns and outliers.
Data exploration is typically an iterative process in which new questions and ideas arise as you dig in. To me, nothing beats R for exploratory data analysis. You can quickly reshape your data and produce a vast array of graphics suited to whatever questions you might have. The R package ggplot2 is particularly helpful for that.
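As a minimal sketch of this explore–reshape–plot loop (the built-in mtcars dataset stands in for whatever data you are actually investigating):

```r
library(dplyr)
library(ggplot2)

# Ask a quick question of the data: does fuel economy vary with cylinders?
by_cyl <- mtcars %>%
  group_by(cyl) %>%                 # split by number of cylinders
  summarise(mean_mpg = mean(mpg),   # aggregate per group
            n = n())                # group sizes, to judge reliability

# One short ggplot2 call to eyeball the pattern before the next question
ggplot(by_cyl, aes(x = factor(cyl), y = mean_mpg)) +
  geom_col() +
  labs(x = "Cylinders", y = "Mean miles per gallon")
```

Each answer tends to prompt the next reshaping step, which is why keeping the whole loop in one scripting language pays off.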
Furthermore, with R Markdown you can create sleek PDF or HTML reports mixing code with the resulting graphics. This is a great feature for documenting your work, and also for publishing your complete methodology along with your story. As with scientific papers, the methods used in data-driven journalism should be explicit, transparent and reproducible.

Production Graphics

Static Data Visualizations
R (ggplot2 + Inkscape/Illustrator): The default graphics produced with R might only appeal to engineers... With a few lines of code, though, you can greatly improve and template a chart’s look. (Check, for instance, this ggplot2 graphic.)
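For instance, a few lines of theming are often enough to move a default ggplot2 chart toward a house style (a sketch; the specific theme values and colours here are arbitrary choices, not the author’s actual template):

```r
library(ggplot2)

# Define a reusable "house style" once, then apply it to every chart
theme_house <- theme_minimal(base_size = 13) +
  theme(panel.grid.minor = element_blank(),
        plot.title = element_text(face = "bold"))

ggplot(mtcars, aes(wt, mpg)) +
  geom_point(colour = "#2c7fb8") +
  labs(title = "Heavier cars use more fuel",
       x = "Weight (1,000 lbs)", y = "Miles per gallon") +
  theme_house
```

Keeping the theme in one object means every chart in a story inherits the same look from a single place.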
It is often important to add text and explanation to your graphic. This can of course be done programmatically in R, but with many annotations it gets tedious. R graphics can be saved as PDF or SVG and manually edited in Inkscape (free and open-source) or Adobe Illustrator to add an “annotation layer.” This is, for instance, how I created the graphic below.
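The export step before the manual annotation pass can be as simple as ggsave (a sketch; the chart object and filenames are placeholders):

```r
library(ggplot2)

p <- ggplot(mtcars, aes(wt, mpg)) + geom_point()

# Save as vector formats, then open the file in Inkscape or Illustrator
# to add the annotation layer by hand.
# Note: writing SVG via ggsave assumes the svglite package is installed.
ggsave("chart.svg", plot = p, width = 7, height = 5)
ggsave("chart.pdf", plot = p, width = 7, height = 5)
```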
In the future, though, I aim to produce more static graphics using only R. If you are pursuing some kind of mobile-first strategy, you may want to use large interactive graphics sparingly and instead produce more “responsive” vector graphics. Vector (SVG-based), because you want your graphics to look crisp and pixel-perfect on any screen size; responsive, so the layout handles different device sizes elegantly. For instance, this graphic made with R shows multiple maps: on a large screen you get many map boxes per row, and on a small one, fewer.

Interactive Data Visualizations
Datawrapper: Data-driven journalism ≠ fancy dataviz. People should know that data-driven journalism is much more than fancy data visualizations. Data stories do not always need innovative graphics to convey a message; standard bar or line charts often work best to make a point. For those, I am fond of the charting tool Datawrapper. It is open-source but offers inexpensive paid options for hosting responsive interactive charts, and it is used across our newsroom by all journalists. We have a Datawrapper chart layout that fits our website, so I am not tempted to spend time on minor design tweaks, as I typically do when I code a graphic. It has also recently extended its chart options: choropleth maps, bubble maps, faceted bar charts, bullet charts, etc.
R + rCharts/htmlwidgets: d3.js is hands-down THE library for interactive data visualizations. But I am not proficient in JavaScript/d3.js, and I have to create graphics in ten languages for the outlet I work for (including right-to-left Arabic), so coding data visualizations from scratch in d3.js is too time-consuming for me.
Bindings from R to leaflet.js (a mapping library) or Highcharts (a charting library), to name only two packages, offer a wealth of interactive graphic possibilities. Here are some examples (click on the thumbnails to see the interactive graphics):

Some interactive data visualisations produced from R.
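A few lines of R are enough to get an interactive map out of such a binding (a sketch using the leaflet package for R; the coordinates and popup text are arbitrary examples):

```r
library(leaflet)   # R binding to the leaflet.js mapping library

# An interactive, zoomable HTML map without writing any JavaScript
leaflet() %>%
  addTiles() %>%                       # default OpenStreetMap tile layer
  addMarkers(lng = 7.44, lat = 46.95,  # hypothetical point of interest
             popup = "Bern, Switzerland")
```

The resulting htmlwidget can be embedded in an R Markdown report or a web page as-is.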
That’s about it. This post is longer than I expected, but I feel my workflow is more complicated to explain than to use. I would be curious to hear how other ddj people and teams work, and any tips for doing things faster or better.
This story originally appeared on Medium and is reprinted with permission.
Duc Quang Nguyen is a data journalist and project manager at swissinfo.ch. A data scientist turned data journalist, he’s enthusiastic about data mining, open data, open-source tools, and data-driven journalism. He holds a Ph.D. in computational biology from Oxford University.