Over the past six months, I've been working for OCCRP to productise Aleph, a powerful search tool for investigative reporters. This is a little tour of its key features, and a glimpse into the future development agenda.
How can journalists search and analyze collections of documents on their own computers with simple tools? At last weekend's DataHarvest, we ran a workshop trying to answer that question. This write-up covers using Apache Tika for content extraction and regular expressions in Sublime Text as an advanced search tool.
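The regex-search half of that workflow also works outside Sublime Text. Here is a minimal Python sketch of pattern-matching over extracted text; the sample text and patterns are illustrative, not taken from the workshop materials:

```python
import re

# Sample text, as it might come out of Apache Tika's content extraction
text = """Invoice 2014-0317 was paid to Siemens AG on 12/03/2014.
Contact: j.mueller@example.com, phone +49 30 1234567."""

# Find dates in DD/MM/YYYY format
dates = re.findall(r"\b\d{2}/\d{2}/\d{4}\b", text)

# Find e-mail addresses (deliberately simplified pattern)
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)

print(dates)   # ['12/03/2014']
print(emails)  # ['j.mueller@example.com']
```

The same patterns can be pasted straight into Sublime Text's find panel with regex mode enabled, which is what makes the editor useful as a search tool across large extracted-text dumps.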
In the free software/open web community, the notion that the web should be decentralized is more than a shared ideal; it is a piece of dogma. But are we really promoting a progressive vision of the web, or fighting a losing battle to avoid political engagement?
Data warehouses are used in industry to manage the many datasets accrued inside a company that might be relevant to reporting and analysis. I want to propose a similar pattern for investigative journalism.
The first beta version of SpenDB features a small set of well-designed features for data import and analysis. Now the platform is ready to be adopted by anyone interested in exploring financial data, from budgets to procurement.
A few weeks ago, the US team of Hacks/Hackers announced their plans to turn the network of journalism innovators into a collaboration with Google News Labs, starting with an event in Berlin. I tweeted about this, and Phillip Smith wrote a thoughtful reaction. Given this invitation to debate, I wanted to outline my criticism in more detail.
Over the past few months, I have spent my weekends simplifying and modernizing the OpenSpending codebase to create SpenDB - a prototype-stage, light-weight data loading tool and analytical API for government financial data.
If we want to make open data relevant to investigative journalism, we have to simplify the way people access it. We must create a way for our data tools to talk to each other and trade information about the companies and people we are researching.
Developing open data standards is all the rage. In fact, chances are that you're drawing one up right now (I am). In that case, here's a list of things you may believe about your data standard, but that are probably not true.
I've had the chance to contribute to two influence mapping projects in South Africa and Mozambique. While both projects focus on finding possible conflicts of interest within a small group of politically exposed persons, their approach has been very different.
When we discuss data journalism, we tend to think of nicely formatted spreadsheets full of financial data or crime stats. Yet most journalistic source material does not take the form of tables; it comes in messy collections of documents, whether on paper or scraped off a website.
Building Grano started with a desire to map political and economic influences. Developing it further has made us re-examine our motivations: why would journalists want software to help map out the connections between people in politics and industry?
When software developers write code, they often use tools called IDEs, integrated development environments, that provide contextual information needed to manage the complexity of modern software. What would such a workspace look like, if it were designed for journalism?
I've worked with OpenOil in an effort to find oil contracts which have been published as part of filings to the US stock exchange regulator, the SEC. While these contracts are not usually public, companies are required to file full contract documents as part of their annual reports under certain conditions.