World-wide privatizations collected by the World Bank
18 December 2011
The World Bank [has collected](http://rru.worldbank.org/Privatization/) a database of major (i.e. at least one
million USD) privatizations in developing countries from 1988
to 2008. The dataset - which is classified by country, sector and
privatization type - is a fascinating view into the development of
countries such as China, Russia and many states in South America.
I've [imported this dataset into OpenSpending](http://openspending.org/wb-privatizations)
to be able to generate custom aggregates. This is of course risky: the list is
unlikely to be comprehensive, and it does not include smaller privatizations
at the local level, which likely make up the bulk of the financial volume.
Some interesting views include the breakdown [by sector](http://openspending.org/wb-privatizations/sector)
and [by nation](http://openspending.org/wb-privatizations/from). But the import into
OpenSpending is problematic in other ways: the dataset does not contain information on
who bought the privatized entities and therefore isn't a generic spending dataset.
I've therefore decided to model it in reverse. The individual entries do not
represent financial transactions but the transferred assets - the source is
the privatizing country, while the recipient is the newly formed company.
This may be an interesting preview into the problems OpenSpending will face as
it begins to include balance sheet information.
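The reversed modelling described above can be sketched roughly as follows. This is an illustrative Python snippet, not the actual OpenSpending schema: the field names (`proceeds_usd`, `from`, `to`, etc.) and the sample record are assumptions made for the example.

```python
def to_entry(row):
    """Map a raw privatization record to a reversed entry:
    the privatizing country is the source, the newly formed
    company is the recipient of the transferred asset."""
    return {
        "amount": row["proceeds_usd"],  # transaction value in USD
        "from": {"name": row["country"], "type": "government"},
        "to": {"name": row["company"], "type": "company"},
        "sector": row["sector"],
        "year": row["year"],
    }

# A made-up sample record, purely for illustration.
sample = {
    "country": "Russia",
    "company": "Example Oil JSC",
    "sector": "Energy",
    "year": 1995,
    "proceeds_usd": 12_500_000,
}

entry = to_entry(sample)
```

The point of the inversion is that each entry still fits a source/recipient model even though no spending occurred - the "flow" is the asset leaving state ownership.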