A few things. First, I’ve had a background in digital, data and security for a while. In fact, I was recently reflecting that my first job in the digital space was with an organization in the UK called the Central Office of Information. It’s been disbanded now - but it was a government agency responsible for government communication. And what’s interesting about the COI is that George Orwell worked there. It’s believed to be what inspired 1984 in many regards, and what the “Ministry of Truth” was modeled on. So that’s really synchronistic, in and of itself.
My first client at the COI was the Royal Air Force, with whom I worked on using digital media to recruit young pilots. What’s interesting about fighter pilots is that they peak young, at age 21 in fact. So air forces recruit them very young, with 17 usually being the youngest age at which they can legally be recruited. So I worked with the RAF to create campaigns to attract young people to join the RAF. It hadn’t dawned on me until recently how significant it was to use digital media in that way to influence people.
I then went on to various digital roles, but I think the next major milestone for me was joining Diageo, the world’s biggest beverage company, where I worked across a broad portfolio of Diageo’s global brands, helping them use the internet to engage with consumers more effectively. I tell this story because when my boss recruited me into Diageo, he said, “We need someone to help us crack Web 2.0.” Diageo was quite mature as a digital marketer at that time. They’d originally had a pretty decentralized digital marketing framework and were getting into trouble in some markets around the world, so they deployed a highly centralized platform to bring order and consistency to their digital operations worldwide. But this became an obstacle for their agencies trying to do rich, innovative work, especially as social media started to take off.
So they brought me on board to crack that problem. I led the design of their new platform, which was the first truly federated enterprise digital marketing platform. The relevance of this just dawned on me: what we realized back then was that truly centralized platforms stifle innovation and become very restrictive, but we also realized that completely decentralized models (and this was pre-blockchain, before any kind of immutable governance capability existed) became like a Wild West - very inefficient at enterprise scale, difficult to govern, and unable to support reuse of anything developed at the edges. So we developed a platform that centralized the aspects that needed to be centralized, which incidentally was mostly data, and decentralized development at the edge for speed-to-market and innovation purposes in end markets.
This was really novel, and became recognized as a best-in-class digital engagement platform by Gartner, Forrester and the like. The interesting thing as well is that when Facebook did its IPO, Diageo was their reference use case for enterprise partners - it was actually the only use case in their IPO roadshow video. It certainly achieved what I went there to do! It’s funny to think of my connection to Orwell on one hand, and Facebook on the other. But in many ways, Diageo is where my interest in data as a resource was born.
Looking at how we used data to influence people was fascinating, on one hand, but I became most interested in how we could use data to create better experiences over the internet through better personalization, relevance and customization. However, the more I looked at it, the more I realized we could only do this by sharing large amounts of personal data with third-party recommendation engines like Google and Facebook. I started to question the intentions of these companies who’d have our data, and realized that I didn’t trust their intentions - and that was a problem.
And as I looked at it more, I began to see that there were bigger issues around data. Around this time, the Arab Spring had taken off in Tunisia and Egypt, and we were experiencing a world where, on one hand, digital platforms were causing disruption and bringing big social changes, but we were also confronted with the fact that people’s lives were now in danger because of their digital footprints. We started to see activists and their journalist counterparts being exposed to serious risk in this way. So we started to see big privacy and human rights issues involved with data. At that stage, we started to ask, “How could we enable more privacy and more security for people around their use of the internet and their data?”
We identified three types of people who would be interested in privacy at that point - this was very early: captains of industry, like CEOs and senior managers; celebrities; and activists and journalists. So we started prototyping secure messaging solutions to enable activists and journalists to communicate without exposing themselves, using things like the Tor network to anonymize their communications. Our strategy back then was to deploy a secure messaging app in places like Syria where activists were in danger, and to use that to shape a narrative that our data is valuable and that we need digital rights to control it.
Then the Snowden revelations happened, and the world changed overnight. With “Snowden-gate” we realized that what was happening in Syria was also happening in London, New York and all over the world. We realized our digital footprints massively exposed us all, everywhere. And that was interesting in many ways. One of the first things we did as a result was to establish our NGO, the Internet Foundation. We realized at that point that we not only needed to look at digital rights as an extension of human rights, but that we also had to look at them from the perspective of international law. It was pointless looking at digital rights solely within the context of a nation state’s legal structure; we had to look at them on a macro scale, like human rights, and protect them globally.
As such, we launched an initiative calling on the UN to establish a Universal Declaration of Digital Rights, as an extension of the Universal Declaration of Human Rights - looking at human rights and how we define them in the digital era, while providing a mandate and legitimate framework for using technology to protect them globally, across borders.
This remains our North Star in terms of how we operate and define policy today. So that was the first thing we did post Snowden-gate. The next thing we tasked our NGO with was creating a framework for how companies should use consumer data in a responsible and sustainable way. We call this our Clean Data Charter. Our position is that if data is the oil of the twenty-first century (as is often touted), then we are creating a Clean Data framework analogous to clean energy. In that sense, the Universal Declaration of Digital Rights is about the state and the citizen, and the Clean Data Charter is about companies and consumers.
Generally speaking, we see three big considerations, and three epochs, in how people think about data. The first is privacy and security: restricting who can access our information. The second is data management: once you have security and privacy for your data, you’re more open to using your data in ways you’re comfortable with. That includes choosing who you share your data with and using your data to access services that bring value to you. The third consideration is data trading: how you can monetize your data and use it to bring you economic value as well as other forms of value.
So essentially, when we set up the Internet Foundation, we set it up to look at policy from a citizen’s rights perspective, a consumer’s rights perspective, and a sustainable development policy perspective, to enable citizen-centric, ethical and sustainable innovation. But I think the most important thing about this period was that we were able to evolve from a sole focus on security and privacy towards integrating a more comprehensive data management and data trading framework. Privacy became table stakes; once you have privacy, what you can do with it became our focus.
So we started to imagine the kinds of things you could use your data to do in a world where you have more privacy and agency. Having a separate vehicle to address advocacy, rights and policy set us up to think about innovation and commercialization of data in a much healthier and more focused way. Our Clean Data Charter has now evolved into an Ethical Data Standard that we’re developing with the British Standards Institution as an ISO standard we can use to audit companies. And we’re now engaged directly with the UN on how to move the Universal Declaration of Digital Rights forward as part of the UN’s 75th anniversary next year.
The last point brings us to how we arrived at metaMe. As we looked at the space and developed policy and standards, there were two problems we had to crack to enable ethical data marketplaces to emerge: (a) how could we enforce policies, and (b) how could we track whether companies were complying with them? That remained a tough nut to crack technically, until the blockchain movement came around. Blockchains were very exciting for us, because we finally found a technology that enabled us to audit and enforce the rules and policies for how data should be used, in a technical manner that was dependable and reliable. So for us, metaMe evolved from the ability to align blockchain technology with our policy, advocacy and governance work.
It is often said that Blockchains enable money to be exchanged as easily as information.
metaMe’s model is to use Blockchain to enable information to be exchanged as securely as money.