In my last post, I said that identity is the sharing of personal data in context, and defined the layers of personal data types that we share. I consider this to be the WHAT of identity. Now I want to look at the HOW of identity.
In some ways, I believe identity is a result of a Hierarchy of Sharing – like this:
What I’m trying to show here is that we share personal data selectively – we filter it – so that others get enough information to identify us, and so that we can express who we are (and what we believe).
There are two important points here:
- The filtering of personal data is another name for privacy – how we decide what to share, with whom, how and in what context – this is trust-based, as we’ll see in a minute
- Identity is an outcome of this filtering – we base our identities on the underlying personal data (and therefore rely on the sources of that personal data)
This helps explain how my identity is created, using my privacy filters and my personal data (this could be Self Data, Being Data, Attributed Data or Created Data – see here for what I mean by these). It also shows that it’s created in context, with my permission. But what about Inferred Data – stuff which is created about me, but by others, for others (like my credit score)? This creates a different identity – an Inferred Identity:
At the bottom is Inferred Data about you (your guessed location, your guessed intentions, your guessed financial history, your guessed age). This type of data is usually generated, stored and analysed by companies to help them drive sales and retain customers.
In order to make use of inferred data, organisations use rule-based assumptions (they need to use rules because these assumptions are processed by computers to manage large numbers of customers). The result is an inferred – and not real – identity. Who they think you are. This identity isn’t a true reflection of you; at best it’s someone similar to you. In my last post I called this a ‘hollow you’. It’s almost never held in context (they don’t really know what, where or why you are), nor is it endorsed by you (it’s all done behind organisational walls).
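To make the idea concrete, here is a minimal sketch of what rule-based inference might look like. Every rule name, threshold and field here is an illustrative assumption – not any real company’s logic – but the shape is the point: simple mechanical rules turn behavioural signals into guesses about who you are.

```python
# Hypothetical sketch: deriving an "inferred identity" from behavioural
# signals using simple rules. All rule names and thresholds are
# illustrative assumptions, not any company's actual logic.

def infer_identity(signals: dict) -> dict:
    """Apply rule-based assumptions to raw signals about a customer."""
    inferred = {}
    # Guessed age band, from which offers were redeemed
    if "student_discount_used" in signals.get("offers", []):
        inferred["age_band"] = "18-24"
    # Guessed location, from the most frequent delivery postcode
    postcodes = signals.get("delivery_postcodes", [])
    if postcodes:
        inferred["location"] = max(set(postcodes), key=postcodes.count)
    # Guessed intention, from browsing history
    if signals.get("pram_page_views", 0) > 3:
        inferred["intention"] = "expecting_a_child"
    return inferred

profile = infer_identity({
    "offers": ["student_discount_used"],
    "delivery_postcodes": ["SW1", "SW1", "N1"],
    "pram_page_views": 5,
})
# The result is a guess about "someone similar to you", built without
# context and without the person's endorsement.
```

Note that nothing in this process asks the person whether the guesses are right – which is exactly why the resulting identity is a ‘hollow you’.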
So if we agree that a real identity has to be based on real personal data, shared in context, we should look at this idea of privacy in more detail – how we get from data to identity.
My privacy means “I can trust you”
I choose to share things with those I trust. The more I trust, the more I am likely to share. The less I trust, the less I am likely to share. Trust and sharing are directly correlated. Privacy is about choosing what not to share. So it’s not a leap to say that ‘my privacy’ is simply the set of rules I use for trust-based sharing.
Put another way, privacy is a way to ensure I can trust you, so that when I share information about myself I can believe that it will be handled in the right way (more on that in another post soon).
I said earlier that identity is how I express who I am (or what I believe) to others. This idea that privacy is a filter begins to make sense: privacy is choosing what clothes I’m happy for others to see me wearing (or indeed not – see this great post on Clothing as Privacy System); it’s choosing what music I’m happy for others to hear me listening to; it’s choosing what religious (or indeed non-religious) words I’m happy for other people to hear me speaking; or what medical information I’m happy to tell other people about. Privacy is a filter.
My identity means “You can trust me”
Trust has to be between people. It’s not something that exists on its own (I don’t need to trust myself). So really trust is how certain you can be of others’ identity (indeed this makes sense: if we are not sure WHO someone is, we trust them less).
In other words, my identity means you can trust me. But how do you know that it’s my real personal data being used – how can you prove my identity? Regardless of the type of data being shared (be that my education qualifications, my bank statement, my history of eating out or my driving license) I think there are two types of trust here: direct and indirect.
Direct Trust

This is where we have a direct relationship – personal experience that lets us be certain of a behaviour or outcome e.g. “I’ve worked with him for 20 years – I trust him to turn up on time”, “I have eaten here before – I trust this restaurant to serve great food” or “I taught her to drive – I trust her not to crash my car”. Security technology also enables direct trust. For example, I log on to work systems using my user name and password – my employer (or rather their systems) can trust who I am because we have a direct relationship.
Indirect Trust

This is where we have no direct relationship, so we have to rely on another person or group to validate the person’s identity (who they are or what they believe). Usually this is a professional body or company e.g. “He has trained for 7 years at medical school – I trust his diagnosis” or “She has a taxi license – I trust her to take me to the station”, or indeed “This man has a plastic ID with a company logo – I trust him to come and take the gas meter reading”.
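Indirect Trust can be sketched as a chain: I don’t trust the claim itself, I trust whoever issued it. The issuer names and claim fields below are made up for illustration.

```python
# Sketch of Indirect Trust: with no direct relationship, I rely on a
# third party (a professional body, a licensing authority) to validate
# an identity claim. Issuer names are invented for illustration.

TRUSTED_ISSUERS = {"General Medical Council", "City Taxi Authority"}

def indirect_trust(claim: dict) -> bool:
    """I trust the claim only because I trust whoever issued it."""
    return claim.get("issuer") in TRUSTED_ISSUERS

# A doctor's credential, vouched for by a body I already trust:
print(indirect_trust({"holder": "Dr Smith",
                      "credential": "physician",
                      "issuer": "General Medical Council"}))  # True
```

Notice that my trust in Dr Smith is only ever as strong as my trust in the issuer – which is exactly the weakness the next section turns to.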
Trust through networks
I believe we’ve moved from a world of mostly Direct Trust (where we knew everyone around us well) to one of increasing Indirect Trust (where we know many more people, but less well). On a daily basis we now connect, produce, share, learn and play with people from all over the world. New communication tools – most obviously the internet – mean that we can now connect with many more people than previously possible. Some of these people are on the other side of the world, but most are much closer to us, even local. As we discover and connect with new people – wherever they are – we have an increasing need for Indirect Trust. Our digital world makes this even more important, now that copying and distributing data have near-zero marginal cost. It’s easier than ever to copy and share our personal data – and indeed our identities.
So how do we manage trust in a world in which it’s increasingly difficult to validate sources of data, and indeed to trust those providing Indirect Trust?
The answer is networks. Now, I don’t want to open up a discussion on security or technology here, but I think it’s worth thinking about how relationship networks can help us solve some of these problems.
Direct Trust is created between two people, one-to-one. Over the last few years social networks have gone some way to helping us build Direct Trust, mainly because we have to validate each other in order to share privately (e.g. we must both ‘friend’ or ‘follow’ each other). Sharing has increased as a result. Indeed, in general, a generation of ‘digital natives’ share far more about themselves than previous generations were comfortable doing. Social networks have enabled this culture, not created it – sharing is inherently human, and not something new – the networks have just helped us connect with each other in a trusted way.
But I think social networks as we use them today are limited in how they build and foster trust. Firstly, so many of our social networks are just massive – we are connected with too many people (far more than Robin Dunbar’s research suggests we can really know) and so we can’t truly know everyone, or trust everyone directly.
Secondly, social network relationships are ON or OFF: we are Friends or not; Colleagues or not; Followers or not. So we are forced into open or closed relationships with everyone we choose to connect with (and as a result share everything we make public – from wish lists and party photos to requests for help, updates of where we are and what we’re doing, and gossip). This, as we have seen with Facebook, means we are often given overly-complicated privacy controls so we can manually tinker with exactly what we want to share and with whom. (And of course we can do the opposite with notification controls, manually tinkering with who shares what with us.) In my experience, and from observing others around me, it’s all a bit complicated: most people end up leaving the settings fully open or fully closed. As a result, we tend to put up with over- (or indeed under-) sharing. This doesn’t build trust – in fact it undermines it.
Ultimately it becomes difficult to have anything other than Indirect Trust with the vast number of people in our networks. Previously I’ve tried to show that identity is all about context. What the current set of social networks lack is deep context. I can be connected to a friend-of-a-friend I met at a party once, but I can’t necessarily trust them outside the context of that party. And I can be a ‘friend’ to a multinational corporation, but I can’t necessarily trust all their products or services.
What we need is smarter ways to build Indirect Trust through networks, with context. And that’s why I’m excited about companies like www.connect.me, a US start-up looking to put the relationship back into the network. I’m looking forward to a time when I don’t need to ‘friend’ or ‘follow’ a person or company to build trust – when we can vouch for each other in context e.g. for a friend’s cooking skills or fluency in a language, or for a company’s customer experience, or indeed a particular product (especially if that’s all I want to vouch for).
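A toy sketch helps show what contextual vouching could look like – in the spirit of what connect.me is exploring, though the data shapes here are entirely my own assumptions. Instead of a binary ‘friend’ link, a vouch is scoped to a named context:

```python
# Hypothetical sketch of contextual vouching: a vouch is scoped to a
# named context, so trust in one context says nothing about another.
# Data shapes are illustrative assumptions.

from collections import defaultdict

vouches = defaultdict(list)  # (person, context) -> list of vouchers

def vouch(voucher: str, person: str, context: str) -> None:
    vouches[(person, context)].append(voucher)

def trust_in_context(person: str, context: str) -> int:
    """Indirect trust is scoped: vouches for cooking say nothing about driving."""
    return len(vouches[(person, context)])

vouch("alice", "bob", "cooking")
vouch("carol", "bob", "cooking")
vouch("alice", "bob", "driving")
print(trust_in_context("bob", "cooking"))  # 2
print(trust_in_context("bob", "driving"))  # 1
print(trust_in_context("bob", "plumbing"))  # 0 - no vouches, no trust
```

The design choice is that trust never generalises beyond the context it was earned in – which is just the restatement, in code, of ‘identity is contextual, and therefore so is trust’.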
We do of course need new tools and standards to help us authenticate and share data sources, and better, intuitive tools to help us manage privacy around our personal data. Much of this is being accelerated by those working in the Vendor Relationship Management community. But first we need to recognise that identity is contextual, and therefore so is trust – and that we need smarter ways to manage trust in context.