[So I’ve been meaning to write a new post for a while now. In fact for about a year. The last 12 months have been great thinking time and great family time. But it’s now time to get back online properly and put some new posts up. For people to read, for people to think about; hopefully for people to discuss and respond. Make of it what you will – I’m all ears. And eyeballs. And keen to hear and see what you think. Most of all I want to start a discussion. Spark some conversations. Perhaps some relationships. And you never know, maybe even some transactions. (For those that don’t get the reference – go and read #Cluetrain.) Welcome everybody, I’m back.]
Today’s thoughts are brought to you by the letter P and the number 1. P stands for Personalisation and ‘1’ stands for the fact that there is only one you. Whilst there may be many versions of ourselves we want to show at any one time, there’s only one real you (for a bit of background on some of that, see my previous posts).
You see, it all starts with internet advert click-through rates. The last time I looked – and I must confess that I haven’t checked for a while (but very much doubt that things are much different today) – on average, internet banner adverts in the US get a click-through rate of around 0.1% (that’s one in 1,000, to be clear).
(Not for nothing, but one of my favourite facts from this same study is that Facebook achieves half of this (one in 2,000), whilst Google gets four times this (one in 250) – eight times that of Facebook. Curious indeed.)
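Those ratios (taking the quoted figures at face value) work out as follows:

```python
# Rough click-through-rate arithmetic, using the figures quoted above.
average_ctr = 1 / 1000          # ~0.1% for US banner ads overall
facebook_ctr = average_ctr / 2  # half the average -> 1 in 2,000
google_ctr = average_ctr * 4    # four times the average -> 1 in 250

print(f"average:  1 in {round(1 / average_ctr)}")     # 1 in 1000
print(f"facebook: 1 in {round(1 / facebook_ctr)}")    # 1 in 2000
print(f"google:   1 in {round(1 / google_ctr)}")      # 1 in 250
print(f"google vs facebook: {google_ctr / facebook_ctr:.0f}x")  # 8x
```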
Anyway, my point is this: the online advertising machine – the same one that over-valued Facebook at their IPO and according to Doc Searls is in a bubble – needs ever more data every day to stay relevant. It needs to gather up increasing amounts of juicy data about things like our habits, preferences, transactions, shopping trips and travel plans. In an effort to improve a terribly low rate of 0.1%, it’s understandable that marketing teams all over the world are asking for more data. Data about you, about people like you, and about people like people like you. And their friends.
The thing is, as they gather more data, it all ends up on a spectrum of ‘use-value’ (as opposed to ‘sale value’ – see this great post for more). At one end of the spectrum the use-value is zero. The data is wildly wrong; it clogs up the customer relationship management (CRM) tools of our businesses, and fuels badly targeted ads and other ‘personalised’ services which in turn clog up our daily lives and devices.
At the other end of the spectrum the use-value is huge. The data can be used to deliver helpful, timely and relevant services to the right person at the right time. But the problem here is two-fold: First, the service is often ‘targeted’ at a ‘consumer’ that is ‘owned’ by an organisation. The purpose is purely commercial, and is really just about finding qualified leads for another sale. The individual is rarely actually involved. Second, it’s only when the data gets really accurate that we begin to ask questions about who has our data and why. We start to ask questions about trust.
When an organisation has worked out where you are, what you’re doing, possibly who you’re with, what you might need and when, we begin to wonder what else they know, what other data they have and what else might it be used for. It’s taken the public debate around PRISM both in the USA and Europe for us to reflect on what data we’re comfortable sharing with others (perhaps in the name of security) and what data we’d rather keep to ourselves.
And so personalisation is a tricky business. It’s about finding the balance of use-value for the individual and use-value for the organisation. When we get it right, everyone wins. But when we don’t, our businesses waste time and money and our lives get noisy; worse, we can sometimes lose trust – not only in the people or organisations involved, but in the whole system.
This is the first post about personalisation. It’d be good to get your thoughts – which companies get it right and why? What do they do that’s different?
I’ll bet that some have found a better way to spend money than on a 1-in-a-1000 advertising lucky dip. And they don’t irritate us in the process.
In my last post, I said that identity is the sharing of personal data in context, and defined the layers of personal data types that we share. I consider this to be the WHAT of identity. Now I want to look at the HOW of identity.
In some ways, I believe identity is a result of a Hierarchy of Sharing – like this:
What I’m trying to show here is that we share personal data selectively – we filter it – so that others get enough information about us to identify us, and so that we can express who we are (and what we believe).
There are two important points here:
- The filtering of personal data is another name for privacy – how we decide what to share, with whom, how and in what context – this is trust-based, as we’ll see in a minute
- Identity is an outcome of this filtering – we base our identities on the underlying personal data (and therefore rely on the sources of that personal data)
This helps explain how my identity is created, using my privacy filters and my personal data (this could be Self Data, Being Data, Attributed Data or Created Data – see here for what I mean by these). It also shows that it’s created in context, with my permission. But what about Inferred Data – stuff which is created about me, but by others, for others (like your credit score)? This creates a different identity – an Inferred Identity:
At the bottom is Inferred Data about you (your guessed location, your guessed intentions, your guessed financial history, your guessed age). This type of data is usually generated, stored and analysed by companies to help them drive sales and retain customers.
In order to make use of inferred data, organisations use rule-based assumptions (they need to use rules because these assumptions are processed by computers to manage large numbers of customers). The result is an inferred – and not real – identity. Who they think you are. This identity isn’t a true reflection of you; at best it’s someone similar to you. In my last post I called this a ‘hollow you’. It’s almost never held in context (they don’t really know what, where or why you are), nor is it endorsed by you (it’s all done behind organisational walls).
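To make the idea concrete, here is a toy sketch of rule-based inference – every rule, field and segment below is invented for illustration, not taken from any real system:

```python
# Hypothetical rule-based inference: each rule turns observed behaviour
# into a guessed attribute. Nothing here is confirmed by the person.
observed = {
    "postcode": "SW1A",
    "purchases": ["pram", "nappies"],
    "browse_hours": [22, 23, 1],
}

# Each rule is (test, (attribute, guessed value)) -- pure assumption.
rules = [
    (lambda d: "pram" in d["purchases"], ("life_stage", "new parent")),
    (lambda d: d["postcode"].startswith("SW1"), ("income_band", "high")),
    (lambda d: any(h >= 22 or h <= 4 for h in d["browse_hours"]),
     ("segment", "night owl")),
]

# The 'hollow you': an identity assembled entirely from guesses.
inferred_identity = {key: value for test, (key, value) in rules if test(observed)}
print(inferred_identity)
# {'life_stage': 'new parent', 'income_band': 'high', 'segment': 'night owl'}
```

Note that every entry is plausible yet unverified – which is exactly why this inferred identity is, at best, someone similar to you.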
So if we agree that a real identity has to be based on real personal data, shared in context, we should look at this idea of privacy in more detail – how we get from data to identity.
My privacy means “I can trust you”
I choose to share things with those I trust. The more I trust, the more I am likely to share. The less I trust, the less I am likely to share. Trust and sharing are directly correlated. Privacy is about choosing what not to share. So it’s not a leap to say that ‘my privacy’ is simply the set of rules I use for trust-based sharing.
Put another way, privacy is a way to ensure I can trust you, so that when I share information about myself I can believe that it will be handled in the right way (more on that in another post soon).
I said earlier that identity is how I express who I am (or what I believe) to others. This idea that privacy is a filter begins to make sense: privacy is choosing what clothes I’m happy for others to see me wearing (or indeed not – see this great post on Clothing as Privacy System); it’s choosing what music I’m happy for others to hear me listening to; it’s choosing what religious (or indeed non-religious) words I’m happy for other people to hear me speaking; or what medical information I’m happy to tell other people about. Privacy is a filter.
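The filter idea can be sketched as a small set of trust-based sharing rules (the audiences, data fields and rules here are all hypothetical):

```python
# Privacy as a filter: what I share depends on who is asking.
# All audiences, fields and rules below are invented for illustration.
personal_data = {
    "music_taste": "jazz",
    "religion": "private",
    "medical": "hay fever",
    "clothes": "band t-shirt",
}

# My sharing rules: per audience, which fields pass through the filter.
sharing_rules = {
    "close_friend": {"music_taste", "clothes", "medical"},
    "colleague": {"music_taste", "clothes"},
    "stranger": {"clothes"},
}

def my_identity(audience: str) -> dict:
    """The identity I present is my data, passed through my privacy filter."""
    allowed = sharing_rules.get(audience, set())
    return {k: v for k, v in personal_data.items() if k in allowed}

print(my_identity("colleague"))  # {'music_taste': 'jazz', 'clothes': 'band t-shirt'}
print(my_identity("stranger"))   # {'clothes': 'band t-shirt'}
```

The same underlying data yields a different identity per audience – the filter, not the data, is what changes.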
My identity means “You can trust me”
Trust has to be between people. It’s not something that exists on its own (I don’t need to trust myself). So really trust is how certain you can be of others’ identity (indeed this makes sense: if we are not sure WHO someone is, we trust them less).
In other words, my identity means you can trust me. But how do you know that it’s my real personal data being used – how can you prove my identity? Regardless of the type of data being shared (be that my education qualifications, my bank statement, my history of eating out or my driving license) I think there are two types of trust here: direct and indirect.
Direct Trust

This is where we have a direct relationship – personal experience so that we can be certain of a behaviour or outcome e.g. “I’ve worked with him for 20 years – I trust him to turn up on time”, “I have eaten here before – I trust this restaurant to serve great food” or “I taught her to drive – I trust her not to crash my car”. Security technology also enables direct trust. For example I log on to work systems using my user name and password – my employer (or rather their systems) can trust who I am because we have a direct relationship.
Indirect Trust

This is where we have no direct relationship so we have to rely on another person or group to validate the person’s identity (who they are or what they believe). Most usually this is a professional body or company e.g. “He has trained for 7 years at medical school – I trust his diagnosis” or “She has a taxi license – I trust her to take me to the station”, or indeed “This man has a plastic ID with a company logo – I trust him to come and take the gas meter reading”.
Trust through networks
I believe we’ve moved from a world of mostly Direct Trust (where we knew everyone around us well) to one of increasing Indirect Trust (where we know many more people, but less well). On a daily basis we now connect, produce, share, learn and play with people from all over the world. New communication tools – most obviously the internet – mean that we can now connect with many more people than previously possible. Some of these people are on the other side of the world, but most are much closer to us, even local. As we discover and connect with new people – wherever they are – we have an increasing need for Indirect Trust. Our digital world makes this even more important, when copying and distributing data have zero marginal cost. It’s easier than ever to copy and share our personal data – and indeed our identities.
So how do we manage trust in a world in which it’s increasingly difficult to validate sources of data, and indeed be able to trust those providing Indirect Trust?
The answer is networks. Now, I don’t want to open up a discussion on security or technology here, but I think it’s worth thinking about how relationship networks can help us solve some of these problems.
Direct Trust is created between two people, one-to-one. Over the last few years social networks have gone some way to helping us build Direct Trust, mainly because we have to validate each other in order to share privately (e.g. we must both ‘friend’ or ‘follow’ each other). Sharing has increased as a result. Indeed, in general, a generation of ‘digital natives’ share far more about themselves than previous generations are comfortable doing. Social networks have enabled this culture, not created it – sharing is inherently human, and not something new – the networks have just helped us connect with each other in a trusted way.
But I think social networks as we use them today are limited in how they build and foster trust. Firstly, so many of our social networks are just massive – we are connected with too many people (far more than Robin Dunbar’s famous number of around 150 would suggest we can genuinely know) and so we can’t truly know everyone, or trust everyone directly.
Secondly, social network relationships are ON or OFF: we are Friends or not; Colleagues or not; Followers or not. So we are forced into open or closed relationships with everyone we choose to connect with (and as a result share everything we make public – from wish lists and party photos to requests for help, updates of where we are and what we’re doing, and gossip). This, as we have seen with Facebook, means we are often given overly-complicated privacy controls so we can manually tinker with exactly what we want to share and with whom. (And of course we can do the opposite with notification controls: we can manually tinker with who shares what with us.) In my experience, and from observing others around me, it’s all a bit complicated: most people end up leaving the settings as open or closed. As a result, we tend to put up with over- (or indeed under-) sharing. This doesn’t build trust – in fact it undermines it.
Ultimately it becomes difficult to have anything other than Indirect Trust with the vast number of people in our networks. Previously I’ve tried to show that identity is all about context. What the current set of social networks lack is deep context. I can be connected to a friend-of-a-friend I met at a party once, but I can’t necessarily trust them out of the context of that party. And I can be a ‘friend’ to a multinational corporation, but I can’t necessarily trust all their products or services.
What we need is smarter ways to build Indirect Trust through networks, with context. And that’s why I’m excited about companies like www.connect.me, a US start-up looking to put the relationship back into the network. I’m looking forward to a time when I don’t need to ‘friend’ or ‘follow’ a person or company to build trust – when we can vouch for each other in context e.g. for a friend’s cooking skills or fluency in a language, or for a company’s customer experience, or indeed a particular product (especially if that’s all I want to vouch for).
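To illustrate the kind of context-specific vouching I mean (this is my own sketch of the idea, not how connect.me actually works):

```python
# Hypothetical context-specific vouching: trust is built per context,
# not as a blanket 'friend' flag. Names and contexts are invented.
from collections import defaultdict

vouches = defaultdict(set)  # (person, context) -> set of vouchers

def vouch(voucher: str, person: str, context: str) -> None:
    vouches[(person, context)].add(voucher)

def trust(person: str, context: str) -> int:
    # Trust in a context is simply how many people vouch in that context.
    return len(vouches[(person, context)])

vouch("alice", "bob", "cooking")
vouch("carol", "bob", "cooking")
vouch("alice", "bob", "driving")

print(trust("bob", "cooking"))  # 2 -- two people vouch for Bob's cooking
print(trust("bob", "french"))   # 0 -- no vouches outside that context
```

The point of the sketch: Bob’s trustworthiness as a cook says nothing about his French – trust stays bounded to the context in which it was earned.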
We do of course need new tools and standards to help us authenticate and share data sources, and better, intuitive tools to help us manage privacy around our personal data. Much of this is being accelerated by those working in the Vendor Relationship Management community. But first we need to recognise that identity is contextual, and therefore so is trust – and that we need smarter ways to manage trust in context.
So recently I’ve been trying to get my head around identity, privacy, trust and personal data. And how it all fits together, and why that’s important.
I thought it’d be best to go back to basics and try to define identity, so here are a few thoughts.
Identity is context
If identity is how you express yourself to others – a statement of what you believe, ‘who’ you are – then it must vary by who you’re with, where you are, what you are doing, when you are doing it, and why. For example, my identity can be a number of things.
- Football supporter (e.g. who I’m with)
- Temple-goer (e.g. what I believe in)
- Volunteer (e.g. what I’m doing)
- Soldier (e.g. what I’m wearing)
- Employee or pupil (e.g. where I am)
- Conference attendee (e.g. what interests me)
I can of course be any of these things at the same time (I can be a football supporter at work, or a soldier while volunteering), but the important thing is that it’s all about the context.
Identity is sharing
If identity is about you in a context, then it must also be about how others perceive you in that context (indeed we often say “I can identify WITH her” – that we have a connection with each other in some way). This means that by definition, identity has to be about sharing – sharing things about ourselves with others – so that an opinion can be formed.
But what is being shared?
- The logo from a personal device?
- A uniform?
- A username and password?
- A medical record?
Of course it’s all of these and much more. It’s all personal though – all Personal Data.
Identity is personal
Before we can understand identity we must first look at personal data.
What is personal data? It’s clear to me that this term means so many different things to different people. I think now is a good time to define what exactly personal data is, and how different types of personal data have different characteristics. So here are some terms I think are helpful to explain what Personal Data means.
Self Data

This is the stuff you’re born with – blood type, sex, fingerprint, genome information, date and location of birth. It’s both self-evident and usually captured at birth. A lot of it is called your ‘biometric’ data.
The value here is two-fold: as a set of data it is entirely personal to you and can’t be duplicated; and it’s perpetual – it will never change.
Being Data

These are the types of data that are a result of being alive. Height, weight, religion, sexual orientation, BMI, shoe size and health diagnoses are all types of Being Data. These types of data are likely to be steady-state for most of your life, though are subject to change during times of physical, emotional or spiritual growth or upheaval. They’re captured in various ways; most often in medical records.
The value here is being able to see patterns of cause and effect, and to influence behaviours throughout our lives.
Attributed Data

This is the data that is attributed to you by others, or is data that you claim yourself (and which is usually validated by a 3rd party). This includes education, awards, achievements and most government data held about you – driving license data, criminal record, National Insurance number and census data.
Attributed Data is also any record of things you own (or look after) – including your car, your credit card, white goods, pets and mobile phones. As such, it also includes your contact details: your address, phone number, email address and social media handles e.g. Skype, Twitter etc. (of which of course, you can have multiple versions e.g. work email and personal email).
The value of Attributed Data is that we can determine levels of trust: so that we can both trust others and ask others to trust us.
Created Data

This is pretty much everything you generate and produce yourself: photos and videos, status updates, browsing and shopping history, banking and financial records, plus the records of all your communications (phone calls, SMS etc.). It’s also your intentions, your wish-lists and personal ‘check-ins’ (a la Foursquare).
Importantly, Created Data must be published somewhere, even if that’s in your own private documents.
Created Data reflects snapshots of you, frozen in time. The value is that together this data tells a story about you, the richness of which increases over time.
Inferred Data

This is the last category of personal data; and it’s not really personal. Inferred Data is data that someone (or something) has assumed about you. This data is not produced for you, nor is it generated on your behalf; it is there to serve some other purpose. Some would say this is so that a company can better ‘target’, ‘acquire’ or ‘own’ you.
Inferred Data includes your credit score and other segmentation data that companies hold about you. The value of inferred data is ultimately for organisations to make sense of their customers.
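Bringing the five categories together, here’s one way to summarise them as a tiny data model (the one-line descriptions are my own shorthand for the definitions above):

```python
from enum import Enum

class PersonalDataType(Enum):
    """The five categories of personal data described above."""
    SELF = "what you're born with (blood type, fingerprint, date of birth)"
    BEING = "a result of being alive (height, weight, shoe size, diagnoses)"
    ATTRIBUTED = "attributed or claimed, usually validated (licence, awards)"
    CREATED = "what you generate yourself (photos, messages, wish-lists)"
    INFERRED = "assumed about you by others (credit score, segmentation)"

# The 'real me' is built from the first four types; the 'hollow me'
# from the last -- inferred, not real.
REAL_ME = {PersonalDataType.SELF, PersonalDataType.BEING,
           PersonalDataType.ATTRIBUTED, PersonalDataType.CREATED}
HOLLOW_ME = {PersonalDataType.INFERRED}
```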
It became clear to me as I was writing this that these different types of personal data are actually layered upon each other, a bit like Maslow’s Hierarchy of Needs. See the diagram below.
Self Data is the core of us – indeed it’s used by forensic and security teams to prove who we are. Being Data is a by-product of living our lives; of interacting with each other and of changing as we grow older. Attributed Data is the next layer, which in Maslow’s terms is social: data about things we are responsible for and about how we interact with our community/society. Created Data is at the top of the hierarchy, as this is about self-esteem and self-expression (note that much of this is a product of our digital society – consider how much Created Data exists for a homeless person, or a Buddhist monk).
You’ll see that there are two halves – the ‘real me’ on top – visible and shared – and the ‘hollow me’ underneath – not usually visible to us, as it’s held by others (usually organisations). I’ve done this deliberately to show that the real me is my actual data – my actual medical records, my actual location, my actual intentions. This is very different from Inferred Data, which is really guesswork, and which is produced by – and only ever for – others.
You can see that Inferred Data can reflect any type of personal data – inferred sex or age, inferred state of health, inferred level of education, inferred address, inferred financial activity, inferred location, inferred intentions. Inferred isn’t real; it’s based on assumptions.
So it’s clear – to me at least – that identity is contextual, about sharing and about personal data. And that inferred personal data can result in an inferred identity – a ‘hollow me’.
I believe these ideas underpin what we mean by privacy and trust, but more on that another time.