This is the third in a series of posts about why personalisation needs context.
In my last post I wrote about Joe Pine’s idea that we can create new value by customising goods, services and experiences. And I wrote about how I believe we’re starting to do the next logical thing: we are customising experiences to create personalised ‘Moments’.
In this post I want to look at how companies differentiate themselves – and why so many organisations get it so wrong when it comes to customising services and experiences. And what this means for this idea of Moments.
Let me start with an example. I was recently travelling into work when I received a text from my mobile phone company telling me about a new loyalty offer from one of their ‘trusted partners’. It was a coupon to use that day at a local coffee shop. The problem was that the coupon had been sent to me based on where I was at the time. Which was on a train travelling at about 50mph through that particular area.
It was a waste of a text. A waste of my time and attention. And a waste of money for the trusted partner (who I’m sure had been sold a ‘targeted loyalty solution’ as part of a ‘customer engagement service’ from the mobile operator). Even if they could have predicted that I wanted a coffee at that moment – ignoring the fact that I’d bought one before getting on the train – they’d failed to understand the meaning of my location. They were guessing.
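To make the point concrete, here’s a minimal sketch – entirely hypothetical, since I obviously don’t know the operator’s actual system, data or thresholds – of how two recent location fixes could be used to estimate a customer’s speed and suppress a location-triggered offer when they’re clearly just passing through:

```python
import math
from dataclasses import dataclass

@dataclass
class LocationFix:
    lat: float        # degrees
    lon: float        # degrees
    timestamp: float  # seconds since epoch

def speed_mph(a: LocationFix, b: LocationFix) -> float:
    """Rough speed between two fixes, using an equirectangular
    approximation (fine over the short distances involved here)."""
    dx = math.radians(b.lon - a.lon) * math.cos(math.radians((a.lat + b.lat) / 2))
    dy = math.radians(b.lat - a.lat)
    metres = math.hypot(dx, dy) * 6_371_000       # Earth's mean radius
    seconds = max(b.timestamp - a.timestamp, 1e-9)
    return metres / seconds * 2.236936            # m/s -> mph

def should_send_offer(prev: LocationFix, curr: LocationFix,
                      walking_limit_mph: float = 4.0) -> bool:
    """Only trigger a location-based offer if the customer appears to be
    dwelling near the venue, not travelling through at speed."""
    return speed_mph(prev, curr) <= walking_limit_mph
```

Even this crude check – is the customer moving faster than walking pace? – would have caught the train case.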
(As an aside, I’ve just looked through the other ‘loyalty’ offers texted to me by my mobile operator. There’s some insight here if you bother to look: a cider offer (I expect based on my demographic profile), very expensive headphones (again, my demographic?), an invitation to visit a newly-decorated department store (based on location), a beer offer (I don’t drink beer), a discount sofa range (who knows), lottery tickets (I don’t play), a lunch offer (the closest match – sent at lunch time), and a deal for an ice lolly. All waste. All diluting their marketing message. And all reminding me of Don Marti’s brilliant writing about ads and signalling here.)
So I started thinking about this more generally – why and how so many organisations make these assumptions, and waste time, effort, money and the relationship with the customer. Why guesswork is considered the smartest way to engage with customers.
Looking back at Joe Pine’s ideas about Mass Customisation, he describes how companies are able to differentiate: with products, it’s mainly about price. With services, he believes it’s about improving quality. And with experiences, it’s all about being authentic – or rather, two specific types of authenticity:
- Being true to others – doing what you say you will
- Being true to yourself – being consistent about who you are
By mapping these two types onto a ‘two-by-two’ matrix, he shows that there are four possible types of experience. Here’s a picture of what he means (some are my own examples):
Top right: It’s an authentically ‘real’ experience. They are themselves and do what they say they will. Like going to a traditional Italian family restaurant – they take a very real pride in serving you and discussing suitable wines, insisting that you sample their home-made tiramisu, and giving you an espresso on the house, because it’s what they are passionate about.
Top left: It’s kind of authentic, but there’s something missing. They do what they promise but they really aren’t truly being themselves. Like being served by the clichéd Fast Food Burger Guy – yes, he sells you a burger, but he doesn’t love his job. His “have a nice day” as you leave feels empty.
Bottom right: It’s an authentic experience, but it’s a ‘real fake’. Like going to Disneyland – in every way it’s about family entertainment, but you’re not really in the Magic Kingdom.
Bottom left: It’s a completely fake experience. Like being the victim of a phishing attack – the people contacting you are not who they say they are, and don’t do what they promise.
I like Pine’s thinking about these two types of authenticity. So it got me thinking about how I could look at customised experiences – my idea of ‘Moments’ – in the same way.
How can companies differentiate themselves when it comes to personalisation?
I believe there are two things organisations will need to understand:
- Who I am. But not just my identity generally – it has to be ‘who I am right now’. They’ll need to understand my persona – who I want to be seen as – because this changes over time. At some times of day I’m a parent, at other times I’m a football supporter, sales executive, friend and so on.
- What matters to me. Again, this doesn’t just mean ‘my preferences’, but what I want or need right now. Am I looking for advice? Am I sharing with friends? Am I shopping or just browsing?
So like Pine’s view of experiences, I’ve had a go at mapping out personalised Moments in the same way. Here’s my view of the four outcomes:
Top left: When the organisation asks me who I am, but then guesses what matters right now: it’s Facebook (and other ‘social commerce’).
- Them: “Please sign in and let us know exactly who you are.”
- Me: “Hi, it’s me again.”
- Them: “Hey! We’ve been following your activities online (and in some places offline too), and what your friends have been saying recently, and we thought you might be interested in this Expensive Car. And an ‘Ice Watch’. And a ‘Dream holiday in Malaysia’.”*
*these are not random examples – I just logged into Facebook to see what ads were being shown to me…
Top right: When the organisation asks me who I am, but asks me about what matters right now: it’s BillMonitor.com.
- Them: “Hi, how can I help you?”
- Me: “I’m looking for a new mobile deal.”
- Them: “Ah, what kind of deal are you looking for? We can help you look through all the options on the market right now, but if you let us know how you use your phone we can find you the best fit.”
- Me: “Well here’s how I’ve used my phone for the last few months….”
Bottom right: When they don’t know who I really am, but go some way to asking what I want: it’s uSwitch (and other price comparison websites).
- Them: “Hi, what are you looking for today?”
- Me: “Here’s some basic information about what I need.”
- Them: “Great! Here’s a list of things you might be interested in, but because we don’t really know who you are (and can’t verify any of the stuff you just told us) you’ll have to speak to the providers directly.”
Bottom left: When they don’t know who I am and completely guess what I want: it’s spam.
- Them: “Would you like to buy this car?”
- Me: “How did you get this address? Please delete me from your database.”
- Them (8 minutes later): “Would you like to buy this car? Or this one? Or this one?”
Back to my texted-coupon-on-a-train example. I wrote earlier that they had failed to understand the meaning of my location – or rather, the context of my location. I believe that personalisation goes wrong when no one’s asking about the customer’s context, or no one’s listening. It’s being sent a ‘targeted’ advert for a car, not knowing you just joined a car club. It’s being recommended a book on Amazon based on your shopping history, not knowing you actually hate the author (your previous purchase was for a friend). It’s being sent coupons for pregnancy products based on your shopping history, when you haven’t yet told your family you are expecting a baby.
The opportunity here is to help organisations understand their customers better. Who they are and what matters to them – at that moment. The problem is that this is very hard to do at Internet scale. Which is why most organisations rely on standardising processes, service models and customer experiences. It’s the easiest answer when all your customers look the same, and you’ve had 150 years of practice. (Though it’s unfortunate that standardisation like this can unintentionally kill innovation.)
I continue to be excited by everything going on around Personal Clouds, Trust Frameworks and Vendor Relationship Management. Because I believe that these ideas and others will help organisations build relationships with individuals, not just drive transactions. And in doing so they’ll be able to listen to, and understand, customer context.
It’s worth noting one point of caution here, made by Doc Searls after I put up my last blog post. Almost all personalisation as we know it today is ‘vendor-side’ and not ‘customer-side’. Put simply, it’s usually done TO us – or at best WITH us – rather than done BY us, the customer. You know, the people paying for stuff. My simple view is that personalisation is all about understanding Moments; this means listening to – and likely for – customer signals.
I believe we’re about to enter a very interesting few years where customers will start to have tools to express who they are, and what they want – when it matters. And organisations will be able to listen, and address those customers directly. I think it was Doc who a while back reminded us that we overestimate what will happen in two years, but underestimate what will happen in ten…
It’s an exciting time indeed.
Relationships are everything.
Relationships are the reason we look after each other, the reason we reproduce, the reason we form groups and ultimately the reason we evolve. Relationships are simply one of the fundamental parts of being human.
These relationships with others – our family, our friends, our neighbours, our colleagues, our customers – are all based on different levels of trust. When we talk about ‘deep’ or ‘strong’ relationships, we just mean that we trust each other a lot, sometimes unconditionally. It’s obvious then – but worth making the point – that when we don’t trust each other we form weaker or perhaps shallower relationships. Trust and relationships are not only related, but symbiotic. They need and feed off each other.
Personal data is naturally one of those things we share with those we trust. Information about who we are, what we are doing, where we are, our physical and emotional selves and so on. But sometimes when we share personal data, trusting that what’s shared will be handled with care, there are unintended or unwanted outcomes. In this post I wanted to look at those unwanted outcomes from sharing personal data, and some of the steps we take to manage it.
Who said you could do that?
When we share our personal data, it’s sometimes used in ways we don’t agree with, in ways we didn’t sign up to. I think there are three of these outcomes…
- Being contacted without permission (or good reason)
- Being impersonated without permission
- Being exposed without permission (or good reason)
When we have a relationship, often implicitly or perhaps culturally we agree the rules of engagement – how often, when and where we are happy to contact each other. And because we have a relationship, we are able to set those boundaries (and reset them when they are crossed). But sometimes we are contacted by people without our permission or good reason, and by people or companies with whom we have no relationship. So the first of the unwanted outcomes is about spam, stalking and unsolicited advertising. In order to contact us, people either need to obtain our contact details (phone numbers, email addresses, twitter handles etc.) or they need to track us so they can target their communication by knowing where we are, what we’re doing, or what device we’re carrying (here’s a great link to a recent New York Times article entitled ‘That’s No Phone. That’s My Tracker’).
The second is about identity theft. That is, someone we don’t know using our personal data in order to access our money, our government benefits or citizen rights (for example using our passport information to get into the country). Sometimes the data is obtained through phishing, and sometimes it’s hacked. Experian recently released a report showing that more than 12 million pieces of personal information were illegally traded online by identity fraudsters in the first four months of 2012 – outstripping the whole of 2010 (interestingly, about 90% of it was password/login combinations). Regardless of how our personal data is obtained, it’s often being used to impersonate us without our permission.
The third is more interesting – it’s about permitting, or seeking to have control over information about us which is shared with others. Naturally, we’re pretty good at doing this for our physical selves – we use clothes and curtains to keep private what we don’t want other people to see. But when it comes to personal information it’s different. What are the ‘clothes and curtains’ for our personal information? Is it even possible?
The thing is, information has some interesting characteristics. George Bernard Shaw once said (something like): “if you have an apple and I have an apple and we exchange these apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas.” His point was that some things behave as if they’re abundant. It doesn’t matter how many times you copy them and share them, the original remains the same, as do the copies. These things are known as ‘non-rival’ goods. This idea of abundance is a powerful one, because it helps explain how we treat abundant things.
For a long time, sharing things was limited to people who were in the same place at the same time, or limited to those who could write things down, copy them and take the bit of paper or parchment away. In other words, there was a cost to sharing, a friction to sharing. And so sharing was contained, for better or for worse. But then the printing press came along, then later the telephone and more recently the Internet, and we’ve been able to copy information at an increasingly low cost. In fact today, the costs to copy are pretty much zero – as Kevin Kelly brilliantly puts it, “The internet is a copy machine”.
Anyway, sharing our digital information has now become so easy and so cheap that all day, every day we share things without thinking. And like Bernard Shaw’s ideas, we’re now sharing our personal data abundantly – perfect copies of this data can be made and shared widely at pretty much zero cost. And this abundance of sharing begins to erode our sense of relationship with those we share our personal data with. Where, how, why and when it is shared is often unclear to us. And with the loss of these relationships, we’ve lost trust in how that data is handled; people started contacting us without permission, impersonating us without permission and sharing information about us without permission.
Protecting our privacy
Let’s take an example to bring this to life a bit. Earlier this year, much was written about how Target, a goods retailer in the US, figured out a teenage girl was pregnant before her father did.
Aside from the fact that there are some social and ethical issues to be explored here, the point is that whilst Target were correct in their analysis, they contacted the girl about the pregnancy without her permission, and they exposed her personal data without her permission. As we go about our daily lives we leave a digital exhaust – a digital footprint – and our personal data is often left behind like Hansel and Gretel’s breadcrumbs. Track enough of it (like what lotions and vitamin supplements you buy) and compare this with other known group behaviours (like those who you know are pregnant and who are buying baby clothes, nappies and pregnancy books) and of course it’s possible to make some accurate assumptions about an individual. I’ve previously called this your ‘inferred data’. So it’s understandable that we’re becoming more wary about what is being shared – both with and without our permission – and we’re seeking to protect our privacy to avoid these unwanted outcomes.
To look more closely at how we protect ourselves, I’ve broken down the lifecycle of personal data:
- Data is produced (or observed if it’s self-evident);
- Data is captured and stored;
- Data is analysed or processed; and then
- Data is used
Here’s an example of this in action…
- I wear clothes that expose my Harley-Davidson tattoo
- My tattoo is seen by the man serving me at the bar
- The barman makes an assumption – that I’m into biking and believe in what Harley-Davidson stands for
- The barman strikes up a conversation about bikes, and because he too is into bikes, we share information about each other. The result is that we start to trust each other. We form a relationship. He might even give me a beer on the house.
Now let’s take a more obvious digital example…
- I browse the web using an internet browser
- Using cookies, my browsing activity is tracked by the web sites I visit
- My behaviours are analysed – both in real time and afterwards
- My subsequent web browsing is targeted with ads to better ‘personalise’ the service. Importantly, the targeted ads are paid for by companies trying to build a relationship with me. But it’s not really a relationship. And there’s no trust. It’s really just a transaction at best, and I’m seen as a sales lead to be sold on
This use of my personal data means I get a better experience (like remembering my ‘shopping basket’) and sometimes I get a good deal on my purchases. But mostly it just makes my browsing experience a bit noisy, because the ‘targeted’ ads are assumption-based and are often more miss than hit. These two examples highlight how it’s the context of sharing that determines the permissions to share – some are explicit, while others are implicit – and therefore the outcomes, i.e. stronger relationships and lower prices, or instead a loss of trust and shopping frustration. As we live more and more of our lives online these issues have become increasingly apparent, and there are now many groups and bodies looking at the social, ethical, economic and political issues surrounding personal data.
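The four lifecycle steps map onto the browsing example something like this – a toy sketch in Python, where all the names, IDs and categories are mine, not any real ad platform’s:

```python
from collections import Counter

def capture(page_visits):
    """Step 2: sites record browsing activity against a cookie ID."""
    return {"cookie_id": "abc123", "visits": list(page_visits)}

def analyse(store):
    """Step 3: behaviour is reduced to a couple of inferred interests."""
    counts = Counter(v["category"] for v in store["visits"])
    return [category for category, _ in counts.most_common(2)]

def use(interests):
    """Step 4: subsequent browsing is targeted with ads based on those
    inferences -- assumption-based, so often more miss than hit."""
    return [f"ad:{interest}" for interest in interests]

# Step 1: data is produced simply by browsing.
visits = [
    {"url": "/bikes/helmets", "category": "motorcycling"},
    {"url": "/bikes/jackets", "category": "motorcycling"},
    {"url": "/gifts/books",   "category": "books"},
]
ads = use(analyse(capture(visits)))  # ['ad:motorcycling', 'ad:books']
```

Note the book ad at the end: one gift purchase and the pipeline happily infers an interest, which is exactly the guesswork problem.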
I see that these projects fall into two camps… The first are looking at who knows what about us – in other words, steps 1 and 2 above. For example there is lots of work going into making the public aware of exactly how much data is being captured about them, by whom and for what purpose. The second group are looking at how this data is handled once it’s captured; that’s steps 3 and 4.
Privacy in action
Now rather than delve into the ins and outs, rights and wrongs of digital privacy (not least because there are many more qualified people than I who have written credibly about it, and at length), I wanted to point to some of the main activities aiming to help us manage our personal data and avoid those unwanted outcomes I suggested at the start of this post.
Below is a list of some of the main things going on around personal data; I’ve broken them down into the stages of the personal data lifecycle, steps 1-4. (Note that some of these are links to specific projects, and others are just linked to sites that provide more information)…
Who’s in control?
A big part of sharing our personal data is the bargain we make with online services when we agree to give up a bunch of data in return for some utility – a better deal, access to my friends’ information, accurate search results and more. Cory Doctorow highlights one of the great underlying issues here when he points out that “…even if you read the fine print, human beings are awful at pricing out the net present value of a decision whose consequences are far in the future.”
So I would suggest that we’re sharing our data abundantly, and not really ‘pricing in’ the full cost of doing so. The thing is, culturally we’re so much more comfortable with scarcity. When things are scarce we value them more highly, and when things are abundant we treat them cheaply (in Clay Shirky’s words, abundance means ‘cheap enough to waste’ and therefore ultimately ‘cheap enough to experiment’). And so it is with our data – we value it and so want it to remain scarce. Our instinct is to hold on to it, restrict it, secure it and sometimes misdirect others around it (like when we give out a fake email address to avoid getting spammed). And yet we give so much of it away, not really fully aware of the T&Cs under which we agree to share it. This pretence of scarcity means we end up saying things like ‘who owns the data?’ or ‘who controls the data?’, something pretty much impossible once it’s been shared in this digital age.
In my view, we should instead reflect on the idea that our personal data is now in many ways a non-rival good, it’s abundant, and perhaps behave differently around it. That would mean we would instead ask things like “who has access to the data?” and “what are people doing with my data?”. It would mean new terms and conditions for sharing, perhaps those under which we can feel more confident about how our data is being used, and under which we can benefit from the products and services exchanged. Sharing would be more transparent, and we’d have the right to take action if our data is incorrect, or there’s an abuse of the data. Once we get some degree of visibility of who has our data, in what format, why and how they are using it, I think something interesting will happen: trust will emerge. And with that trust, new relationships. Indeed, we may begin to actually share more – an idea already proposed by those looking at Volunteered Personal Information. And as we share more – under clear and transparent terms – everyone will win: new products and services will become available (think of patientslikeme.com but for everything), our existing services will get even better because they will matter to us (and not be based on guesswork), and guess what, we’ll feel better about it all because there won’t be a sense of any hidden agenda with our personal data, which after all, is personal.
A couple of suggestions
So I’d say that we need two main changes to how we behave around our personal data:
- We need to recognise that we can’t control data in every circumstance: instead, let’s accept that and turn to ways to improve transparency – information sharing agreements, regulation requiring organisations to be clear about what data they gather and how they use it, and perhaps new ways to make us more aware of what we’re sharing in the first place so we can make informed decisions
- We need to better understand personal data in context: what it is we really need to share, when and with whom (here’s a good example: to prove we are old enough to buy alcohol, we often use a document that proves we can drive. We can and should get better at using personal data in context – we only need to share what we need to share)
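The driving-licence example can be sketched in code: rather than handing over a whole document, derive the single claim the situation actually needs. This is a hypothetical, unsigned illustration – in a real system the claim would be issued and signed by a trusted party:

```python
from datetime import date

def over_18_claim(date_of_birth: date, today: date) -> dict:
    """Derive a minimal yes/no claim for an age check. The verifier sees
    only the answer, never the date of birth (or anything else from the
    underlying document)."""
    had_birthday = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
    age = today.year - date_of_birth.year - (0 if had_birthday else 1)
    return {"claim": "over_18", "value": age >= 18}
```

The same pattern – share the answer, not the data it’s derived from – applies just as well to proving residency, solvency or membership.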
I’m hopeful that much of this is on the way. But there’s a lot more to do.