Using personalised services without privacy loss: what solutions does technology have to offer?
Published on 15 March 2019 | Télécom SudParis
Online services are becoming more and more personalised. This transformation, designed to satisfy the end-user, can be seen as an economic opportunity, but also as a risk, since personalised services usually require personal data to be effective. These two perceptions hardly seem compatible. Maryline Laurent and Nesrine Kaâniche, researchers at Télécom SudParis and members of the Chair Values and Policies of Personal Information, tackle this tough issue in this article. They give an overview of how technology can solve this equation by allowing both personalisation and privacy.
Personalised services have become a major challenge in the IT sector, as they require actors to improve both the quality of the data they collect and their ability to use it. Many services are engaged in this innovation race, notably those related to corporate information systems, government systems, e-commerce, access to knowledge, health, energy management, leisure and entertainment. The aim is to offer end-users the best possible quality of experience, which in practice means assessing the relevance of the information provided and continuously adapting services to consumers' uses and preferences.
Personalised services offer many perks, including targeted recommendations based on interests, events, news, or special offers for local services and goods, movies, books, and so on. Search engines return results that are usually personalised based on a user's profile, and actually start personalising as soon as a keyword is entered, by identifying its semantics. For instance, the noun 'mouse' may refer to a small rodent if you're a vet, a stay mouse if you're a sailor, or a device that moves the cursor on a computer screen if you're an Internet user. Mobile phone applications in particular rely on personalisation; health and wellness apps (e.g. the new FitBit and Vivosport trackers) can come in very handy, as they offer tips to improve one's lifestyle, help users receive medical care remotely, or warn them of any possible health issue they detect as being related to a known illness.
How is personalisation technologically carried out?
When surfing the Internet and using mobile phone services or apps, users are required to authenticate. Authentication makes it possible to link their digital identity with the personal data saved and collected from their exchanges. Some software packages also include trackers, such as cookies, which are exchanged between a browser and a service provider, or even a third party, and make it possible to track individuals. Once an activity is linked to a given individual, a provider can easily enrich their profile with personal data, e.g. preferences and interests, and run efficient algorithms, often based on artificial intelligence (AI), to supply them with a piece of information, a service or targeted content. Sometimes, although more rarely, personalisation may rely solely on the situation a user is in: the simple fact that they are geolocated in a certain place can trigger an ad or targeted content to be sent to them.
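To make this mechanism concrete, here is a minimal sketch, with hypothetical names and no real provider's code, of how a cookie-style identifier can link successive visits to a single profile and feed a naive recommender:

```python
import uuid
from collections import Counter

# Toy profile store: tracker id -> interest counts. In practice the
# identifier travels in a cookie set by the provider (or a third
# party) and is sent back by the browser on every request.
profiles: dict[str, Counter] = {}

def issue_tracker_id() -> str:
    """Simulate the identifier a server sets in a cookie on a first visit."""
    return uuid.uuid4().hex

def record_visit(tracker_id: str, topic: str) -> None:
    """Each page view enriches the profile linked to the cookie."""
    profiles.setdefault(tracker_id, Counter())[topic] += 1

def recommend(tracker_id: str) -> str:
    """Naive personalisation: push content on the dominant interest."""
    profile = profiles.get(tracker_id)
    if not profile:
        return "generic home page"
    topic, _ = profile.most_common(1)[0]
    return f"targeted content about {topic}"

cookie = issue_tracker_id()
for page in ("sailing", "sailing", "health"):
    record_visit(cookie, page)
print(recommend(cookie))  # -> targeted content about sailing
```

Real trackers are far more elaborate, but the principle is the same: a stable identifier turns isolated events into one accumulating profile.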
What risks may arise from enhanced personalisation?
Enhanced personalisation creates risks, for users in particular. Based on geolocation data alone, a third party may determine that a user visits a specialised medical centre to treat cancer, or that they often spend time at a legal advice centre, a place of worship or a political party's local headquarters. If such personal data is sold on a marketplace[1] and thus made accessible to insurers, credit institutions, employers and lessors, its use may breach the user's privacy and freedom of movement. And this is just one kind of data. Were it to be cross-referenced with a user's pictures, Internet clicks, credit card purchases and heart rate, what further behavioural conclusions could be drawn? How could those be used?
One example that comes to mind is price discrimination,[2] i.e. charging different customers different prices for the same product or service according to their location or social group. Democracies can also suffer from personalisation, as the Cambridge Analytica scandal has shown: in April 2018, Facebook acknowledged that the personal data of up to 87 million users had been harvested and used for targeted political messaging during the 2016 U.S. election.
Responsible vs. resigned consumers
As pointed out in a survey carried out by the Chair Values and Policies of Personal Information (CVPIP) with the French audience measurement company Médiamétrie,[3] some users and consumers have adopted data protection strategies, in particular by using software that prevents tracking or enables anonymous online browsing. Yet this requires a certain effort on their part. Depending on their purpose, they choose either a personalised service or a generic one, thereby gaining some control over their informational profile.
What if technology could solve the complex equation pitting personalised services against privacy?
Based on this observation, the Chair's research team carried out a scientific study on Privacy Enhancing Technologies (PETs). In this study, we list the technologies best able to meet the needs of personalised services, describe them technically and compare them. As a result, we suggest classifying these solutions into eight families, grouped in turn into the following three categories:
- User-oriented solutions. Users manage the protection of their identity themselves, by downloading software that allows them to control outgoing personal data. Protection solutions include attribute disclosure minimisation and noise addition (see the sketch after this list), privacy-preserving certification,[4] and secure multi-party computation (i.e. calculations distributed among several independent collaborators).
- Server-oriented solutions. By nature, any server we use is deeply involved in personal data processing. Several protection approaches therefore focus on servers, which can anonymise databases before sharing or selling data, run heavy computations on encrypted data at the customer's request, implement solutions that automatically destroy data after a certain amount of time, or deploy Private Information Retrieval (PIR) solutions, non-intrusive search tools that confidentially return relevant content to customers (a toy PIR example also follows this list).
- Channel-oriented solutions. What matters here is the quality of the communication channel connecting users with servers, be it intermediated and/or encrypted, and the quality of the exchanged data, which may be deliberately degraded. Such solutions take two approaches: securing communications and using trusted third parties as intermediaries in a communication.
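As an illustration of user-side noise addition, here is a minimal sketch of randomised response, a classic local perturbation scheme (the parameter p_truth and the helper names are illustrative, not taken from the study): each user perturbs their own answer before it leaves their device, yet an aggregator can still de-bias the population statistic.

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Noise addition on the user's device: answer truthfully with
    probability p_truth, otherwise answer with a fair coin flip, so
    no single reply reveals the user's real attribute."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_rate(noisy: list[bool], p_truth: float = 0.75) -> float:
    """The aggregator de-biases the observed statistic using
    E[yes] = p_truth * rate + (1 - p_truth) * 0.5."""
    observed = sum(noisy) / len(noisy)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# 10,000 simulated users, 30% of whom hold a sensitive attribute.
truths = [random.random() < 0.3 for _ in range(10_000)]
answers = [randomized_response(t) for t in truths]
print(round(estimate_rate(answers), 2))  # close to 0.30
```

The design trade-off is visible in the parameter: a lower p_truth gives each user stronger deniability but makes the aggregate estimate noisier.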
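In the same spirit, the server-oriented Private Information Retrieval idea can be sketched with a classic two-server XOR scheme, assuming the servers hold the same database and do not collude; neither server learns which record the client fetched. Names and record sizes here are illustrative only.

```python
import secrets
from functools import reduce

# The same toy database is replicated on two non-colluding servers.
DB = [b"rec0", b"rec1", b"rec2", b"rec3"]  # fixed-size records

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db: list[bytes], query: list[int]) -> bytes:
    """A server XORs the records selected by the query bits; a random
    bit vector on its own reveals nothing about the target index."""
    selected = (rec for rec, bit in zip(db, query) if bit)
    return reduce(xor_bytes, selected, b"\x00" * 4)

def retrieve(index: int) -> bytes:
    """Client side: a random query for server 1, the same query with
    the target bit flipped for server 2."""
    q1 = [secrets.randbelow(2) for _ in DB]
    q2 = q1.copy()
    q2[index] ^= 1
    a1 = server_answer(DB, q1)  # sent to server 1
    a2 = server_answer(DB, q2)  # sent to server 2
    # XORing both answers cancels every record except DB[index].
    return xor_bytes(a1, a2)

print(retrieve(2))  # b'rec2'
```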
Some PETs fit squarely within the 'data protection by design' concept, as they implement data disclosure minimisation or data anonymisation, as required by Article 25-1 of the General Data Protection Regulation (GDPR).[5] Data and privacy protection methods should be implemented at the earliest possible stages of designing and developing IT solutions.
Our state-of-the-art review shows that using PETs raises many issues. Through a cross-cutting analysis linking CVPIP specialists' different fields of expertise, we identified several of these challenges:
- Using AI to better include privacy in personalised services;
- Improving the performance of existing solutions by adapting them to the limited computing capacities of the mobile devices running personalised services;
- Looking for the best economic trade-off between privacy, use of personal data and user experience;
- Determining how much it would cost industry players to include PETs in their solutions, in terms of development, business model and adjustments to their Privacy Impact Assessments (PIAs);
- Assessing PETs as a way of bypassing or enforcing legislation.