The recent media coverage of data analytics firm Cambridge Analytica sparked furious global debate about the pilfering of more than 87 million people's personal Facebook data without their permission. Are we really surprised by this?
Perhaps because I work in the tech industry as a data scientist, I see the endless possibilities of how data can be used to influence every pocket of life. It is both exciting and terrifying. But it is hardly surprising when a company that exists to use data to influence behaviour starts pushing, and overstepping, the boundaries. At one extreme, big data can be used for the greater good; at the other, it can be misused to steal someone's identity. Somewhere in between lies the act of influencing attitudes and behaviour, which is essentially what marketing is all about.
Whether the purpose is good or otherwise, using data that individuals or society have not agreed to share is unacceptable. That perception of public support is what we call social license: whether or not a company has technical or legal license to use data, social license is about whether society deems that use acceptable.
As people are only starting to become aware of the potential uses and consequences of sharing their own data, it’s time to put a spotlight on what data sharing means for the innocent masses – you and me.
Data sharing is currently perceived by many as an opening for companies to influence your behaviour, opinions and beliefs, a perception reinforced by events like Cambridge Analytica using 87 million people's data to influence them through Facebook.
Traditionally, consumers have agreed to share their data with companies they have a relationship with, usually because they gain something in return like using an app or service – sometimes for free. As digital products and services become less about ownership and more about convenience, people are starting to view data sharing in terms of the benefits they receive. Therefore, to gain social license for data sharing, an organisation or industry needs to deliver and communicate the benefits to their users.
As the volume of data being collected from a variety of sources continues to increase, it is becoming harder to trace who can see what, and a lack of transparency from tech companies isn’t helping.
Trust is now more important than ever. Facebook earned nearly $40 billion in advertising revenue last year, partly thanks to its ability to analyse the massive amount of data it has collected about its users. The biggest gap in trust is that Facebook users are not aware of how their data is being shared and with whom. In a recent Reuters survey, 51% of respondents said they did not trust Facebook to protect their privacy and data.
What happened with Facebook and Cambridge Analytica was essentially a violation of trust and of people's privacy. Their data was shared for purposes they were not informed about; now that users are aware, they can take action by deleting their accounts or updating their privacy settings.
In healthcare, recovering from an incident like this is not so easy. Once personal health data is out there, it cannot be removed or changed; it remains true for that person. A diagnosis of a health issue, for example, can have lasting implications for someone's life and could be used against them.
Unlike a hacked credit card, which you can cancel and start over, a leak of personal health data can cause damage that is potentially irreversible. With new research in medicine, and genomic testing becoming more available and accurate, insurance companies and employers may want access to this information in the future, and it would be naïve at best to assume that their primary interest is the benefit of the individual.
A philosophical debate has evolved about social license: the ongoing public support for what a company, government or industry is doing. As citizens, we grant a social license to institutions and governments to act in certain ways that we deem beneficial, and we tend to put both legal and cultural limits on this. Data science involves discovering patterns in data, usually in an effort to predict or influence something.
For organisations performing data science, a social license requires society to believe that the benefits outweigh any risks for those whose data is involved. Ideally, this is secured through an informed consent process, but data is so often consented for one purpose and later discovered to be useful for another. Managing both a person's consent and society's acceptance of data use is a growing challenge.
To design the future we want, we need to engage the public and build a network of trust. Handling people's data comes with a responsibility to engage individuals so that they understand what data sharing means for them and why it is important. It could be argued that for a social media giant whose revenue comes mostly from advertising, people are giving up their privacy to a company that has no obligation to its users but has the power to use their data for almost anything. It may sound daunting, but there are two sides to the world of data sharing.
Healthcare, unlike Facebook, is an industry that has already established deep trust, but it also carries a lot of risk. People tend to trust their doctors because of their knowledge and expertise: they are there to help you, give advice and make you feel better. As health data sharing grows, upholding that trust will be paramount for healthcare providers if they are to build a trust network with patients.
There are many benefits of sharing your health data that should be communicated in order to engage the public in support of a social license. First and foremost, healthcare organisations could use this data to help treat entire populations. There is huge potential for data science to make the masses of health data meaningful to clinicians and patients, providing valuable insights to help them make decisions.
For instance, machine learning models can be applied to information in a patient's electronic health record (EHR) to find patterns and combinations of risk factors, giving clinicians a more holistic view of a patient. Ideally, once a person's health data is collected and stored in their EHR, deep learning could be applied to help clinicians find "patients like me"; such a model could help identify health issues early, allowing time to intervene and possibly prevent problems from worsening.
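As a rough illustration of the "patients like me" idea, here is a minimal sketch using a simple nearest-neighbour search over a handful of EHR-style risk factors. The feature names, values and similarity metric are hypothetical assumptions chosen for illustration, not a description of any real clinical system or of the deep learning models mentioned above.

```python
# Minimal sketch (illustrative only): a "patients like me" lookup over
# simplified, hypothetical EHR-style risk-factor vectors.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

# Hypothetical per-patient risk factors extracted from an EHR:
# [age, BMI, systolic blood pressure, HbA1c, smoker flag]
patients = np.array([
    [54, 31.0, 145, 7.1, 1],
    [47, 24.5, 122, 5.4, 0],
    [61, 29.8, 150, 6.9, 1],
    [38, 22.1, 118, 5.2, 0],
    [58, 30.5, 148, 7.3, 1],
])

# Put all features on a comparable scale before measuring similarity.
scaler = StandardScaler()
scaled = scaler.fit_transform(patients)

# Index the cohort so we can query for the most similar patients.
index = NearestNeighbors(n_neighbors=3, metric="euclidean").fit(scaled)

# A new patient under review, described by the same hypothetical features.
query = scaler.transform([[56, 30.2, 147, 7.0, 1]])
distances, neighbour_ids = index.kneighbors(query)

print("Most similar patients (row indices):", neighbour_ids[0])
print("Distances:", np.round(distances[0], 2))
```

In practice, the value would come from surfacing how those similar patients were treated and what their outcomes were, giving clinicians context for decisions about the patient in front of them.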
We are heading towards a health data boom where, in just over a decade's time, people's data will be collected, stored and analysed to provide insights that could potentially impact whole populations.
Predictive medicine can save hospitals money by intervening with high-risk patients before their care becomes too costly for the health system. This of course also provides a better patient outcome and experience, as they can be cared for outside the hospital, either at home or in their community. If hospitals are operating efficiently, not at maximum capacity or under-resourced, imagine the level of care they could provide to those who really need it.
The foundation for this is already well underway with numerous data science research projects contributing to the global precision health movement.
There is more than one side to the story when it comes to sharing people's data. Collaborations between data scientists, researchers, commercial parties, clinicians and industry specialists are highly valuable in exploring what is possible with health data. This is only the beginning of the data science journey for health, so it is up to us to get it right.
The original article can also be found at CIO magazine.