As the health care industry begins to use new technologies such as predictive analytics, government health agencies, doctors, and primary health providers must be aware of risks and agree on standards.
Technology is playing an integral role in health care worldwide as predictive analytics has become increasingly useful in operational management, personal medicine, and epidemiology. This article will delve into the benefits of predictive analytics in the health sector, the possible biases inherent in the algorithms (and the logic) behind it, and the new sources of risk emerging from a lack of industry assurance and the absence of clear regulations.
Health care has a long track record of evidence-based clinical practice and ethical standards in research. However, the extension of this into new technologies such as predictive analytics, the algorithms behind them, and the point at which a machine process should hand over to a human mental process is not clearly regulated or controlled by industry standards. Government health agencies, doctors, and primary caregivers need to be aware of the emerging risks and agree on levels of assurance as society moves into a new era of decision-making supplemented, and at times replaced, by evidence from digital technologies. More specifically, this paper will look at the various ethical issues and moral hazards that need to be navigated following the adoption and use of predictive analytics in the health care sector, with an emphasis on accountable algorithms.
Predictive analytics can be described as a branch of advanced analytics used to make predictions about unknown future events or activities that inform decisions.
It is a discipline that draws on various techniques including modelling, data mining, and statistics, as well as artificial intelligence (AI) such as machine learning, to evaluate historical and real-time data and make predictions about the future. These predictions offer a unique opportunity to identify trends in patient care, both at an individual level and at a cohort scale.
Predictive analytics is based on logic drawn from theories developed by humans to fit a hypothesis (supervised learning). A set of rules and processes is developed into a formula that performs calculations and is known as an algorithm. Predictive analytics can also be based on unsupervised learning, which does not have a guiding hypothesis and instead uses an algorithm to seek patterns and structure in data and cluster them into groups or insights. In unsupervised learning the machine may not know what it is looking for, but as it processes the data it starts to identify complex processes and patterns that a human may never have identified, and can therefore add significant value for researchers looking for something new. Both supervised and unsupervised predictive modelling are valid analytical tools in a well-rounded application of these technologies.
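To make the distinction concrete, the following minimal sketch (in Python, using scikit-learn on synthetic data) contrasts the two approaches: a supervised model fitted to a human-defined label, and an unsupervised algorithm left to find its own groupings. The features, labels, and thresholds are illustrative assumptions, not a clinical model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features for 500 patients: age (years) and chronic condition count.
X = np.column_stack([
    rng.normal(55, 15, 500),
    rng.poisson(1.5, 500),
])

# Supervised learning: a human-defined hypothesis supplies the label,
# e.g. "readmitted within 30 days" (simulated here purely for illustration).
y = (0.03 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, 500) > 3.5).astype(int)
supervised = LogisticRegression().fit(X, y)
print("Predicted readmission risk for a 70-year-old with 3 conditions:",
      supervised.predict_proba([[70, 3]])[0, 1])

# Unsupervised learning: no hypothesis or labels; the algorithm looks for
# structure in the data and groups similar patients into clusters.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster assignments for the first 10 patients:", clusters[:10])
```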
Predictive analytics is increasing in its application and has been very useful in various industries including manufacturing, marketing, law, crime, fraud detection, and health care. The health care sector, with its many stakeholders, stands to be a key beneficiary of predictive analytics, with the advanced technology being recognised as an integral part of health care service delivery. This paper will look at the various moral and ethical hazards that need to be navigated by government agencies, doctors, and primary caregivers when leveraging the potential of predictive analytics.
With new technologies come new risks. This paper will evaluate various scenarios in the use of predictive analytics with a particular focus on service delivery within health care.
Two of the most disruptive factors in recent times are the rise of the internet and the smartphone. Together they have given people around the world access to a large repository of knowledge and information at their fingertips. They have transformed industries, including arguably the most regulated and traditional of them all, health care, which is undergoing drastic change.
The move toward the adoption of technology in the health care sector has had a tremendously positive impact on medical processes along with the practices in which health care professionals engage.1
Some of the key milestones include the digitisation of health records, access to big data and storage in the cloud, advanced software, and mobile application technology. All of these milestones have brought various advantages to the health care sector, including easier workflows, faster access to information, lower health care costs, improved public health, and an overall improvement in quality of life.
They have also aided in significantly reducing health care wastage and in the development of new drugs and treatments, along with helping to avoid preventable deaths.2 Going forward, technology will continue to play a fundamental role in improving the health of people and reducing mortality rates among people of all age groups. Predictive analytics will play a central role in this.
One emerging risk for predictive analytics is the centralisation of data, which raises serious concerns about the security and integrity of that data. Given the increasing amount of data stored in the cloud or otherwise accessible via the internet, there is a persistent threat of hacking by individuals with malicious intent. There are also ethical issues to be considered, given the role cloud technology plays in predictive analytics and the overall outcome.3 In this article, we focus on the ethical issues and leave the security of data and the cloud to another time.
To better understand the various possibilities of predictive analytics in health care, it is first important to acknowledge the different ways through which health care can benefit from this discipline. These include operational management such as the overall improvement of business operations; personal medicine to assist and enhance accuracy of diagnosis and treatment; and cohort treatment and epidemiology to assess potential risk factors for public health.
Predictive analytics allows for the improvement of operational efficiency. Big data and predictive analytics are currently playing an integral part in health care organisations’ business intelligence strategies. Real-time reporting is relatively new but can provide timely insights into data and can be used to dynamically adjust the predictive algorithms in line with new discoveries and insights.
This technology allows the scrutiny of historical and real-time patient admittance rates to determine ebb and flow, while also providing the capability to evaluate and analyse staff performance in real time. As an example, surge issues in hospitals that create bed shortages may be addressed if the data provides insights that can be used to prevent the issue from occurring in the first place. Extra staff may be deployed to a ward, or a seasonal occurrence may enable prior planning to deal with the issue before it arises. This, in turn, allows for the overall improvement of service delivery to patients, helping to ensure that they receive the best possible quality of care. Patients can enjoy an increased accuracy of diagnoses, which in turn allows for more effective treatment of their illnesses.4
As an example in operational management, predictive analytics insights can help optimise staff levels so managers know how many staff members they should plan to have in a given health care facility to achieve optimal patient-to-staff ratios. This can be achieved by utilising historical data, overflow data from nearby facilities, population data, demographic data, reportable diseases, and seasonal sickness patterns in a predictive analytics model.
Operational management can also benefit as the technology exists to assess weather patterns such as ambient temperature readings, and calendar variables such as day of the week, time of the year, and public holidays, to forecast patients seeking care. One can estimate the volume of walk-in patients a facility is likely to receive, allowing managers to recruit and roster staff accordingly,5 helping optimise operations.
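As a rough illustration only, a forecast of this kind might be sketched as follows; the file name, column names, and choice of model are hypothetical assumptions rather than a recommended design.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical historical data: one row per day with the observed walk-in count.
df = pd.read_csv("walk_in_history.csv", parse_dates=["date"])

# Simple calendar features derived from the date.
df["day_of_week"] = df["date"].dt.dayofweek
df["month"] = df["date"].dt.month

features = ["day_of_week", "month", "is_public_holiday", "max_temperature_c"]
model = GradientBoostingRegressor().fit(df[features], df["walk_in_count"])

# Forecast a non-holiday Monday in January with a 32 degree maximum temperature.
upcoming = pd.DataFrame([[0, 1, 0, 32.0]], columns=features)
print("Expected walk-in patients:", round(model.predict(upcoming)[0]))
```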
Predictive models can also assist in the recruitment and assessment of new staff competencies. With the increased demand for aged-care services, pressure will increase on health care organisations, and especially aged-care institutions, to ensure staff are fully trained, meet competency models, and have the skills as well as emotional capacity to handle their work in a society with an ageing population. This is within a context of increased pressures on medical facilities in general. Predictive analytics needs to be handled carefully in this environment but could be applied in interviews to construct a logistic regression model from which a candidate’s performance can be predicted. This would be particularly useful when processing large numbers of applications for new roles and trying to narrow the field to a shortlist of suitable candidates.
In these roles, self-control, resilience, and leadership are key behaviours that might be useful to assess.
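A minimal sketch of what such a screening model could look like is shown below; the behavioural scores, training data, and shortlisting threshold are entirely illustrative, and any real use would need the ethical safeguards discussed in this article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical interview scores (1-10) for self-control,
# resilience, and leadership, with label 1 = candidate later performed well.
X_train = np.array([
    [8, 7, 6], [5, 4, 6], [9, 8, 7], [3, 5, 4],
    [7, 9, 8], [4, 3, 5], [6, 7, 7], [2, 4, 3],
])
y_train = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score a new applicant and keep them on the shortlist above a threshold.
applicant = np.array([[7, 6, 8]])
probability = model.predict_proba(applicant)[0, 1]
if probability > 0.6:  # the threshold is a policy choice, not a given
    print(f"Shortlist (predicted success probability {probability:.2f})")
```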
However, applying sophisticated actuarial mathematical modelling to human behaviour is complex. Humans are not machines and are less amenable to being analysed, assessed, and predicted. There are whole fields of study such as psychology, sociology, anthropology, political science, and behavioural economics, to name a few, which offer a wide range of models and approaches to consider. Mathematics underpins predictive analytics and the algorithms that drive it, and it is an integral part of contemporary social and behavioural sciences, with many of today's profound insights into human behaviour drawing on mathematical formulae. Predictive analytics can improve the efficiency of health care operations through staffing optimisation and fit, but considerable work still needs to be done before it can provide accurate insights into individual human behaviour.
From a regulation perspective, predictive risk profile models can be developed to identify the risk profile of aged-care services based on data such as pressure injuries, staff-to-patient ratios, qualified staff, wages, patient turnover, and profitability statistics. This information can highlight anomalies in the system and areas that need investigation, as well as help predict what resources and training are required for the future provision of quality patient-centred services.
Predictive analytics in the health care sector also allows for a more definitive diagnosis of patients, followed by the appropriate treatment of the identified ailment(s). For hospitals this can mean a significant optimisation of operations and a reduction in readmissions. Predictive tools such as remote patient monitoring and machine learning can work hand in hand to support decisions made in hospitals through risk scoring as well as threshold alerts.6 This technology can allow the involved parties to proactively prevent readmissions, emergency room visits, and other adverse events.
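The following sketch illustrates the general idea of risk scoring with a threshold alert; the vital-sign fields, weights, and threshold are illustrative assumptions and not a validated clinical model.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate: int           # beats per minute
    systolic_bp: int          # mmHg
    oxygen_saturation: float  # percent

def risk_score(r: Reading) -> float:
    """Combine deviations from nominal ranges into a single 0-1 score."""
    score = 0.0
    score += 0.4 * max(0, (r.heart_rate - 100) / 60)        # tachycardia
    score += 0.3 * max(0, (90 - r.systolic_bp) / 40)        # hypotension
    score += 0.3 * max(0, (94 - r.oxygen_saturation) / 10)  # low SpO2
    return min(score, 1.0)

ALERT_THRESHOLD = 0.5  # a policy setting, reviewed by clinicians

reading = Reading(heart_rate=150, systolic_bp=78, oxygen_saturation=88.0)
if risk_score(reading) >= ALERT_THRESHOLD:
    print("Threshold alert: escalate to the care team for human review")
```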
As always, it is important to look at what truly matters for caregivers and patients. Predictive analytics has great potential, not just for patients but for caregivers as well. Going forward, it is becoming an integral component of service delivery in the health care sector, making it a necessity rather than a luxury.7 Using predictive analytics would help ensure that health care facilities can deliver exceptional services for a long time to come in an environment of population growth, while also providing patients with more timely treatment and more accurate diagnoses.
In personal medicine, predictive analytics can play a key role at the individual level, using prognostic analytics and big data to help doctors and other involved parties find cures for diseases with which they might not be familiar at a given time. It also introduces more accurate modelling of mortality rates at an individual level. Notably, it has long been known that some medicines seem to work for a specific group of people but not others. This is because people are complex and unique, and much is revealed in an individual's DNA (genome) and how it is expressed.
While it is virtually impossible for one health practitioner to manually analyse all of this information in detail, big data and predictive analytics allow the involved parties to uncover unknown correlations, insights, and hidden patterns through examining large datasets (big data) and forming predictions based on them. These can be applied effectively at the individual level, and consequently caregivers are more likely to come up with the correct treatment or drug to treat a specific illness.8
Predictive analytics in health care is also increasingly being used to advise on the risk of death in surgery based on the patient's current condition, previous medical history, and drug prescriptions, as well as to help in making medical decisions. For example, statistical tools can detect the diabetic patients with the highest probability of hospitalisation in the following year based on age, coexisting chronic illnesses, medication adherence, and past patterns of care. The University of Pennsylvania utilises predictive analytics to identify patients on track for septic shock 12 hours before the condition occurs,9 and health insurance companies are increasingly sophisticated in applying such models to assess risk.
The increasing digitisation of electronic health records and legislated performance reporting requirements for hospitals and other medical facilities provide valuable and large datasets to be able to obtain insights into the health of a community. Access to this data is closely monitored and legislated to avoid the risk of identification and to protect individuals. The drive toward open data means that pressure is increasing to release data for the common good and more and more datasets are being made available for research purposes globally and in Australia.
Predictive analytics on large population studies using volumes of health system data including geographic, demographic, and medical condition information can generate profiles of community and other cohort health patterns and inform health organisations and government agencies on where to better target interventions such as ‘quit smoking’ or ‘obesity’ campaigns, thereby increasing effectiveness. Predictions on the likelihood of disease and chronic illness based on historical data could create early interventions that aim to reduce the financial and resource load on the public health system in the future.
Data from the pharmaceutical sector could also be used to highlight clusters of diseases and disorders, and to predict and redirect supply chain requirements and resources to target demand more accurately and avoid medicine shortages.
Epidemiological studies are based on risk assessments and statistics that aim to identify and prevent illness in populations at risk. Predictive analytics can provide fast and accurate risk scores and give insights into collective health issues now and into the future. This will help to proactively identify groups of people at future risk of health issues such as disease outbreaks and cancer clusters.
There is always risk in statistical modelling and predictions. However, predictive analytics introduces a new source of risk: the technology increases the pace of the decision-making process, and the exact point at which a decision needs to be handed over from a machine to a human mental process is usually unclear and unregulated. Algorithms behind computer processes are prone to bias unless very clear risk controls and assurance processes are actively engaged. Computer systems reflect the implicit values of the people coding and training them, and accountability for coding and training algorithms is currently neither regulated nor consistently applied across the industry.
There are a significant number of ethical dilemmas and an emerging moral hazard, resulting in increased risks, to be aware of when applying predictive analytics to the health care industry. Some of these risks are thousands of years old and are amplified by the faster decision-making processes that come with digital disruption; others are emerging as technology and analytics become more prevalent. This article briefly touches on a number of significant issues, each of which could warrant its own detailed article.
The health care landscape is complex and difficult to navigate. Our ethical responsibilities in a given situation depend in part on the nature of the decision and in part on the roles we play. A patient and a family member play different roles and have different ethical obligations to each other than a patient and their doctor. These obligations are entrenched in key ethical principles that reach as far back as the origins of the Hippocratic Oath. Developed by the ancient Greek doctor Hippocrates, it is the earliest known expression of medical ethics, requiring new doctors to swear to uphold ethical standards and abstain from wrongdoing and harm. In this paper, it is assumed that the majority of caregivers and family members, as well as the allied health system, aim to align with Hippocratic-based ethics, with an additional modern emphasis on patient autonomy, privacy, and respect.
Within the context of predictive analytics, the key considerations are that respect, privacy, autonomy, and doing no harm remain the accepted ethical principles, and that moral hazard is an extension of these.
Change is happening at a faster pace than ever before globally. The term digital disruption has arisen to capture just how fast everything is changing on the back of new technologies. The old way of doing things is not only changing, it is changing at speeds that are often difficult to keep up with. The way we do things and the way we think are being uprooted by all the digital choices we now have. We can book medical appointments on our phone, see a doctor online, order clothes online, and even apply for a personal loan online through crowdsourcing. Most of this was not possible 10 years ago. Day-to-day ways of working, human relationships, and even recreational pastimes are changing.
The health care industry is not immune. Doctors are now under pressure to combine clinical personal care with data capture. Traditional doctor-patient relationships are affected as the doctor needs to capture information digitally from patients as much as possible, mixing on-the-spot personal care and human touch with machines and data entry. One challenge is finding a balance between patient care and data capture within the traditional allotted appointment times whilst maintaining a trusted doctor-patient relationship. The storage of the data is also a potential risk and can lead to loss of trust if breached, as shown recently by the large number of people (reportedly over 1 million) who opted out of the Australian government's move to an electronic health record system. Not everyone will trust the security of the data being kept by their doctor.
Doctors are increasingly finding they need to continually evolve their computing skills as technology systems become more and more sophisticated and are linked with the ability to read and interpret information such as pathology reports from digital sources. As an example, X-rays are rarely held up to light boxes any more but are available on software systems on a doctor's desktop computer or laptop.
Patients are also driving the disruption with new expectations. In other parts of life, patients are offered convenience, real-time information, value for money, and options when considering services. This has led to an expectation of the same from their health care providers, resulting in online doctor services, self-help, instant payment of rebates, and choices such as home health care. Patients now see more data capture than ever before and are increasingly aware that treatments might be tailored more specifically to their DNA and health history.
Digital disruption is not necessarily moving at the same pace across the entire medical industry. Pockets of care are still heavily reliant on traditional approaches such as the reliance on paper records with associated data quality and linkage issues. However, the overall pace of change is accelerating and is having a considerable impact on the sector. With this comes emerging ethical issues that need to be addressed and which are outlined in further detail in this article.
Moral hazard has roots in many areas, including behavioural economics and the insurance industry. Essentially, people are considered likely to undertake riskier behaviour if they think they have a safety net; a key example is that people who are insured are inclined to take more risks than they otherwise would. For example, a worker becomes less diligent about safety issues on a work site because they know they are covered by labour accident insurance if something untoward should happen. Essentially, risk is transferred to someone else (the social fund), thereby adversely modifying the behaviour of the insured person.10 The transfer of risk and liability within the medical industry is complex, and this risk, combined with the possibility of misdiagnosis by a machine, adds to the complexity that needs to be addressed when integrating predictive analytics into health care.
This could increase risk in health care if, for example, a doctor relies on a computer to give a diagnosis over their own assessment. They may take more risks because they believe they are protected with the computer being accountable and bearing the cost of the risks. This challenges the ethics of respect and doing no harm, with the key decisions being outsourced to a machine and the accountability lines being blurred in the diagnosis and treatment plan.
The accuracy of the machine may be proven to be higher than that of the doctor, but if a doctor relies solely on the machine, it is questionable whether the doctor is doing no harm, for multiple reasons. Various ethicists argue that the human touch is vital in recovery and that outsourcing decision-making in health care to machines is not respectful. The successful use of predictive analytics in health care needs to consider the importance of aligning with accepted ethical standards and the intervention points at which the human touch or an empathic human decision is more critical than a machine's.
The effectiveness of predictive analytics in the health care sector comes down to the roles of the different stakeholders within it. One area that could raise a moral hazard is the role of the doctor. Previous research has highlighted that the most significant ethical challenge of predictive analytics is its potential to affect the role of the doctor.
It is noted that predictions of adverse medical events by predictive analytics models can promise greater accuracy than prognostication by clinicians.11 However, reliance on such models may be called into question without clear documentation of the point at which the machine-based decision is handed over to a human mental process. To avoid complications along the way, doctors and caregivers should capture data and discuss treatment pathways in detail with patients as usual and, as part of this treatment process, clearly track the decision-making points between the human and the machine.
New skills will be required to work hand in hand with technology. The mastering of these skills will need to include at what point a caregiver decides to deviate from a machine-based recommendation and back their own judgement, observations, and experience as well as mastering excellent communication with their patients and their families.12 This will help support the decision-making process, ensuring caregivers do not rely solely on the safety net of trusting the machine but instead continue to apply a human mental process to diagnoses, with the machine aiding their accuracy but not overriding their judgement.
This will help doctors and caregivers to manage the trade-offs that are involved in different clinical outcomes, even while taking into consideration the predictions made using relevant models. The ideal outcome is that these models are our tools and not our masters13 and should be used in conjunction with a human mental decision-making process.
Moral hazard and liability in predictive analytics can also involve lawsuits. Case law points out that doctors can be held accountable for injury that could have been avoided had they more carefully reviewed their patients' medical records. To avoid such outcomes, predictive analytics models may be of positive use for all parties if they are integrated into existing decision support systems. Medical negligence lawsuits may increase if patients feel a doctor overrode a machine's recommendation. Liability may also arise if a doctor follows a predictive analytics model's recommendation and it contains an error. To reduce the risk, doctors should not become complacent and need to document their decision-making processes, clearly articulating when their judgement overrides the machine in as much detail as possible.
The use of predictive analytics in health care and society in general is evolving, and the best approach is to view this new capability as a useful tool that augments and assists the human decision-making process rather than replacing it. Adhering to models in predictive analytics should be discretionary and not binding. Doctors need to be able to override the diagnosis or recommendation when their judgement indicates it is appropriate to do so. A clear risk model should be available to assess all factors and assist the decision, as the use of machines in patient diagnosis and care works best when integrated with a human mental process.
Choice architecture is another aspect to consider within the ethics of risk assessments for predictive analytics. Predictive models provide a series of results based on data. Assumptions are built into these data, and options provided by predictive analytics will carry risk scores.
Health care providers need to assess the options from the analytics results and present patients with choices. Choice architecture is a behavioural economics concept that aims to provide interventions that influence people without impacting their freedom of choice.14
Information from the predictive technology is designed to help providers and patients with more accurate diagnoses and clearer findings for decision-making about treatments. The way the information from the analysis is presented to the patient may influence their decision, so both caregivers and analysts involved in predictive modelling need to be aware of the risks of presenting the information and consider choice architecture frameworks when designing communications with patients.
If the treatment options carry risks, then how the information is presented can itself become an issue, with complexity revolving around the preferences and values of the health care provider and the patient. These are tough decisions, and doctors need to be able to apply a mental process to the predictive analytics and feel able to override recommendations in order to present choices based on the unique circumstances of each case. These may include the mental and emotional stability of the patient, the risks of the proposed intervention, potential errors in the analytics, stakeholder opinion, potential liability, and the risk of automation bias, which occurs when a person automatically makes the customary choice even if the situation calls for another.15
Decisions about the ease of overriding the predictive model to suggest alternate treatment plans over the machine evidence should be made on a case-by-case basis and clearly documented for future liability or ethical concerns.
A potential issue with predictive analytics is the possibility of bias or unrepresentative data. Predictive analytics models require a sizable amount of data that is representative of the entire population, as opposed to a mere fraction of it. Predictive analytics in the health care sector is most useful when it is applied to benefit the large majority of the population. A challenge is ensuring equitable representation without bias.
Bias in building predictive models also needs to be addressed through the development of accountable algorithms, wherein specific decision-making processes can be traced back within the predictive analytics model. Algorithmic bias occurs when the technology reflects the attitudes and values, conscious or otherwise, of the humans coding, collecting, selecting, or using the data to train the algorithm. People often place a great deal of trust in algorithms and consider them to be neutral and unbiased. However, this is incorrect.
Most of the algorithms driving predictive analytics are developed by fallible human beings who all hold prejudices and biases, whether conscious or unconscious. Without a continuous feedback loop of improvement and genuine attempts at reducing bias, serious statistical errors can occur within predictive analytics. There is no clear legislation or policy framework in this area in Australia, so ethical issues can arise unless risk controls are put in place specifically to address bias. These controls are currently voluntary, and motivations to circumvent them might include the promise of profits or the attraction of 'an amazing find'.
Explicit attempts to write algorithms that are accountable, as well as fair and equitable, are not always at the top of the agenda when organisations are struggling to keep up with digital disruption. Government legislation and regulations do not specifically cover algorithm development or use, and rely on a system of controls that is unclear and largely voluntary. The system relies on the majority of people in technology knowing to utilise risk models that help them avoid bias, and voluntarily doing the right thing. The European Union's General Data Protection Regulation (GDPR) requires organisations to be able to explain their algorithmic decisions. However, this is open to interpretation and there are no clear guidelines for what such an explanation should look like. The mathematical and computer programming audit trail could clearly highlight any flaws in the logic of the design. However, this also needs to be considered with a social sciences approach. Is it being used in a socially acceptable way? How is this measured? Does it exploit human vulnerabilities? How was bias removed? Is the data sample suitable? Tracking the accountability trail and ethical landscape is complex.
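One practical, voluntary risk control is a subgroup audit run before deployment, comparing a model's error rates across demographic groups. The sketch below illustrates the idea; the evaluation data, grouping column, and thresholds for concern are hypothetical assumptions.

```python
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare false negative and false positive rates per subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        positives = sub[sub["actual"] == 1]
        negatives = sub[sub["actual"] == 0]
        rows.append({
            group_col: group,
            "n": len(sub),
            "false_negative_rate": (positives["predicted"] == 0).mean(),
            "false_positive_rate": (negatives["predicted"] == 1).mean(),
        })
    return pd.DataFrame(rows)

# Hypothetical evaluation set with model predictions already attached.
evaluation = pd.DataFrame({
    "sex":       ["F", "F", "F", "M", "M", "M", "F", "M"],
    "actual":    [1, 0, 1, 1, 0, 0, 0, 1],
    "predicted": [0, 0, 1, 1, 0, 1, 0, 1],
})

audit = subgroup_error_rates(evaluation, "sex")
print(audit)  # large gaps between subgroups warrant investigation before use
```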
There are examples in health care of self-regulation being implemented in the absence of government policy, such as the protocols regarding electronic blood pressure machines, which were found to have a margin of error when they were first introduced. The margin of error affected the assessed severity of a patient's hypertension, which in some cases could have meant the difference between life and death. The European Society of Hypertension International Protocol for the validation of blood pressure monitors now exists and sets a series of protocols and validations of machines for self-regulation, supplementing dedicated hypertension protocols in countries such as Britain, Australia, and the United States. This shows that if industry takes an issue seriously enough, it does not need to wait for legislation; risk controls can be introduced voluntarily.
Another ethical aspect to consider is the building and validation of the model to be used in the predictive analysis. It is important for the entire undertaking to be patient-centred, without which it could be considered unethical.16 Patient-centred care is respectful of, and responsive to, the preferences, needs, and values of patients and consumers. Dimensions of patient-centred care are generally accepted as respect, emotional support, physical comfort, information and communication, continuity and transition, care coordination, involvement of family and carers, and access to care. Projects utilising predictive analytics in health care need to align with the intent of patient-centred care to remain ethically viable.
The establishment of ethics committees in government agencies, regulatory bodies, and associations may go some way toward addressing the potential for inequality and bias when using predictive analytics in health care. Ethics committees are used in clinical trials and at some hospitals, and are well entrenched and respected in the university and research sector.
These models work well and could be adapted by organisations and government agencies to self-regulate in the absence of clear legislation so as to protect the welfare and rights of people through accountable algorithms and technologies that actively aim to avoid bias. There are examples of a few government departments and organisations setting up their own corporate ethics committees or partnering with universities. This means that the risk for bias in projects involving the design of algorithms for predictive analytics could be reduced through controls agreed via a rigorous ethical assessment exercise run through an ethics committee prior to implementation.
The standards for validation and transparency could also present some ethical issues when applying predictive analytics. Before any prognostic analytics model is utilised in medical care, it ought to be carefully appraised for effectiveness and any potential adverse consequences. It is important to establish appropriate validation standards, analysis plans, and other mechanisms that help to guarantee the integrity of the entire undertaking and the effectiveness of the analysis to be conducted.17 This includes technology-led models. In Australia the National Safety and Quality Health Service (NSQHS) Standards aim to protect the public from harm and improve the quality of health care. They describe the level of care that should be provided by health service organisations and the systems that are needed to deliver such care. However, the guidelines for technology-related projects are not as strong as those for performance reporting or clinical trials, and there is much work to be done to provide clear ethical guidelines in this space.
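As a simple illustration of one element of such a plan, the sketch below evaluates a candidate model on data held out from training against a pre-agreed acceptance criterion; the synthetic data, chosen metric, and threshold are illustrative assumptions rather than a recommended standard.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))  # hypothetical patient features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

# Hold out 30 percent of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# The acceptance criterion would be set out in the analysis plan in advance.
status = "acceptable" if auc >= 0.80 else "needs rework"
print(f"Held-out AUC: {auc:.3f} ({status})")
```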
Opening up medical data for research is not new. Insights into symptoms, diseases, and treatment patterns have been benefiting populations for a number of years. However, the amount of data being collected is larger than ever before and is growing ever faster with the move to electronic health records and faster data-sharing. Predictive analytics capitalises on data, and this needs to be collected from patients and other involved parties.
The move to digital records means there is strong growth in the amount of health care data available and in the wealth of opportunity it provides to increase wellness, but also in the rise of some serious privacy considerations. The move from paper-based to electronic patient health records has made the health care industry rich in data, and how that data is collected and interrogated is protected by the Privacy Act 1988 (Privacy Act), an Australian law that regulates the handling of individuals' personal information. The Privacy Amendment (Enhancing Privacy Protection) Act 2012 amends the Privacy Act 1988.
In Australia, data derived from individuals is protected by the Privacy Act, which precludes the release of sensitive personal information to unauthorised parties. This covers situations within the health sector where personal health information is collected from a patient, as well as situations where data derived from an individual is used in research. De-identification and encryption of data are required in order to conduct research and protect sensitive personal information; this includes access controls and security measures such as coding to ensure the privacy of individuals is retained, while encouraging data-sharing for research purposes when appropriate and possible.
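As a simple illustration of what this can look like in practice, the sketch below replaces a direct identifier with a keyed-hash pseudonym and generalises other fields before data is shared; the field names and key handling are illustrative assumptions, and real projects would follow the controls required under the Privacy Act and relevant ethics approvals.

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-custodian-only"  # never released with the data

def pseudonymise(patient_id: str) -> str:
    """Replace a real identifier with a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {
    "patient_id": "MRN-0012345",
    "name": "Jane Citizen",
    "date_of_birth": "1954-07-02",
    "diagnosis_code": "E11",   # retained for its research value
    "postcode": "2000",        # may be generalised further, e.g. to a region
}

# The research extract drops direct identifiers and generalises the rest.
research_record = {
    "pseudonym": pseudonymise(record["patient_id"]),
    "birth_year": record["date_of_birth"][:4],
    "diagnosis_code": record["diagnosis_code"],
    "postcode": record["postcode"],
}
print(research_record)
```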
The Internet of Things (IoT) advances have resulted in unprecedented levels of personal data being captured from wearable devices, social media, and even shopping patterns. This provides rich datasets for health researchers and for predicting health patterns and behaviours. The data economy means that this information that is primarily collected in the commercial sector can be made openly available for sale or use.
Predictive analytics encourages data-sharing to produce more accurate results: the bigger the datasets, the higher the likelihood of accurate predictions. This often challenges the concept of privacy and can put data at risk if it is not handled correctly in line with legislation and privacy controls. The greater reliance on technology means we need to ensure continued compliance with ethical requirements. There is strong potential in being able to use de-identified patient data to improve health services for everyone in the community. However, privacy is a very important right for a patient18 and is an important condition for other rights such as freedom and personal independence.
With the onset of advanced technological developments in the health sector, there is a need for privacy to be upheld and there are strict laws that are set up to direct health sector providers on how they should collect information about a patient’s situation. This also includes how the information is stored and how it can be used, or shared.
According to the Australian Charter of Healthcare Rights, each person involved in care and treatment is legally and professionally obliged to keep information about their clients private at all times. At times, information about a patient needs to be shared among different health care providers, and they are only allowed to share this information with consent. Despite the significant benefits of utilising predictive analytics in health care at an individual and cohort level, there is a real need to align with privacy controls and keep data private. The ethical issues inherent in breaches of privacy are covered by the Privacy Act, and all predictive analytics models and projects should align with this legislation at all times.
The concern that predictive analytics may reduce patient care to a set of algorithmically derived probabilities is important and real, particularly as legislation and governance lag behind technological disruption. However, the benefits are also important and real.
Technology is playing an integral role in the world today and all sectors are benefitting from what it has to offer. The health care sector is no exception. It can benefit significantly from predictive analytics, and it can be argued that this technology is a core aspect of the future of medicine and health care delivery in general.
The advantages associated with sensibly designed and implemented predictive analytics in the health care sector far outweigh the potential issues. Millions of people around the world stand to benefit from its adoption, with patients able to enjoy improved service delivery that anticipates challenges and addresses them proactively. Diagnosis would be more accurate, as would the treatment that follows. Caregivers would also benefit, given how easy it would be to access useful information and take appropriate steps toward improving the health of their patients.
However, even with these advantages, there are many emerging risks that need to be navigated for all involved parties to benefit from the full potential of predictive analytics.
Mostly, this would involve setting clear risk controls to cover bias, address emerging ethical considerations, and ensure clearer documentation for accountability. With policymakers still moving to catch up with the drafting of appropriate legislation, this would require self-regulation from those responsible for writing the algorithms. Existing predictive models and analysis also need to avoid breaking any existing laws such as those around privacy or violating ethical standards.
Predictive analytics has a strong and healthy place in the future of health care delivery. However, we need to remember that the algorithms and models behind predictive analytics are not perfect and need to be made more accountable and transparent, with clear human intervention points when appropriate. They also need a clear foundation that seeks to be ethical and unbiased in application, preferably one guided by legislation.