
Article History

Received : 12-10-2023

Accepted : 03-11-2023





Gandotra and Gupta: Challenges to AI use in anesthesia and healthcare: An anesthesiologist’s perspective


Introduction

The use of Artificial Intelligence in healthcare is a promising prospect and its applications in medicine continue to grow.

We can think about AI applications in medicine in the areas of clinical practice, biomedical research, and translational medicine. The following broad list of AI applications demonstrates how AI solutions can advance patient care and medical research:

  1. Image analysis: AI-based image analysis is used in intraoperative echocardiography, vascular access, interventional procedures and focussed scans. AI solutions can reduce errors due to human fatigue.

  2. Patient risk stratification (or population level primary prevention)

  3. Risk of readmission (usually 30 days)

  4. Medical research: AI can help with novel trial designs, analytics and novel patient recruitment strategies. AI-based chatbots can be used for trial screening.

  5. Diagnosis of autism or learning disabilities from home videos.

  6. Quality improvement: The operating room (OR) “black box” platform is already in use. AI systems can help quantify blood loss by analysing photos of the sponges in ORs or delivery suites.

  7. Drug discovery: AI use in research has a major role in precision medicine. Developing targeted drugs based on phenotypes is being widely studied. Key features are gene identification, RNA expression, DNA mutations, and protein-protein interaction.

  8. Medical education: AI competency is being considered in training programmes. AI-based simulation programs are being used for training and evaluation purposes.

AI can potentially provide high-performance data-driven medicine, optimize patient care trajectories, suggest the right therapy for the right patient, improve diagnostics, and improve the process of clinical decision-making. Furthermore, AI can improve clinical reliability, reduce errors related to human fatigue, reduce overall healthcare costs, and help physicians understand patients’ values and goals.1

Artificial intelligence can have a major impact on anesthesiology across multiple elements, such as monitoring the depth of anesthesia, control of anesthetic machine functions, ultrasound for procedures and diagnosis, adverse event prediction, pain assessment and management, and optimising operating room workflow.2, 3, 4, 5, 6, 7, 8, 9 AI use in anesthesia is still under investigation, but potential applications include automated recognition of anatomical structures during image-guided (US/CT/X-ray) regional anesthesia blocks, chronic pain procedures, or intravenous and arterial access; prediction of hemodynamic adverse events based on graph analysis; assistance in interpreting ECHO and FAST images; recognition of the glottic opening and vocal cords during laryngoscopy and identification of endobronchial or oesophageal intubation; and prediction of a difficult airway using facial images. It is important that clinicians understand how AI solutions can be leveraged to deliver more efficient, safer and cost-effective care.

There are many ethical, legal, and regulatory factors that determine and constrain the financing and delivery of healthcare in ways that may not apply to other commercial products and services.

We will focus on the broad ethical issues applicable to developing and using AI, particularly in medicine. These include ethical frameworks that guide not only medicine, but also the development and application of AI.10, 11 We will examine these issues especially in relation to AI applications in anesthetic practice. Broadly speaking, we ask three questions: Do AI tools help or harm patients and healthcare providers? Is the medical community ready to accept AI solutions? And do these tools perpetuate social inequities?

Discussion

Autonomy, beneficence, non-maleficence and justice are the four cornerstones of medical ethics.12 Ethical concerns can arise from multiple aspects of AI research and deployment, such as the nature and source of the data, data collection methodologies, AI model design, output interpretation and inappropriate use. AI solutions can have unintended consequences, such as the perpetuation of systematic biases and discrimination towards under-represented sections of society. In addition, there can be broad social implications if AI systems are intended to replace humans in healthcare-related jobs.

Consent and potential harm

AI models use large datasets, mostly from electronic health records (EHRs) or digitised paper records, to produce mathematical models that can prospectively interpret complex multifactorial relationships. For example, such data may be used to develop predictive models that anticipate adverse events and give clinicians sufficient lead time to intervene for the optimum outcome, such as advising the perioperative team of the volume and type of blood product to order for a patient depending on the type, duration and complexity of the procedure.13
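
To make this concrete, the sketch below trains a simple risk model on synthetic, EHR-style tabular data. It is a minimal illustration only, assuming invented feature names, coefficients and data; it is not the pipeline used by any of the systems cited here.

```python
# Minimal sketch: training a predictive model on EHR-style tabular data to
# flag patients at risk of an adverse event. All feature names and data are
# hypothetical; real models need curated, consented datasets.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "asa_class": rng.integers(1, 5, n),           # ASA physical status
    "baseline_hb": rng.normal(13, 2, n),          # haemoglobin, g/dL
    "surgery_duration_min": rng.integers(30, 480, n),
})
# Synthetic outcome: risk rises with age, ASA class and surgical duration
logit = (0.03 * X.age + 0.6 * X.asa_class
         + 0.004 * X.surgery_duration_min - 0.2 * X.baseline_hb - 4)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```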

Patients have the right to know about the use of AI in their healthcare, receive a full disclosure, and be able to provide informed consent. Ethical concerns arise when patients are uncomfortable with AI assistance or are unaware of AI involvement in their care.

For clinicians, the corresponding benefit is clinical decision support that supplements clinical judgement.14

Privacy and security of data

Hospitals, insurance companies and government programmes generate huge amounts of data. Patients’ information, the quality of care received, outcomes and the associated costs are also vast, valuable data. Much of this digital data is usable for analysis. A major general concern is the privacy and security of digital data. Healthcare data are sensitive and may have implications for patients’ personal and professional lives.15 Patients may not consent to their data being used for algorithmic training. In addition, there are jurisdiction-specific laws and regulations pertaining to the use and transmission of healthcare data.

Introduction of bias and inequities

Bias can occur at almost any stage, from data collection through model development and deployment. AI systems are prone to several types of bias, namely historical bias, representation bias, measurement bias, aggregation bias, evaluation bias and deployment bias.16

AI algorithms are trained on data. Historically, healthcare data is overwhelmingly male and overwhelmingly white. This historical bias inherent in real-world data cannot be overcome, even by perfect sampling and randomisation. When models trained on such data are applied to the final use population, representation bias arises. If attributes differ across groups and lead to differential performance, measurement bias arises: for example, a model trained largely on male patients may fail to predict myocardial infarction when used in women, resulting in missed diagnoses. To mitigate aggregation bias, developers need to identify and understand the distinctions between groups and the reasons why they differ from each other. A simple subgroup audit, sketched below, is one way to surface such gaps.
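
A first practical check against representation and measurement bias is to evaluate the model separately within each subgroup. The sketch below is a minimal, hypothetical audit; the group column name, the fitted `model` and the held-out data are assumptions carried over from a generic scikit-learn workflow like the earlier sketch.

```python
# Minimal sketch: auditing a classifier's sensitivity (recall) per subgroup.
# The group column name and the fitted `model` are illustrative assumptions.
import pandas as pd
from sklearn.metrics import recall_score

def subgroup_sensitivity(model, X, y, group_col):
    """Report sensitivity for each level of `group_col`.

    `X` is a feature DataFrame containing `group_col`; `y` is a pandas
    Series of true labels aligned with X's index.
    """
    rows = []
    for level, frame in X.groupby(group_col):
        y_hat = model.predict(frame)
        rows.append({"group": level,
                     "n": len(frame),
                     "sensitivity": recall_score(y.loc[frame.index], y_hat)})
    return pd.DataFrame(rows)

# Usage (hypothetical): subgroup_sensitivity(model, X_test, y_test, "sex")
# A large sensitivity gap between groups is a red flag for missed diagnoses
# in the under-represented group, as in the myocardial infarction example.
```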

The issues of AI fairness and unintended bias alluded to above may be a larger problem in countries like India, where health disparities exist based on patient demographics, gender, and geographical distribution.

Another example is genomic data and precision medicine. These databases predominantly represent European ancestry and are missing population-specific information, particularly from Asian and African populations. Genomic test results for persons of non-European ancestry could therefore be less accurate and will have limited applicability to the Indian population.17

Deployment bias arises at the intersection of an AI solution and its general applicability, that is, how society or the medical community uses the AI solution and interprets its output. For example, algorithms for predicting whether an individual should receive a spinal cord stimulator for chronic pain may be biased towards those who are able to access and afford this expensive procedure.

AI models typically do not include social factors such as income, education, or social standing, even though these factors are well known to have strong effects on hospital admissions and other long-term health outcomes. Vulnerable populations such as the poor, people with disabilities, and people from remote geographical regions tend to incur disproportionately high healthcare costs, either because these populations are sicker by the time they seek healthcare or because they have poor access to medical care. AI models analyse insurance claims and health records to predict which patients are likely to incur the highest cost of care over the next year.

The deployment of such a model becomes biased when its cost predictions are used as a proxy for healthcare need. This deployment bias may result in a rural or low socio-economic-group patient needing to be almost twice as sick as an urban patient to qualify for the same beneficial care programme.18 It could also affect insurance premiums, or simply access to insurance. Models need to distinguish patients who incur high costs from patients who are sicker or have more medical needs, so that a large vulnerable section is not denied the care it deserves. The sketch after this paragraph illustrates the mechanism.
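
The label-choice problem described above can be reproduced with a few lines of synthetic code. In the hedged sketch below, two groups have identical illness but the low-access group generates less billed cost; a score trained to predict cost then demands more illness from low-access patients before they clear the same enrolment cutoff. All numbers are invented for illustration and only mimic the mechanism reported by Obermeyer et al.18

```python
# Minimal synthetic sketch of label-choice bias: billed cost used as a proxy
# for healthcare need. Groups, coefficients and access rates are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000
illness = rng.normal(0, 1, n)              # latent healthcare need
low_access = rng.random(n) < 0.5           # hypothetical rural/low-income flag
# Equal need, but low-access patients generate less billed cost
cost = illness - 0.8 * low_access + rng.normal(0, 0.3, n)

X = np.column_stack([illness, low_access])
score = LinearRegression().fit(X, cost).predict(X)   # cost-trained risk score

selected = score > np.quantile(score, 0.9)           # "enrol top 10% by score"
print("mean illness of selected, high access:",
      illness[selected & ~low_access].mean())
print("mean illness of selected, low access :",
      illness[selected & low_access].mean())
# Low-access patients must be sicker to clear the same cost-based cutoff.
```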

Predictions based on these systematic errors become a self-fulfilling prophecy and perpetuate the biases in further studies.

Algorithmic fairness

AI solutions must not propagate inequities. We must take a view of AI fairness in terms of justice that centers on the health and lives of people, not the outputs alone.19 AI solutions built on datasets that under-represent certain groups may need specific training data to improve decision-making and to identify and reduce unfair results. Classification parity asks for equal predictive performance across groups; anti-classification requires the exclusion of any protected attributes from outcome modeling. It is not always possible to exclude the “protected attributes” (like gender or ethnicity), as they may be essential for correct prediction, and reporting them is required to maintain the transparency of AI models, as we will discuss later.
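
As a rough illustration of these two fairness notions, the sketch below contrasts anti-classification (fitting after dropping protected attributes) with a classification-parity check (comparing true- and false-positive rates across groups). The column names and surrounding data are hypothetical, not drawn from any cited study.

```python
# Minimal sketch contrasting two fairness notions. All column names are
# hypothetical; `group` is a pandas Series of group labels aligned with y.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

def fit_anticlassification(X, y, protected=("sex", "ethnicity")):
    """Anti-classification: exclude protected attributes before fitting."""
    return LogisticRegression(max_iter=1000).fit(
        X.drop(columns=list(protected)), y)

def classification_parity(y_true, y_pred, group):
    """Classification parity: compare TPR and FPR across groups."""
    rows = []
    for level in group.unique():
        m = (group == level).to_numpy()
        tn, fp, fn, tp = confusion_matrix(y_true[m], y_pred[m]).ravel()
        rows.append({"group": level,
                     "TPR": tp / (tp + fn),
                     "FPR": fp / (fp + tn)})
    return pd.DataFrame(rows)
```

Note that anti-classification can conflict with predictive accuracy when a protected attribute is genuinely informative, which is exactly the tension described above.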

If an AI model is used to triage high-risk patients, poorly calibrated risk estimates will lead to false expectations. For instance, proposed models for triaging trauma patients for resuscitation can be in direct conflict with the fundamental rights of the individuals. If an AI model predicts poor outcome based on age or co-morbidities, should the clinician stop the resuscitative efforts?
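
Before deploying such a triage model, its risk estimates can at least be checked for calibration. A minimal sketch, assuming the fitted `model` and held-out `X_te`/`y_te` from the earlier illustrative example:

```python
# Minimal sketch: checking calibration of predicted risks before using them
# for triage. Assumes `model`, X_te and y_te from the earlier sketch.
from sklearn.calibration import calibration_curve

probs = model.predict_proba(X_te)[:, 1]
frac_pos, mean_pred = calibration_curve(y_te, probs, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted {p:.2f} -> observed {f:.2f}")
# Large gaps between predicted and observed risk mean that triage decisions
# would rest on false expectations.
```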

In medical research, the available data rarely reflect the variable of interest; hence, much of our research is by proxy, for instance using race as a proxy for genetic ancestry to explain differences in disease prevalence or severity. Proxies are imperfect, and this introduces the possibility of systematic error (thereby violating beneficence and non-maleficence). Where systematic error is possible, discrimination is also possible, and, as discussed, this leads to negative consequences for certain patient groups.

Lack of transparency

Machine learning models are constantly changing and updating themselves based on the data on which they operate. Machine learning operates in a sort of black box that lacks clinical explanations and accountability, and it is often difficult to predict how an AI system arrives at its decisions. Clinicians rely on technologies believing them to be safe and effective for use on their patients, yet there are very few standards or regulations for evaluating the safety and effectiveness of AI-based products. Should clinicians and healthcare providers be held ethically and legally responsible for decisions that may be informed by AI?

Most machine learning studies do not report the demographic breakdown of the training data used to develop and train their models. Some may argue this is to promote algorithmic fairness. It thus becomes exceedingly difficult to evaluate a model’s bias and fairness, and its applicability across populations. In one analysis of the reporting of demographics and population representativeness in AI research, the race and ethnicity of the data sample were not reported in 64% of studies, gender and age were not reported in 75% of studies, and socioeconomic status was not reported in over 90% of studies.20 Detailed information on the data used to develop and train a model is necessary for unbiased and appropriate application of the AI solution.

MINIMAR (MINimum Information for Medical AI Reporting) has been proposed as a solution for transparent reporting.21
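
In practice, MINIMAR-style transparency can be made machine-readable by shipping a small report alongside the model. The sketch below uses an illustrative subset of fields paraphrased from the MINIMAR proposal, not the official schema, and every value shown is hypothetical.

```python
# Minimal sketch of a machine-readable, MINIMAR-style model report. Field
# names are an illustrative paraphrase, not the official MINIMAR schema;
# all values are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelReport:
    data_source: str            # e.g. EHR system, registry, claims
    cohort_selection: str       # inclusion/exclusion criteria
    age_distribution: str
    sex_breakdown: str
    race_ethnicity_breakdown: str
    socioeconomic_reporting: str
    model_task: str             # e.g. classification, risk prediction
    model_architecture: str
    evaluation_metrics: str

report = ModelReport(
    data_source="single-centre perioperative EHR (hypothetical)",
    cohort_selection="adults undergoing elective surgery, 2015-2020",
    age_distribution="median 54 years (IQR 41-67)",
    sex_breakdown="48% female / 52% male",
    race_ethnicity_breakdown="reported for 100% of records",
    socioeconomic_reporting="insurance type used as proxy",
    model_task="binary classification of postoperative hypotension",
    model_architecture="logistic regression",
    evaluation_metrics="AUROC 0.81 (95% CI 0.78-0.84) on held-out test set",
)
print(json.dumps(asdict(report), indent=2))
```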

Regulatory concerns and human oversight

Physicians have fiduciary responsibilities and are ethically bound to serve the best interests of their patients.22 They rely on many other support systems to do so; for example, clinicians rely on researchers, scientists, and scientific societies to provide valid evidence to support and guide clinical practice. The regulatory systems that evaluate drugs and devices, and the certification systems that evaluate hospitals and laboratories, are also critical. As of now, there are no regulatory guidelines or clear directives on the responsibilities of physicians when AI is used to influence clinical decision-making; clinicians use their own judgement without directives. It will be necessary to modify and evolve the current ethical frameworks pertaining to patient confidentiality and the fiduciary duty clinicians have toward their patients as such systems become commonplace in clinical practice.23

Recently, some regulatory authorities have approved the clinical use of devices that incorporate machine learning algorithms for nerve identification during regional anesthesia; the ScanNav Anatomy Peripheral Nerve Block system, NerveBlox and NerveTrack are being considered for clinical practice.24 There remains a probability of medical harm associated with the use of AI solutions. This harm could come from medical error in diagnosis or treatment, or from undertreatment or overtreatment.

Maintaining human oversight in AI-assisted anesthesia is essential to avoid over-reliance on AI systems and to ensure that clinical judgement remains central to patient care. AI predictions for depth of anesthesia or target-controlled infusions are only as good as the data they are given, which may in turn be influenced by the patient’s condition and human factors. Every patient presents a unique set of challenges, and anesthesia is therefore highly individualised; blanket application of AI models may not be appropriate in every scenario.

Competing interests and ownership of data

While clinicians and healthcare organizations try to serve the best interests of their patients, they have interests of their own and are subject to conflicts.20 These include financial and intellectual interests. The healthcare system, being an economic and commercial entity, might prioritize cost-effectiveness even though the physician is the one dispensing the care.

The specific health data belong to the patient, while their physical form belongs to the healthcare organisation. Questions then need to be addressed: who owns the data, how traceable are specific data elements from each individual patient within the “big” datasets, and where do patients’ rights to privacy stand?

Datasets involving image or biopsy interpretations and clinical interventions may also reflect significant intellectual contributions from clinicians. The subsequent work to curate this data and develop an algorithm certainly adds value to the raw data, but not all of it. How should we adjudicate claims regarding the value of the data, the value of each individual’s contribution to the aggregate dataset, the value of the intellectual contribution from each provider, and the pricing of the AI system? Should the clinicians who provided the intellectual contribution, or the patients whose data were used to train the models, be compensated? There has also been ongoing patient activism for recognition of specimen contributions to scientific advances.24

Challenges to role of physicians

The introduction of AI in anesthesia challenges traditional professional roles and responsibilities, raising questions about job displacement and professional autonomy. There is significant anxiety in the healthcare community about implementing AI systems without proper validation and explainability, and there is also concern about skill degradation.

AI systems risk being followed either blindly or poorly, and the output from an AI system may take on an unintended authority. Individuals who challenge an AI-based recommendation have frequently been required to provide significantly more robust evidence to refute the recommendation than the evidence on which the recommendation was based.25 This has been observed in non-healthcare contexts.

Conclusion

AI tools and clinical assessments share the common goal of improving patient outcomes. Nevertheless, balance must always be maintained: favouring only traditional medical practices and not embracing the improvements that new technologies could provide risks limiting the possible benefits of this technological revolution for our patients.

There are many challenges, and opportunities, in fair AI research, and it is nearly impossible for one solution to address all the challenges. It is vitally important that an AI model focuses on equity, meaning that each individual or group is given the same resources, attention, and outcomes. Many of these biases are systematic, and we are often unaware that they exist.

There are also many opportunities. The increasing number of papers discussing bias in AI solutions related to healthcare is evidence that we are forging a new path for data-driven, evidence-based health care. Transparency in the reporting, deployment and use of AI solutions is necessary for populations to trust the medical system, particularly where AI is involved. The publisher of the New England Journal of Medicine is introducing a new journal, NEJM AI, to identify and evaluate the applications of artificial intelligence to clinical medicine, and JAMA has a series of videos, podcasts and articles on AI. The key is to use AI in a way that benefits all groups, which requires thoughtful evaluation and human interpretation.

Conflicts of Interest and Source of Funding

The authors have disclosed they do not have any conflicts of interest.

Author Contributions

Drafting of the manuscript: All authors. Revising the manuscript critically for important intellectual content: All authors. Final approval of the version to be published: All authors.

References

1. Álvarez-Machancoses Ó, Fernández-Martínez JL. Using artificial intelligence methods to speed up drug discovery. Expert Opin Drug Discov. 2019;14(8):769-77.

2. Hashimoto DA. Artificial intelligence in anesthesiology: current techniques, clinical applications, and limitations. Anesthesiology. 2020;132(2):379-94.

3. Shalbaf A, Saffar M, Sleigh JW, Shalbaf R. Monitoring the depth of anesthesia using a new adaptive neurofuzzy system. IEEE J Biomed Health Inform. 2018;22(3):671-7.

4. Zaouter C, Hemmerling TM, Lanchon R, Valoti E, Remy A, Leuillet S. The feasibility of a completely automated total IV anesthesia drug delivery system for cardiac surgery. Anesth Analg. 2016;123(4):885-93.

5. Hatib F, Jian Z, Buddi S, Lee C, Settels J, Sibert K. Machine-learning algorithm to predict hypotension based on high-fidelity arterial pressure waveform analysis. Anesthesiology. 2018;129(4):663-74.

6. Lin CS, Chang CC, Chiu JS, Lee YW, Lin JA, Mok MS. Application of an artificial neural network to predict postinduction hypotension during general anesthesia. Med Decis Making. 2011;31(2):308-14.

7. Smistad E, Løvstakken L. Vessel detection in ultrasound images using deep convolutional neural networks. In: Carneiro G, et al., editors. Deep Learning and Data Labeling for Medical Applications. DLMIA LABELS 2016. Lecture Notes in Computer Science, vol 10008. Cham: Springer; 2016.

8. Wingert T, Lee C, Cannesson M. Machine learning, deep learning, and closed loop devices: anesthesia delivery. Anesthesiol Clin. 2021;39(3):565-81.

9. Gkikas S, Tsiknakis M. Automatic assessment of pain based on deep learning methods: a systematic review. Comput Methods Programs Biomed. 2023;231:107365.

10. Riddick FA. The code of medical ethics of the American Medical Association. Ochsner J. 2003;5(2):6-10.

11. Lawler PA. The ACHE Code of Ethics: its role for the profession. J Continuing High Educ. 2000;48(3):31-4.

12. Hope T, Dunn M. Medical Ethics: A Very Short Introduction. Oxford: Oxford University Press; 2018.

13. Braun M, Hummel P, Beck S, Dabrock P. Primer on an ethics of AI-based decision support systems in the clinic. J Med Ethics. 2021;47(12):e3.

14. Lin CS, Chiu JS, Hsieh MH, Mok MS, Li YC, Chiu HW. Predicting hypotensive episodes during spinal anesthesia with the application of artificial neural networks. Comput Methods Programs Biomed. 2008;92(2):193-7.

15. Smith H. Clinical AI: opacity, accountability, responsibility and liability. AI Soc. 2020;36:535-45.

16. Suresh H, Guttag J. A framework for understanding sources of harm throughout the machine learning life cycle. In: EAAMO '21: Proceedings of the 1st ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. New York: Association for Computing Machinery; 2021. p. 1-9.

17. Kessler MD, Yerges-Armstrong L, Taub MA, Shetty AC, Maloney K, Jeng LJB, et al. Challenges and disparities in the application of personalized genomic medicine to populations with African ancestry. Nat Commun. 2016;7:12521. doi:10.1038/ncomms12521.

18. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-53.

19. Corbett-Davies S, Gaebler JD, Nilforoshan H, Shroff R, Goel S. The measure and mismeasure of fairness: a critical review of fair machine learning. arXiv; 2018. doi:10.48550/arXiv.1808.00023.

20. Bozkurt S, Cahan EM, Seneviratne MG, Sun R, Lossio-Ventura JA, Ioannidis JPA. Reporting of demographic data and representativeness in machine learning models using electronic health records. J Am Med Inform Assoc. 2020;27(12):1878-84.

21. Hernandez-Boussard T, Bozkurt S, Ioannidis JPA, Shah NH. MINIMAR (MINimum Information for Medical AI Reporting): developing reporting standards for artificial intelligence in health care. J Am Med Inform Assoc. 2020;27(12):2011-5.

22. Smith H, Downer J, Ives J. Clinicians and AI use: where is the professional guidance? J Med Ethics. 2023. doi:10.1136/jme-2022-108831.

23. Char DS, Shah NH, Magnus D. Implementing machine learning in health care: addressing ethical challenges. N Engl J Med. 2018;378(11):981-3.

24. Bowness JS, El-Boghdadly K, Woodworth G, Noble JA, Higham H, Burckett-St Laurent D. Exploring the utility of assistive artificial intelligence for ultrasound scanning in regional anesthesia. Reg Anesth Pain Med. 2022;47(6):375-9.

25. O’Neil C. The ivory tower can’t keep ignoring tech. The New York Times. 2017 Nov 14. https://www.nytimes.com/2017/11/14/opinion/academia-tech-algorithms.html





This is an Open Access (OA) journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms.