HealthGuard: A Machine Learning-Based Security Framework for Smart Healthcare Systems
The integration of the Internet of Things and pervasive computing into medical devices has made the modern healthcare system "smart." Today, the function of the healthcare system is no longer limited to treating patients. With the help of implantable medical devices and wearables, a Smart Healthcare System (SHS) can continuously monitor a patient's vital signs and automatically detect and prevent critical medical conditions. However, these increasing functionalities of the SHS raise several security concerns, and attackers can exploit the SHS in numerous ways: they can impede its normal function, inject false data to change vital signs, and tamper with a medical device to change the outcome of a medical emergency. In this paper, we propose HealthGuard, a novel machine learning-based security framework to detect malicious activities in an SHS. HealthGuard observes the vital signs reported by the different connected devices of an SHS and correlates them to understand changes in the patient's body functions and distinguish benign from malicious activities. HealthGuard uses four different machine learning-based detection techniques (Artificial Neural Network, Decision Tree, Random Forest, k-Nearest Neighbor) to detect malicious activities in an SHS. We trained HealthGuard with data collected from eight different smart medical devices for twelve benign events, including seven normal user activities and five disease-affected events. Furthermore, we evaluated the performance of HealthGuard against three different malicious threats. Our extensive evaluation shows that HealthGuard is an effective security framework for the SHS, with an accuracy of 91% and an F1 score of 90%.
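As a rough illustration of the detection approach described above, the sketch below trains the four classifier families named in the abstract (ANN, Decision Tree, Random Forest, k-NN) to separate benign from malicious vital-sign readings. It is a minimal sketch using scikit-learn with synthetic placeholder features and labels, not the authors' dataset or implementation.

```python
# Minimal sketch: training the four classifier types named in the abstract on
# vital-sign feature vectors labeled benign (0) or malicious (1).
# The feature layout and data here are hypothetical, not the authors' dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Each row: readings from several devices (e.g., heart rate, SpO2, blood pressure, glucose).
X = rng.normal(size=(1000, 8))
y = rng.integers(0, 2, size=1000)          # 0 = benign activity, 1 = malicious activity

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.2f}, "
          f"F1={f1_score(y_test, pred):.2f}")
```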
The full article can be downloaded below.
U.S. Opioid Epidemic: Impact on Public Health and Review of Prescription Drug Monitoring Programs (PDMPs)
In recent years, the devastating effects of the U.S. opioid epidemic have been making news headlines. This report explores background information and trends on opioid misuse, overdose fatalities, and their impact on public health. In addition, various efforts to improve surveillance, the timeliness of data, and Prescription Drug Monitoring Program (PDMP) integration and interoperability are reviewed.
PubMed and internet searches were performed to find information on the U.S. opioid epidemic. In addition, searches were performed to retrieve information about PDMPs and state-specific mandates, along with presentation slides and lessons learned from the 2018 National Rx Drug Abuse & Heroin Summit in Atlanta, GA.
It is clear that the U.S. opioid epidemic has a tremendous impact on public health, including on the next generation of children. Various data-, surveillance-, and technology-driven efforts, including the CDC-funded Enhanced State Opioid Overdose Surveillance Program (ESOOS) and the use of telemedicine for opioid use disorder treatment, aim to improve prevention, treatment, and targeted interventions. In addition, PDMP integration and interoperability efforts are advancing to provide prescribers with meaningful decision support tools.
The opioid epidemic has a complex impact on public health, intertwined with variable factors such as mental health and the social determinants of health. Given the statistics and studies suggesting that many illicit opioid users start with prescription opioids, continued advancement in PDMP integration and interoperability is necessary. PDMP-integrated clinical decision support systems need to give healthcare providers access to complete, timely, and evidence-based information that can meaningfully inform prescribing decisions and communication with patients in ways that affect measurable outcomes.
While Prescription Drug Monitoring Programs (PDMPs) are valuable tools for providers in making informed prescribing decisions, the variable state mandates and varying degrees of integration and interoperability across states may limit their potential as meaningful decision support tools. Sharing best practices, challenges and lessons learned among states and organizations may inform strategic and systematic use of PDMPs to improve public health outcomes.
The full article can be downloaded below.
As Medicine Evolves, So Too Must Those Who Assure Its Quality
The past few years have illustrated the startling speed with which medicine can evolve. Since 2018, the US Food & Drug Administration (FDA) has approved first-of-their-kind drugs based on RNA, gene therapy, and cancer-killing chimeric antigen receptor (CAR) T-cells and signed off on human trials to explore the clinical use of CRISPR-mediated genome editing. And throughout this process, the US Pharmacopeia (USP) has been working in the background to ensure that quality standards are in place for new medical products reaching the market. “200 years ago, our first monographs were basically recipes: ‘take bark from this tree and boil it for this long and you should get a brown liquid’,” says Michael Levy, Head of Research & Innovation (R&I) at USP. “Obviously we've evolved tremendously since then, but it’s just a continuation of what we’ve always done—we’re just doubling down on it.”
Regular revisions to quality standards to accommodate advances in knowledge and changes in medical practice were built into the USP process by its founders. Today, to ensure its standards stay current, USP also works to stay ahead of the rapidly changing technology curve. Well before a cutting-edge medicine reaches the market, the underlying tools and techniques are already percolating up into the scientific literature and presentations at international conferences. USP combs through this early-stage work and projects which prospects seem most likely to impact the quality of therapeutics within the next decade or so. “We ask what’s on the horizon, what are the quality issues potentially associated with that trend or technology, and how does USP need to respond,” says Levy. USP then explores some of these technologies, working through a typical research approach with preliminary proof-of-concept work potentially followed by longer-term “incubation projects” conducted by subject-matter experts.
The full article from Scientific American can be viewed at this link.
A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis
Deep learning offers considerable promise for medical diagnostics. We aimed to evaluate the diagnostic accuracy of deep learning algorithms versus health-care professionals in classifying diseases using medical imaging.
In this systematic review and meta-analysis, we searched Ovid-MEDLINE, Embase, Science Citation Index, and Conference Proceedings Citation Index for studies published from Jan 1, 2012, to June 6, 2019. Studies comparing the diagnostic performance of deep learning models and health-care professionals based on medical imaging, for any disease, were included. We excluded studies that used medical waveform data graphics material or investigated the accuracy of image segmentation rather than disease classification. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. Studies undertaking an out-of-sample external validation were included in a meta-analysis, using a unified hierarchical model. This study is registered with PROSPERO, CRD42018091176.
Our search identified 31 587 studies, of which 82 (describing 147 patient cohorts) were included. 69 studies provided enough data to construct contingency tables, enabling calculation of test accuracy, with sensitivity ranging from 9·7% to 100·0% (mean 79·1%, SD 0·2) and specificity ranging from 38·9% to 100·0% (mean 88·3%, SD 0·1). An out-of-sample external validation was done in 25 studies, of which 14 made the comparison between deep learning models and health-care professionals in the same sample. Comparison of the performance of deep learning models and health-care professionals in these 14 studies, when restricting the analysis to the contingency table for each study reporting the highest accuracy, found a pooled sensitivity of 87·0% (95% CI 83·0–90·2) for deep learning models and 86·4% (79·9–91·0) for health-care professionals, and a pooled specificity of 92·5% (95% CI 85·1–96·4) for deep learning models and 90·5% (80·6–95·7) for health-care professionals.
Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals. However, a major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample. Additionally, poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology.
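For readers unfamiliar with the outcome measures pooled above, the short sketch below shows how sensitivity and specificity are derived from a single study's 2x2 contingency table. The counts are invented for illustration, and the hierarchical meta-analytic pooling across studies is not reproduced here.

```python
# Illustrative only: deriving sensitivity and specificity from a 2x2 contingency
# table, the per-study outcome extracted in the review. Counts are made up.
tp, fn = 90, 10    # diseased cases: correctly flagged vs missed
tn, fp = 85, 15    # non-diseased cases: correctly cleared vs falsely flagged

sensitivity = tp / (tp + fn)   # proportion of diseased cases detected
specificity = tn / (tn + fp)   # proportion of healthy cases correctly ruled out

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```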
The full article can be downloaded below.
A Multicenter, Scan-Rescan, Human and Machine Learning CMR Study to Test Generalizability and Precision in Imaging Biomarker Analysis
Automated analysis of cardiac structure and function using machine learning (ML) has great potential, but is currently hindered by poor generalizability. Comparison is traditionally against clinicians as a reference, ignoring inherent human inter- and intraobserver error, and ensuring that ML cannot demonstrate superiority. Measuring precision (scan:rescan reproducibility) addresses this. We compared precision of ML and humans using a multicenter, multi-disease, scan:rescan cardiovascular magnetic resonance data set.
One hundred ten patients (5 disease categories, 5 institutions, 2 scanner manufacturers, and 2 field strengths) underwent scan:rescan cardiovascular magnetic resonance (96% within one week). After identification of the most precise human technique, left ventricular chamber volumes, mass, and ejection fraction were measured by an expert, a trained junior clinician, and a fully automated convolutional neural network trained on 599 independent multicenter disease cases. Scan:rescan coefficient of variation and 1000 bootstrapped 95% CIs were calculated and compared using mixed linear effects models.
Clinicians can be confident in detecting a 9% change in left ventricular ejection fraction, with greater than half of the coefficient of variation attributable to intraobserver variation. Expert, trained junior, and automated scan:rescan precision were similar (for left ventricular ejection fraction, coefficient of variation 6.1% [5.2%–7.1%], P=0.2581; 8.3% [5.6%–10.3%], P=0.3653; 8.8% [6.1%–11.1%], P=0.8620). Automated analysis was 186× faster than humans (0.07 versus 13 minutes).
Automated ML analysis is faster, with precision similar to the most precise human techniques, even when challenged with real-world scan:rescan data. Assessment of multicenter, multivendor, multi-field strength scan:rescan data (available at www.thevolumesresource.com) permits a generalizable assessment of ML precision and may facilitate direct translation of ML to clinical practice.
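The precision metric reported above, the scan:rescan coefficient of variation with a bootstrapped 95% CI, can be illustrated with the following minimal sketch. The paired measurements are simulated rather than taken from the study, and the within-subject CoV formula used is one common convention, not necessarily the authors' exact computation.

```python
# Minimal sketch of the precision metric described above: scan:rescan coefficient
# of variation (CoV) with a bootstrapped 95% CI. Paired measurements are simulated,
# not the study's data; the within-subject CoV formula is one common convention.
import numpy as np

rng = np.random.default_rng(1)
scan1 = rng.normal(60, 8, size=110)                 # e.g., LVEF (%) on first scan
scan2 = scan1 + rng.normal(0, 3, size=110)          # rescan with measurement noise

def scan_rescan_cov(a, b):
    pair_means = (a + b) / 2
    pair_sd = np.abs(a - b) / np.sqrt(2)            # within-pair SD for two measurements
    return np.sqrt(np.mean((pair_sd / pair_means) ** 2)) * 100

point = scan_rescan_cov(scan1, scan2)
boot = []
for _ in range(1000):                               # 1000 bootstrap resamples of subjects
    idx = rng.integers(0, len(scan1), len(scan1))
    boot.append(scan_rescan_cov(scan1[idx], scan2[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"CoV = {point:.1f}% (95% CI {lo:.1f}-{hi:.1f}%)")
```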
The full article can be downloaded below.
Physician Suicide: A Call to Action
Physician suicide is a topic of growing professional and public health concern. Despite working to improve the health of others, physicians often sacrifice their own well-being to do so. Furthermore, there are systemic barriers in place that discourage self-care and help-seeking behaviors among physicians. This article will discuss the relevant epidemiology, risk factors, and barriers to treatment, then explore solutions to address this alarming trend.
The full article can be downloaded below.
Nursing Our Way to Better Health
Nurses have always been on the front lines of health care provision. Increasingly, they are on the front lines of health care reform. Almost all of the ideas put forward for US health care reform, from reducing treatment costs to improving patient safety to moving care into the community, involve a significant role for nurses.
There are real questions, however, about whether the economics will support the needed nursing care. Done right, nursing can be the lynchpin for a better, cheaper health system. But if we make the same mistakes with nurses as we did with physicians, we will have wasted another shot at health care improvement.
The full article can be downloaded below.
Pharmacogenomics: What the Doctor Ordered?
About half a million adverse drug reactions resulting in disability, hospitalization, or death are reported in the US each year. The efficacy or toxicity of a drug in a patient can be strongly influenced by their genetics as well as their environment. The application of genomics to clinical pharmacology, “pharmacogenomics,” promises to transform patient care and health resource utilization in the coming decade.
The full article can be downloaded below.
Detecting conversation topics in primary care office visits from transcripts of patient-provider interactions
Amid electronic health records, laboratory tests, and other technology, office-based patient and provider communication is still the heart of primary medical care. Patients typically present multiple complaints, requiring physicians to decide how to balance competing demands. How this time is allocated has implications for patient satisfaction, payments, and quality of care. We investigate the effectiveness of machine learning methods for automated annotation of medical topics in patient-provider dialog transcripts.
We used dialog transcripts from 279 primary care visits to predict talk-turn topic labels. Different machine learning models were trained to operate on single or multiple local talk-turns (logistic classifiers, support vector machines, gated recurrent units) as well as sequential models that integrate information across talk-turn sequences (conditional random fields, hidden Markov models, and hierarchical gated recurrent units).
Evaluation was performed using cross-validation to measure 1) classification accuracy for talk-turns and 2) precision, recall, and F1 scores at the visit level. Experimental results showed that sequential models had higher classification accuracy at the talk-turn level and higher precision at the visit level. Independent models had higher recall scores at the visit level compared with sequential models.
Incorporating sequential information across talk-turns improves the accuracy of topic prediction in patient-provider dialog by smoothing out noisy information from talk-turns. Although the results are promising, more advanced prediction techniques and larger labeled datasets will likely be required to achieve prediction performance appropriate for real-world clinical applications.
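As a concrete, simplified illustration of the "independent" end of the model spectrum described above, the sketch below trains a bag-of-words logistic classifier that labels each talk-turn in isolation; the transcript snippets and topic labels are invented placeholders. The sequential models in the study (CRFs, HMMs, hierarchical GRUs) would additionally share information across adjacent talk-turns.

```python
# Minimal sketch of an "independent" talk-turn topic classifier: each turn is
# labeled in isolation from a bag-of-words representation, evaluated by
# cross-validated talk-turn accuracy. Snippets and labels are invented placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

talk_turns = [
    "how long have you had the cough",
    "does the pain get worse when you climb stairs",
    "I have been feeling dizzy in the mornings",
    "the rash started after I changed detergents",
    "let's start a low dose and recheck in two weeks",
    "take this with food twice a day",
    "I will send the prescription to your pharmacy",
    "we should schedule a follow up visit next month",
]
topics = ["symptoms"] * 4 + ["plan"] * 4

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, talk_turns, topics, cv=2)   # talk-turn level accuracy
print("cross-validated talk-turn accuracy:", scores.mean())
```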
The full article can be downloaded below.
Distributed representation of patients and its use for medical risk adjustment
Efficient representation of patients is very important in the healthcare domain and can help with many tasks, such as medical risk prediction. Many existing methods, such as Diagnostic Cost Groups (DCG), rely on expert knowledge to build patient representations from medical data, which is resource-intensive and does not scale. Unsupervised machine learning algorithms are a good choice for automating the representation learning process. However, there is very little research focusing directly on patient-level representation learning. In this paper, we propose a novel patient vector learning architecture that learns high-quality, fixed-length patient representations from claims data. In addition, our model learns meaningful medical visit representations and medical code representations at the same time. We conducted several experiments to test the quality of the learned representations, and the empirical results show that our learned patient vectors are superior to vectors learned through other methods. We also applied our patient vectors to a real-world application, where they outperform a popular commercial model. Lastly, we provide potential clinical interpretation for using our representation on predictive tasks, as interpretability is vital in the healthcare domain.
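The paper's architecture is not reproduced here, but the general idea of learning unsupervised, fixed-length patient vectors from claims codes can be illustrated with a much simpler stand-in: truncated SVD of a patient-by-code count matrix. The codes and patients below are invented placeholders.

```python
# Not the paper's architecture: a much simpler stand-in that still illustrates the
# idea of unsupervised, fixed-length patient vectors learned from claims codes.
# Each patient's claim history is reduced to a dense vector via truncated SVD
# of a patient-by-code count matrix. Codes and patients are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# One string of diagnosis/procedure codes per patient, pooled across visits.
patient_claims = [
    "E11.9 I10 Z79.4 99213",        # diabetes + hypertension, office visit
    "I10 I25.10 93000 99214",       # hypertension + ischemic heart disease, ECG
    "J45.909 R05 99213",            # asthma with cough
    "E11.9 E78.5 Z79.4 80061",      # diabetes + hyperlipidemia, lipid panel
]

counts = CountVectorizer(token_pattern=r"\S+").fit_transform(patient_claims)
patient_vectors = TruncatedSVD(n_components=2, random_state=0).fit_transform(counts)
print(patient_vectors.shape)   # (4, 2): one fixed-length vector per patient
# These vectors could then feed a downstream risk-adjustment or prediction model.
```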
The full article can be downloaded below.