Open Access December 27, 2019

Data Engineering Frameworks for Optimizing Community Health Surveillance Systems

Abstract
A Changing World Demands Optimized Health Surveillance Systems, and How Data Engineering Can Help. There is growing urgency to manage public health and emergency response practices effectively, in light of complex and emerging health threats. Fortunately, a host of new tools has emerged, including big and streaming data sources, methods such as machine learning, technologies such as blockchain and hardware secure enclaves, and new means of data storage and retrieval. With these innovations, however, comes a grand challenge: how to blend them with, and adapt them to, traditional public health practice. The long-standing infrastructures and protocols that protect and ensure the welfare of communities need to change, or at least be updated, so that they continue to strengthen the health outcomes and community wellbeing they were designed to fortify. It is in this vein that this essay is written. It asks what, specifically, the effects might be of the emerging wealth of new data engineering frameworks that are either being developed expressly for health surveillance and wellness or can be co-opted from devices and services already thriving in the current market and research milieu. Understanding these effects could help shape their uptake and spread, ensuring their beneficial impact on the communities that stand to gain the most. The essay is divided into several key segments. After this introduction, Section 2 details the research methods. Section 3 reviews the maximum health-outcome potential of these novel frameworks. Section 4 takes a more critical approach, addressing how the success of these methods may be hindered and identifying future research avenues.
Finally, the conclusion suggests actions that would best support the implementation of these frameworks and offers thoughts for further research [1].
Case Report
Open Access December 27, 2021

Advancing Healthcare Innovation in 2021: Integrating AI, Digital Health Technologies, and Precision Medicine for Improved Patient Outcomes

Abstract
Advances in wearables, sensors, smart devices, and electronic health records have generated patient-oriented longitudinal data sources that, when analyzed with advanced analytical tools, offer enormous opportunities to understand patient health conditions and needs, transforming healthcare from conventional paradigms toward more patient-specific and preventive approaches. Artificial intelligence (AI), and machine learning in particular, is prominent here because it is uniquely suited to deriving predictions and recommendations from complex patient datasets. Recent studies have shown that precise data aggregation methods play an important role in the precision and reliability of clinical outcome models. There is an essential need for an effective, multifunctional machine learning platform that enables healthcare professionals to make sense of challenging, multifactorial biomedical datasets, understand patient-specific scenarios, and make better clinical decisions, potentially leading to optimal patient outcomes. There is also a substantial drive to develop networking and interoperability across clinical systems, laboratories, and public health, delivered in concert with efforts to provide useful analytic tools and technologies for making sense of the eruption of patient information from diverse sources. However, the full potential of these technologies can be realized only when the ethical, legal, and social challenges surrounding healthcare information privacy are successfully addressed. The public and media should be informed about the capabilities and limitations of the technologies, and the debate over public healthcare data privacy must be kept balanced. In the meantime, measures have progressed from reacting to patient-data-protection abuses toward realizing the full potential of AI technology for the health system, with benefits for all stakeholders.
Any protection program should be based on fairness, transparency, and a full commitment to data privacy. Innovative systems that use AI to manage clinical data and analyses are proposed. These tools can be used by healthcare providers, especially for specific scenarios in biomedical data management and analysis, and they help ensure that the significant and potentially predictive parameters associated with the diagnosis, treatment, and progression of disease are recognized. With the systematic use of these solutions, this work can contribute to noticeable improvements in the provision of real-time, personalized, and efficient medicine at reduced cost [1].
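The role of data aggregation described above can be illustrated with a minimal sketch: turning raw longitudinal readings into per-patient, model-ready features. The column names and values below are invented for illustration and do not come from any specific EHR or wearable system.

```python
import pandas as pd

# Hypothetical longitudinal readings pooled from wearables/EHR sources;
# patient IDs, dates, and vitals are illustrative only.
readings = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "date": pd.to_datetime(
        ["2021-01-01", "2021-01-02", "2021-01-03", "2021-01-01", "2021-01-02"]),
    "heart_rate": [72, 75, 71, 88, 90],
    "glucose": [5.4, 5.6, 5.5, 7.1, 7.3],
})

# Aggregate each patient's time series into summary features that a
# downstream clinical-outcome model could consume.
features = readings.groupby("patient_id").agg(
    hr_mean=("heart_rate", "mean"),
    hr_max=("heart_rate", "max"),
    glucose_mean=("glucose", "mean"),
    n_obs=("date", "count"),
).reset_index()
print(features)
```

In practice the choice of aggregates (means, maxima, trends, observation counts) is itself a modeling decision, which is why the abstract stresses that aggregation method affects the precision and reliability of the resulting models.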
Case Report
Open Access December 27, 2020

Building Foundational Data Products for Financial Services: An MDM-Based Approach to Customer and Product Data Integration

Abstract
Imagine a consumer financial services company with 20 million customers. Its sales and marketing organizations collaborate across product lines, deploying hundreds of marketing campaigns each quarter that aim to increase customer product usage and cross-buying. Each campaign is based on forecasts of customer responses derived from predictive models updated every quarter. The goals of these models are to achieve large return-on-investment ratios and to maximize contribution to local profit centers. Importantly, the modeling is based only on data created, curated, and maintained by these marketing organizations. What has changed is that the modeling is no longer based solely on a small number of response-driven variables whose importance is constantly reassessed. A quarterly campaign update now generates hundreds of statistical models (covering campaign responses, purchase-lag time, the relative magnitude of direct effects, and cross-buying effects) using thousands of variables, including customer demographics, life stage, product transactions, household composition, and customer-service history. It is a network of models, not just a table of variable importance values. But that is only part of the data-product story. The predictive modeling behind these campaign plans rests on analytics and data preparation, which are data products in their most elementary form; they would be even more rudimentary were they not crafted quarterly by highly skilled, experienced modelers using advanced software and processes. Most companies have enough data to build models containing not merely hundreds of variables but thousands, so the focus can return to information rather than data reduction. These models largely replace the internal econometric models previously used to produce forecasts in the absence of campaign modeling.
Those forecasts were used to simulate ROI and contribution for planned campaigns. In the past, reliance on econometrically forecast ROI-guideline contribution values reflected, and reinforced, a lack of trust in the campaign modelers' predictive ability.
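The shift from hand-picked response variables to models over thousands of candidate features can be sketched with a regularized classifier, which performs variable selection automatically. This is an illustrative toy on synthetic data, not the company's actual modeling pipeline; the feature counts and signal structure are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for campaign-response data: rows are customers,
# columns are candidate features (demographics, transactions, etc.).
rng = np.random.default_rng(0)
n_customers, n_features = 5000, 200
X = rng.normal(size=(n_customers, n_features))

# Only a few features actually drive response; the rest are noise.
logits = 1.5 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
y = (logits + rng.normal(scale=1.0, size=n_customers) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An L1 penalty zeroes out uninformative coefficients, echoing the move
# away from manual variable reduction described in the abstract.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
n_kept = int((model.coef_ != 0).sum())
print(f"test accuracy: {accuracy:.3f}, variables retained: {n_kept}/{n_features}")
```

A production version of this idea would fit hundreds of such models per quarter (response, purchase lag, cross-buy) and manage them as a network, which is precisely the data-product framing the abstract argues for.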
Review Article
Open Access December 24, 2022

Web-Centric Cloud Framework for Real-Time Monitoring and Risk Prediction in Clinical Trials Using Machine Learning

Abstract
Advances in web-centric cloud computing have facilitated the establishment of an integrated cloud environment connecting a wide variety of clinical trial stakeholders. A web-centric cloud framework is proposed for real-time monitoring and risk prediction during clinical trials. The framework focuses on identifying relevant datasets, developing a data management interface, and implementing machine learning algorithms for data analysis. Detailed descriptions of the data management interface and the machine learning processes are provided, targeting active clinical trials with therapeutic uses in cancer. Demonstrations use publicly available clinical trial data from the ClinicalTrials.gov repository. The real-time monitoring and risk prediction systems were assessed by developing five supervised classification models for trial-status prediction and six unsupervised models for patient safety-profile assessment, each representing a different phase of the clinical trial process. All supervised models yielded high accuracy and area-under-the-curve values at the testing stage, while the unsupervised models demonstrated practical applicability. The results underscore the advantages of the trial-status algorithm, the patient safety-profile model, and the proposed framework for real-time monitoring and risk prediction of clinical trials.
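A trial-status classifier of the kind evaluated above can be sketched as follows. This is a minimal stand-in on synthetic data: the features (enrollment size, site count, duration) are plausible registry-derived fields chosen for illustration, not the paper's actual feature set, and real work would engineer them from exported ClinicalTrials.gov records.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for registry-derived trial features (illustrative).
rng = np.random.default_rng(42)
n_trials = 2000
enrollment = rng.lognormal(mean=4, sigma=1, size=n_trials)
n_sites = rng.integers(1, 50, size=n_trials)
duration_months = rng.uniform(6, 120, size=n_trials)
X = np.column_stack([enrollment, n_sites, duration_months])

# Label: 1 = trial completed, 0 = terminated (synthetic rule plus noise).
score = 0.01 * enrollment + 0.05 * n_sites - 0.01 * duration_months
y = (score + rng.normal(scale=0.5, size=n_trials) > np.median(score)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised classifier for trial-status prediction; the paper's framework
# trains one such model per clinical-trial phase.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")
```

Reporting area under the curve, as here, matches the evaluation criterion the abstract cites for its five supervised models; the six unsupervised safety-profile models would instead cluster trials without labels.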
Review Article
