Open Access December 27, 2019

Data Engineering Frameworks for Optimizing Community Health Surveillance Systems

Abstract
A Changing World Demands Optimized Health Surveillance Systems – and How Data Engineering Can Help. There is a growing urgency to manage public health and emergency response practices effectively in light of complex and emerging health threats. Fortunately, a host of new tools has emerged, including big and streaming data sources, methods such as machine learning, hardware-backed technologies such as blockchain and secure enclaves, and new means of data storage and retrieval. With these innovations, however, comes a grand challenge: how to blend them with, and adapt them to, traditional public health practice. The long-standing infrastructures and protocols that protect and ensure the welfare of communities need to be changed, or at least updated, so that they can continue to improve the health outcomes and community wellbeing they were designed to safeguard. This essay is written in that vein. It asks what effects the emerging range of new data engineering frameworks, whether developed specifically for health surveillance and wellness or co-opted from devices and services already established in the market and in research, may have on health surveillance. Understanding these effects can help shape their uptake and spread and ensure that they benefit the communities that stand to gain the most. The essay is divided into several key segments. After this introduction, section two details the research methods. Section three reviews the potential health-outcome benefits of these novel frameworks. Section four takes a more critical approach, addressing factors that may hinder the success of these methods and outlining avenues for future research. Finally, the conclusion suggests actions to support the implementation of these frameworks and offers thoughts for further research [1].
Case Report
Open Access December 27, 2021

Advancing Healthcare Innovation in 2021: Integrating AI, Digital Health Technologies, and Precision Medicine for Improved Patient Outcomes

Abstract
Advances in wearables, sensors, smart devices, and electronic health records have generated patient-oriented longitudinal data sources that, when analyzed with advanced analytical tools, create enormous opportunities to understand patient health conditions and needs, shifting healthcare from conventional paradigms toward more patient-specific and preventive approaches. Artificial intelligence (AI), and machine learning in particular, is prominent here because it is uniquely suited to deriving predictions and recommendations from complex patient datasets. Recent studies have shown that careful data aggregation plays an important role in the precision and reliability of clinical outcome models. There is an essential need for an effective, multifunctional machine learning platform that enables healthcare professionals to make sense of complex, multifactorial biomedical datasets, understand patient-specific scenarios, and make better clinical decisions, potentially leading to optimal patient outcomes. There is also a substantial drive to improve the networking and interoperability of clinical systems, laboratories, and public health, delivered in concert with efforts to provide useful analytic tools and technologies for making sense of the flood of patient information from various sources. However, the full benefit of this technology can be realized only when the ethical, legal, and social challenges surrounding the privacy of healthcare information are successfully addressed. The public and the media must be informed about the capabilities and limitations of these technologies, and the debate over healthcare data privacy must be carefully balanced. In the meantime, safeguards against patient-data protection abuses must keep pace so that the full potential of AI can be realized across the health system, with benefits for all stakeholders. Any protection program should be based on fairness, transparency, and a full commitment to data privacy. Innovative systems that use AI to manage and analyze clinical data are proposed. These tools can be used by healthcare providers, especially in defining specific scenarios for biomedical data management and analysis, and they help ensure that the significant and potentially predictive parameters associated with the diagnosis, treatment, and progression of disease are recognized. With systematic use of these solutions, this work can contribute to noticeable improvements in the provision of real-time, personalized, and efficient medicine at reduced cost [1].
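A minimal, illustrative sketch (not from the paper) of the kind of pipeline the abstract describes: aggregating longitudinal readings per patient into summary features and fitting a simple predictive model. The feature names, the synthetic data, and the choice of logistic regression are assumptions for demonstration only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic longitudinal data: 500 patients x 30 daily heart-rate readings.
daily_hr = rng.normal(75, 10, size=(500, 30))
age = rng.integers(20, 90, size=500)

# Aggregate each patient's time series into per-patient summary features.
features = np.column_stack([daily_hr.mean(axis=1), daily_hr.std(axis=1), age])

# Hypothetical outcome label (e.g. an adverse event), correlated with the features.
risk = 0.04 * (features[:, 0] - 75) + 0.02 * (features[:, 2] - 50)
outcome = (risk + rng.normal(0, 1, 500) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, outcome, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))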
Case Report
Open Access December 27, 2020

Building Foundational Data Products for Financial Services: A MDM-Based Approach to Customer, and Product Data Integration

Abstract
Imagine a consumer financial services company with 20 million customers. Its sales and marketing organizations collaborate across product lines, deploying hundreds of marketing campaigns each quarter that aim to increase customer product usage and/or cross-buying of products. Each campaign is based on forecasts of customer responses derived from predictive models updated every quarter. The goals of these models are to achieve large return-on-investment ratios and to maximize contribution to local profit centers. Importantly, this modeling is based only on data created, curated, and maintained by these marketing organizations. The difference today is that the modeling is no longer based solely on a small number of response-related variables whose importance is constantly reassessed. A quarterly campaign update generates hundreds of statistical models (covering campaign responses, purchase-lag time, the relative magnitude of direct effects, and cross-buying effects) using thousands of variables, including customer demographics, life stage, product transactions, household composition, and customer service history. It is a network of models, not just a table of variable-importance values. But that is only part of the story of data products. The predictive modeling used in these campaign plans rests on analytics and data preparation, which are data products in their most elementary form. These products would be even more rudimentary were they not crafted quarterly by highly skilled, experienced modelers using advanced software and processes. Most companies have enough data to build models containing not merely hundreds of variables but thousands, so the focus can return to information rather than data reduction. These models largely replace the internal econometric models previously used to produce forecasts in the absence of campaign modeling; those forecasts were used to simulate ROI and contribution estimates for planned campaigns. Previously, reliance on econometrically forecast ROI and contribution guideline values reduced reliance on the campaign modelers because of a lack of trust in their predictive ability.
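A minimal, illustrative sketch (not from the article) of the "network of models" idea: fitting one regularized response model per campaign over a wide customer feature set. The campaign names, feature counts, and synthetic data are assumptions for demonstration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_customers, n_features = 5000, 200             # stand-in for thousands of variables
X = rng.normal(size=(n_customers, n_features))  # demographics, transactions, etc.

campaigns = ["card_upgrade", "savings_cross_sell", "loan_refi"]
models = {}
for name in campaigns:
    # Hypothetical observed responses from this campaign's prior wave.
    true_w = rng.normal(size=n_features) * (rng.random(n_features) < 0.05)
    y = (X @ true_w + rng.normal(0, 1, n_customers) > 0).astype(int)
    # L1 penalty keeps only the variables that matter for this campaign.
    models[name] = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)

# Score every customer against every campaign to plan the next quarter.
scores = {name: m.predict_proba(X)[:, 1] for name, m in models.items()}
print({name: round(float(s.mean()), 3) for name, s in scores.items()})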
Review Article
Open Access December 24, 2022

Web-Centric Cloud Framework for Real-Time Monitoring and Risk Prediction in Clinical Trials Using Machine Learning

Abstract
Advances in web-centric cloud computing have facilitated the establishment of an integrated cloud environment connecting a wide variety of clinical trial stakeholders. A web-centric cloud framework is proposed for real-time monitoring and risk prediction during clinical trials. The framework focuses on identifying relevant datasets, developing a data-management interface, and implementing machine-learning algorithms for data analysis. Detailed descriptions of the data-management interface and the machine-learning processes are provided, targeting active clinical trials with therapeutic uses in cancer. Demonstrations utilize publicly available clinical-trial data from the ClinicalTrials.gov repository. The real-time monitoring and risk prediction systems were assessed by developing five supervised-classification-machine-learning models for trial-status prediction and six unsupervised models for patient-safety-profile assessment, each representing a different phase of the clinical-trial process. All supervised models yielded high accuracy and area-under-the-curve values at the testing stage, while the unsupervised models demonstrated practical applicability. The results underscore the advantages of using the trial-status algorithm, the patient-safety-profile model, and the proposed framework for performing real-time monitoring and risk prediction of clinical trials.
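A minimal, illustrative sketch (not the authors' implementation) of the supervised trial-status prediction step: training a classifier on trial-level features and reporting accuracy and area under the curve on a held-out test set. The features and synthetic labels are assumptions; real inputs would come from ClinicalTrials.gov records.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(42)
n_trials = 2000
# Stand-in trial features: enrollment size, number of sites, trial phase (1-4).
X = np.column_stack([
    rng.integers(10, 2000, n_trials),
    rng.integers(1, 80, n_trials),
    rng.integers(1, 5, n_trials),
])
# Hypothetical binary status label (e.g. completed vs. terminated), with label noise.
y = ((X[:, 0] > 200) & (X[:, 1] > 5)).astype(int)
y = np.where(rng.random(n_trials) < 0.1, 1 - y, y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))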
Review Article
Open Access December 21, 2021

Optimizing Data Warehousing for Large Scale Policy Management Using Advanced ETL Frameworks

Abstract
Data warehousing is a technique for collecting, managing, and presenting data to help people analyze and use that data effectively. It involves a large database designed to support management-level staff by providing all the relevant historical data for analysis. This chapter begins with a definition of data warehousing, followed by an overview of large-scale policy management to highlight the need for data warehousing. Next, an overview of an ETL framework is presented, along with a discussion of advanced ETL techniques. The chapter concludes with an outline of performance optimization techniques for data warehousing. Data warehousing is considered a key enabler for efficient reporting and analysis, with implementation choices ranging from cost-effective desktop systems to large-scale, mission-critical data marts and warehouses containing petabytes of data. Extract, transform, and load (ETL) systems remain one of the largest cost and effort areas within data warehouse development projects, requiring significant planning and resources to build, manage, and monitor the flow of data from source systems into the data warehouse. The technology and techniques used for ETL can greatly influence the success or failure of a data warehouse. Complex business requirements for data cleansing, loading, transformation, and integration have intensified, while operational plans for real-time and near-real-time reporting add additional challenges. Parallel loading mechanisms, incremental data loading, and runtime update and insert strategies not only improve ETL performance but also optimize data warehousing performance, particularly for large-scale policy management.
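A minimal, illustrative sketch (not from the chapter) of two of the techniques named above, incremental loading and a runtime update-or-insert (upsert) strategy, using SQLite as a stand-in warehouse. The table name, columns, and watermark logic are assumptions for demonstration.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dim_policy (policy_id INTEGER PRIMARY KEY, holder TEXT, premium REAL, updated_at TEXT)")

def incremental_load(rows, last_watermark):
    """Load only source rows newer than the last watermark, upserting by key.
    The ON CONFLICT upsert syntax requires SQLite 3.24 or later."""
    new_rows = [r for r in rows if r[3] > last_watermark]
    con.executemany(
        "INSERT INTO dim_policy (policy_id, holder, premium, updated_at) VALUES (?, ?, ?, ?) "
        "ON CONFLICT(policy_id) DO UPDATE SET holder=excluded.holder, "
        "premium=excluded.premium, updated_at=excluded.updated_at",
        new_rows,
    )
    con.commit()
    return max((r[3] for r in new_rows), default=last_watermark)

# First batch, then a later batch that updates one policy and inserts another.
wm = incremental_load([(1, "A. Jones", 120.0, "2021-01-01"), (2, "B. Lee", 95.0, "2021-01-02")], "")
wm = incremental_load([(1, "A. Jones", 150.0, "2021-02-01"), (3, "C. Diaz", 80.0, "2021-02-03")], wm)
print(list(con.execute("SELECT * FROM dim_policy ORDER BY policy_id")))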
Article
Open Access December 22, 2020

Cloud Migration Strategies for High-Volume Financial Messaging Systems

Abstract
Key business objectives for cloud adoption in digital infrastructure are often framed in terms of reducing cost, improving fault tolerance and resilience, simplifying scaling, and enabling innovation. Given the critical nature of the financial sector, however, where timeliness and price can significantly determine an outcome, cloud migration in delivery environments demands greater throughput on the critical path and, in many enterprise-scale settings, avoids the complexity of hybrid deployments and the risks of multi-cloud. Nevertheless, there is slack in system designs: financial institutions support market functionality such as trading, clearing, and best execution even though some of these requirements could be met with lower service levels than in other verticals. A cloud multi-account structure for sensitive data, for example, naturally limits exposure when combined with observed risk. Meeting elasticity demands during periods of high load usually requires support from one or more dedicated environments located closer to operations. Components can consequently be allocated on a per-account basis or maintained as shared sink systems to which the dedicated streams write. Automation code can similarly be targeted at dedicated accounts, avoiding the resource constraints that affect such operations during industry-wide events such as emergency triage or contact-desk surges.
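A minimal, illustrative sketch (not from the article) of the per-account allocation pattern described above: each account gets a dedicated in-process stream, and all dedicated streams write into one shared sink. The account names and message shapes are assumptions for demonstration; a real deployment would use managed streaming services rather than in-memory queues.

from collections import deque

shared_sink = []                      # shared sink system
dedicated_streams = {acct: deque() for acct in ("payments-prod", "trading-prod")}

def publish(account, message):
    """Write a message to the account's dedicated stream."""
    dedicated_streams[account].append(message)

def drain_to_sink():
    """Flush every dedicated stream into the shared sink, tagging the source account."""
    for account, stream in dedicated_streams.items():
        while stream:
            shared_sink.append({"account": account, "message": stream.popleft()})

publish("payments-prod", {"type": "pacs.008", "amount": 1250.00})
publish("trading-prod", {"type": "order", "qty": 100})
drain_to_sink()
print(shared_sink)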
Review Article
Open Access December 27, 2020

Improving Data Quality and Lineage in Regulated Financial Data Platforms

Abstract
Data quality and data lineage are critical concerns for organizations mandated to comply with stringent regulatory regimes. This paper analyses the latest developments in the governance of data quality and data lineage within a regulated financial services organisation. It sets out the underlying regulatory context, describes the concepts employed in the business environment, summarizes how data quality is captured and monitored, examines the artefacts that record data lineage, reviews the roles and responsibilities of staff who implement the necessary processes, and maps areas where improvements are possible. The internal organization and processes of regulated data platforms are shaped not only by the capabilities prescribed by their technical architecture but also by the regulatory regimes under which they operate. These mandates, in particular, require rigorous examination of four aspects of data quality — accuracy, completeness, consistency, and timeliness — and detailed documentation of how data arrives in its final form in the repository. Although data monitoring, alerting, assessment, and remediation are well established, provenance capture remains an area ripe for further investment.
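A minimal, illustrative sketch (not from the paper) of checks along the four data-quality dimensions named above, completeness, consistency, timeliness, and accuracy, plus a simple provenance record attached to each row. The field names, thresholds, and source label are assumptions for demonstration.

from datetime import date

rows = [
    {"trade_id": "T1", "notional": 1_000_000, "ccy": "USD", "as_of": date(2020, 12, 24)},
    {"trade_id": "T2", "notional": None, "ccy": "usd", "as_of": date(2020, 11, 2)},
]

def quality_report(row, reporting_date):
    return {
        "completeness": all(row[f] is not None for f in ("trade_id", "notional", "ccy")),
        "consistency": row["ccy"] is not None and row["ccy"].isupper(),   # currency code format check
        "timeliness": row["as_of"] is not None and (reporting_date - row["as_of"]).days <= 5,
        "accuracy": row["notional"] is not None and row["notional"] > 0,  # stand-in range check
    }

def with_lineage(row, source):
    """Attach a simple provenance record describing where the row came from."""
    return {**row, "_lineage": {"source": source, "loaded_on": str(date.today())}}

for r in rows:
    print(r["trade_id"], quality_report(r, date(2020, 12, 27)), with_lineage(r, "core-banking-extract")["_lineage"])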