Open Access December 22, 2025

Reimagining Mathematical Modeling for a Responsive and Integrated Future in Infectious Disease Epidemiology

Mathematical modeling plays a central role in infectious disease epidemiology, shaping outbreak response strategies and informing public health policy. The COVID-19 pandemic demonstrated the value of these models but also exposed persistent limitations related to data fragility, lack of transparency, limited stakeholder engagement, and insufficient consideration of social and political contexts. Rather than critiquing modeling as a discipline, this perspective argues for a reorientation of infectious disease modeling toward a more responsive, equity-centered, and participatory paradigm. We propose a conceptual framework built on three interrelated principles: adaptability through real-time data integration, transparency via open-source and reproducible practices, and relevance through interdisciplinary and co-produced model design. Drawing on illustrative examples from COVID-19 and dengue control efforts, we highlight how integrating behavioral dynamics, local knowledge, and policy feedback can improve model usefulness and public trust. Reconceptualizing models as dynamic systems of inquiry rather than static forecasting tools can enhance decision-making and promote more equitable and effective responses to future public health emergencies.
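The adaptability principle described above — folding behavioral dynamics and real-time feedback into compartmental models — can be illustrated with a minimal SIR sketch in which the effective contact rate falls as prevalence rises. All parameter values and the feedback form are illustrative assumptions, not taken from the article.

```python
def simulate_sir(beta0, gamma, i0=0.001, days=200, feedback=0.0):
    """Euler-integrated SIR model (fractions of the population) with an
    optional behavioral feedback term: the effective contact rate shrinks
    as prevalence rises, a crude stand-in for real-time behavior change.
    Returns (peak prevalence, final recovered fraction)."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        beta = beta0 / (1.0 + feedback * i)  # prevalence-dependent contacts
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, r

# Same pathogen, with and without behavioral adaptation (illustrative values)
peak_static, final_static = simulate_sir(beta0=0.4, gamma=0.1)
peak_adapt, final_adapt = simulate_sir(beta0=0.4, gamma=0.1, feedback=50.0)
```

Comparing the two runs shows the qualitative point the abstract makes: a model that ignores behavioral response will overstate both the epidemic peak and the final attack rate relative to one that adapts to incoming prevalence signals.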
Brief Review
Open Access April 10, 2025

Advancements in Pharmaceutical IT: Transforming the Industry with ERP Systems

The pharmaceutical industry is undergoing a profound transformation driven by advancements in Information Technology (IT), with Enterprise Resource Planning (ERP) systems playing a pivotal role in reshaping operations. These systems offer integrated solutions that streamline key business processes, such as production, inventory management, supply chain optimization, regulatory compliance, and data integration, contributing significantly to operational efficiency and organizational agility. This paper explores the evolution and impact of ERP systems within the pharmaceutical sector, highlighting their contributions to overcoming the industry’s inherent challenges, including complex regulatory requirements, the need for accurate and real-time data, and the demand for supply chain resilience. The integration of cloud-based ERP solutions, the incorporation of emerging technologies like Artificial Intelligence (AI), Machine Learning (ML), and the Internet of Things (IoT), and enhanced data analytics capabilities have revolutionized pharmaceutical IT. These advancements not only reduce operational costs, improve forecasting accuracy, and enhance collaboration but also ensure compliance with stringent global regulations, such as Good Manufacturing Practices (GMP) and FDA guidelines. Moreover, ERP systems have been instrumental in managing the pharmaceutical supply chain, ensuring product traceability, and improving inventory control and order fulfillment processes. This manuscript examines how ERP systems enable pharmaceutical companies to maintain high standards of product quality, improve decision-making, and ensure the safety and efficacy of drugs through robust tracking and auditing mechanisms. A case study of a pharmaceutical company that implemented an ERP system demonstrates the tangible benefits, including increased operational efficiency, improved compliance rates, and enhanced customer satisfaction. 
However, despite the clear advantages, challenges such as customization complexities, data integration issues, and resistance to change remain. As the pharmaceutical industry continues to evolve, ERP systems will remain a cornerstone of digital transformation, facilitating smarter decision-making, better resource management, and enhanced collaboration across global operations. This paper also identifies future trends, including the potential of AI and blockchain technologies in further strengthening ERP systems and transforming the pharmaceutical landscape.
Review Article
Open Access December 27, 2019

Data Engineering Frameworks for Optimizing Community Health Surveillance Systems

Abstract A Changing World Demands Optimized Health Surveillance Systems – and How Data Engineering Can Help There is a growing urgency to manage the public health and emergency response practices effectively today, in light of complex and emerging health threats. Fortunately, a host of new tools, including big and streaming data sources, methods such as machine learning, new types of hardware like [...] Read more.
There is a growing urgency today to manage public health and emergency response practices effectively, in light of complex and emerging health threats. Fortunately, a host of new tools has emerged, including big and streaming data sources, methods such as machine learning, new types of hardware such as blockchain or secure enclaves, and new means of data storage and retrieval. With these innovations, however, comes a grand challenge: how to blend them with, and adapt them to, traditional public health practices. The long-standing infrastructures and protocols that protect and ensure the welfare of communities need to change, or at least to be updated, to sustain their impact on the health outcomes and community wellbeing they were designed to fortify. It is in this vein that this essay is written. The investigation queries what, in particular, the influences may be of the emerging cornucopia of new data engineering frameworks that are either being developed specifically for health surveillance and wellness, or that can be co-opted from devices and services already thriving in the current market and research milieu. Knowing what these influences may be could aid in shaping their uptake and spread, ensuring their beneficial impact on the communities that stand to gain the most. The essay is divided into several key segments. After this introduction, section two details the research methods. Section three reviews the maximum health-outcome potential of these novel frameworks. Section four takes a more critical approach, addressing how the success of these methods may be hindered and identifying future research avenues. Lastly, the conclusion suggests actions to support the implementation of these frameworks and offers thoughts for further research following these inquiries [1].
Case Report
Open Access December 27, 2020

Optimizing Unclaimed Property Management through Cloud-Enabled AI and Integrated IT Infrastructures

With unclaimed property assets reaching record levels, businesses have become, in some cases, overwhelmed and hamstrung by stagnant, unoptimized processes. That sentiment is compounded by ever-evolving regulatory changes, resulting in organizations struggling to hit compliance deadlines while delivering an optimal claimant experience. Often, early systems had periods of short-term success but are on the verge of obsolescence, resulting in stressed workflows and cumbersome integrations. Deploying an integrated IT infrastructure, supported by cloud-enabled AI, represents the quickest path to modernizing unclaimed property management. A fully integrated IT infrastructure is crucial to optimize the management of unclaimed property [1]. When lone solutions exist across an organization, companies miss out on automation opportunities generated through the interconnectedness of systems and data. AI presents organizations with the opportunity to traverse these gaps, enabling a vast library of applications to improve the perturbed workflows of unclaimed property teams. Automated data extraction, document comparison, fraudulent claim detection, and workflow completion analysis are just a few popular applications well suited for the unclaimed property space. In addition to the lagging technology currently deployed by many organizations, the unclaimed property landscape itself is evolving. Compliance issuance, asset availability, rates, the ability to collect fraudulently posted claims, and the claimant experience have all become hot-button items that are now front of mind for regulation agencies and businesses alike. Issuing duplication letters in a compliant manner, accommodating claimant inquiries regarding held assets, and managing, processing, and understanding the operational impact of rate changes are vexing problems many organizations now find themselves playing catch-up to address. 
The opportunity posed by cloud-enabled AI is furthered by economic, regulatory, and reporting-cycle pressures on unclaimed property teams to do more with the same or fewer resources. It is no longer simply a case of hitting the audit deadline and checking off a box, but an emerging priority for businesses on all sides of the market, from Fortune 500 to mid-market firms. In-house shared-service teams are comfortable monitoring and curating business data; however, unclaimed property is unknown territory, with a learning curve, compliance gaps, and operational holes that, if ignored, stand to scale up exponentially. The combined fallout from regulatory changes and the recent pandemic has only made the situation riskier, with increased volatility in balancing time-sensitive tasks against stringent regulatory deadlines and growing claimant outreach.
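One of the applications named above, document comparison in claim review, can be sketched with a simple token-overlap similarity check that flags near-duplicate claim submissions for human review. The similarity measure, the 0.8 threshold, and the sample claims are illustrative assumptions; a production system would use far more robust matching.

```python
def jaccard_similarity(doc_a: str, doc_b: str) -> float:
    """Token-set Jaccard similarity between two claim documents."""
    a, b = set(doc_a.lower().split()), set(doc_b.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def flag_duplicate_claims(claims, threshold=0.8):
    """Return index pairs of claims whose similarity warrants review."""
    flagged = []
    for idx_a in range(len(claims)):
        for idx_b in range(idx_a + 1, len(claims)):
            if jaccard_similarity(claims[idx_a], claims[idx_b]) >= threshold:
                flagged.append((idx_a, idx_b))
    return flagged

claims = [
    "claim for account 123 owner john smith",
    "claim for account 123 owner john smith urgent",
    "unrelated safe deposit box filing",
]
flagged = flag_duplicate_claims(claims)
```

Pairs above the threshold are routed to a reviewer rather than auto-rejected, which keeps the human in the loop for the fraud-detection decisions the abstract describes.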
Review Article
Open Access December 29, 2020

Enhancing Government Fiscal Impact Analysis with Integrated Big Data and Cloud-Based Analytics Platforms

While several application domains are exploiting the added value of analytics over various datasets to obtain actionable insights and drive decision making, the public policy management domain has not yet taken full advantage of the potential of such analytics and data models. To this end, in this paper the authors present an overall architecture of a cloud-based environment that facilitates data retrieval and analytics, as well as policy modelling, creation and optimization. The environment enables data collection from heterogeneous sources, linking and aggregation, complemented with data cleaning and interoperability techniques. An innovative approach for analytics as a service is introduced and linked with a policy development toolkit, an integrated web-based environment that fulfils the requirements of the public policy ecosystem stakeholders [1]. Large information databases on various public issues exist, but their usage for public policy formulation and impact analysis has so far been limited, as no cloud-based service ecosystem exists to facilitate their efficient exploitation. With the increasing availability and importance of both big and traditional public data, the need to extract, link and utilize such information efficiently has arisen. Current data-driven web technologies and models are not aligned with the needs of this domain; therefore, potential candidates for big data, cloud-based and service-oriented public policy analysis solutions should be investigated, piloted and demonstrated [2]. This paper presents the conceptual architecture of such an ecosystem, based on the capabilities of state-of-the-art cloud and web technologies as well as the requirements of its users.
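The pipeline steps described — collection from heterogeneous sources, linking, cleaning, and aggregation — can be sketched as a small merge-and-aggregate routine over two hypothetical datasets. The field names and records are invented for illustration and do not come from the paper.

```python
from collections import defaultdict

def link_and_aggregate(spending, demographics):
    """Link two heterogeneous datasets on a shared region key, drop records
    that fail a basic cleaning rule, and aggregate spending per capita."""
    population = {d["region"]: d["population"] for d in demographics}
    totals = defaultdict(float)
    for rec in spending:
        if rec["amount"] < 0 or rec["region"] not in population:
            continue  # cleaning: discard negatives and unlinkable records
        totals[rec["region"]] += rec["amount"]
    return {region: totals[region] / population[region]
            for region in totals}

spending = [
    {"region": "north", "amount": 1000.0},
    {"region": "north", "amount": 500.0},
    {"region": "south", "amount": 800.0},
    {"region": "south", "amount": -50.0},    # bad record: dropped
    {"region": "unknown", "amount": 300.0},  # unlinkable: dropped
]
demographics = [
    {"region": "north", "population": 300},
    {"region": "south", "population": 200},
]
per_capita = link_and_aggregate(spending, demographics)
```

The per-capita figures produced by the linkage are the kind of derived indicator a policy toolkit would then feed into impact analysis.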
Review Article
Open Access December 27, 2020

Building Foundational Data Products for Financial Services: An MDM-Based Approach to Customer and Product Data Integration

Imagine a consumer financial services company with 20 million customers. Its sales and marketing organizations collaborate across product lines, deploying hundreds of marketing campaigns each quarter that aim to increase customer product usage and/or cross-buying of products. Each campaign is based on forecasts of customer responses derived from predictive models updated every quarter. The goals of these models are to achieve large return on investment ratios and to maximize contribution to local profit centers. What’s important is that their modeling is based only on data created, curated and maintained by these marketing organizations. The difference today is that the modeling is no longer based solely on a small number of response-determined variables that are constantly assessed in terms of importance. A quarterly campaign update generates hundreds of statistical models — involving campaign responses, purchase-lag time, the relative magnitude of the direct effect, and the cross-buying effects — using thousands of variables, including customer demographics, life stage, product transactions, household composition, and customer service history. It’s a network of models, not just a table of variable-by-residual importance values. But that’s only part of the story of data products. The predictive modeling utilized by these campaign plans is based on analytics and data preparation, which are data products in their most diminutive form. These products would be even more elementary were they not crafted quarterly by highly skilled, experienced modelers using advanced software and processes. Most companies have enough data to create models that contain not simply hundreds of variables, but thousands, so that the focus can return to information instead of data reduction. These models largely replace the internal econometric models previously used to produce advanced forecasts in the absence of campaign modeling. 
People used these forecasts to simulate ROI and contribution projections for the planned campaigns. Previously, reliance on econometrically forecast ROI-guideline contribution values reduced dependence on the marketing campaign modelers, reflecting a lack of trust in their predictive ability.
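The campaign workflow sketched above — fit a response model on customer variables, then rank customers for targeting — can be illustrated with a tiny logistic model trained by stochastic gradient descent. The features (tenure, products held) and the toy data are invented for illustration and do not come from the article.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_response_model(X, y, lr=0.1, epochs=500):
    """Fit a logistic response model P(respond | features) by SGD."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Predicted response probability for one customer."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Toy data: [tenure_years, products_held]; 1 = responded to a past campaign.
X = [[1, 1], [2, 1], [8, 3], [9, 4], [1, 2], [7, 3]]
y = [0, 0, 1, 1, 0, 1]
w, b = fit_response_model(X, y)
ranked = sorted(range(len(X)), key=lambda i: -score(w, b, X[i]))
```

A real campaign update would fit hundreds of such models over thousands of variables, as the abstract notes, but the targeting logic — score, rank, select — is the same.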
Review Article
Open Access December 18, 2020

Intelligent Supply Chain Ecosystems: Cloud-Native Architectures and Big Data Integration in Retail and Manufacturing Operations

The supply chain ecosystem plays a critical role in the success or failure of organizations, markets, and economies. Supply chain ecosystems are broadly defined as supply chain organizations and their collaborators. Today's combined challenges of pandemic shutdowns, rising internet usage, and mounting climate-change concerns demand that the supply chain ecosystem connect better with customers, when and how they want, providing products and services with high availability and zero defects, while collaborating to reduce transportation and production risks and, often at the same time, reducing operational costs and carbon footprints. Addressing these challenges, this work explores the delivery capabilities of cloud-native architectures to enable the big data integration and analytics needed to grow smarter supply chain ecosystems. It describes what smart supply chain ecosystems are and how they plan to grow their technology and integration capabilities. Discussing industry-leading advanced-technology and manufacturing producer ecosystems, it explains how their technology collaboration and investment plans are driven by climate-change and job-creation goals. Against this background, the work examines the new digital reality of customer-driven experiences and economies, which demand cloud-native and intelligent technology partnerships to deliver climate objectives, operational responsiveness, and compatibility without trading economies of scale for economies of integration. The final objective of this paper is to share key ideas about balancing growing direct-to-consumer customer-service business models with collaborative investment by market and industry, promoting an intelligent supply chain ecosystem foundation that helps participating countries survive and thrive in the digital economy.
Review Article
Open Access December 24, 2022

Web-Centric Cloud Framework for Real-Time Monitoring and Risk Prediction in Clinical Trials Using Machine Learning

Advances in web-centric cloud computing have facilitated the establishment of an integrated cloud environment connecting a wide variety of clinical trial stakeholders. A web-centric cloud framework is proposed for real-time monitoring and risk prediction during clinical trials. The framework focuses on identifying relevant datasets, developing a data-management interface, and implementing machine-learning algorithms for data analysis. Detailed descriptions of the data-management interface and the machine-learning processes are provided, targeting active clinical trials with therapeutic uses in cancer. Demonstrations utilize publicly available clinical-trial data from the ClinicalTrials.gov repository. The real-time monitoring and risk prediction systems were assessed by developing five supervised-classification-machine-learning models for trial-status prediction and six unsupervised models for patient-safety-profile assessment, each representing a different phase of the clinical-trial process. All supervised models yielded high accuracy and area-under-the-curve values at the testing stage, while the unsupervised models demonstrated practical applicability. The results underscore the advantages of using the trial-status algorithm, the patient-safety-profile model, and the proposed framework for performing real-time monitoring and risk prediction of clinical trials.
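The reported evaluation metric — area under the ROC curve for the trial-status classifiers — can be computed with a small rank-based routine (the Mann-Whitney formulation). The labels and scores below are invented placeholders, not data from ClinicalTrials.gov.

```python
def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that a
    randomly chosen positive outranks a randomly chosen negative, with ties
    counted as half a win."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = trial completed, 0 = terminated early,
# scores = a model's predicted probability of completion.
labels = [1, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.35, 0.1]
auc = roc_auc(labels, scores)
```

Because this formulation depends only on the ranking of scores, it matches the threshold-sweep definition of AUC and handles tied scores gracefully.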
Review Article
Open Access December 21, 2021

Optimizing Data Warehousing for Large Scale Policy Management Using Advanced ETL Frameworks

Data warehousing is a technique for collecting, managing, and presenting data to help people analyze and use that data effectively. It involves a large database designed to support management-level staff by providing all the relevant historical data for analysis. This chapter begins with a definition of data warehousing, followed by an overview of large-scale policy management to highlight the need for data warehousing. Next, an overview of an ETL framework is presented, along with a discussion of advanced ETL techniques. The chapter concludes with an outline of performance optimization techniques for data warehousing. Data warehousing is considered a key enabler for efficient reporting and analysis, with implementation choices ranging from cost-effective desktop systems to large-scale, mission-critical data marts and warehouses containing petabytes of data. Extract, transform, and load (ETL) systems remain one of the largest cost and effort areas within data warehouse development projects, requiring significant planning and resources to build, manage, and monitor the flow of data from source systems into the data warehouse. The technology and techniques used for ETL can greatly influence the success or failure of a data warehouse. Complex business requirements for data cleansing, loading, transformation, and integration have intensified, while operational plans for real-time and near-real-time reporting add additional challenges. Parallel loading mechanisms, incremental data loading, and runtime update and insert strategies not only improve ETL performance but also optimize data warehousing performance, particularly for large-scale policy management.
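The incremental-loading and upsert strategies mentioned above can be sketched as a watermark-driven merge: extract only source rows changed since the last successful load, then insert or update them in the target by key. The row shape and the watermark column are illustrative assumptions; real ETL engines add batching, parallelism, and error handling on top of this pattern.

```python
def incremental_load(source_rows, warehouse, last_watermark):
    """Extract rows modified after the watermark and upsert them by key.
    `source_rows` are dicts with 'id', 'updated_at' (int timestamp) and
    payload fields; `warehouse` is a dict keyed by 'id', standing in for
    the target table. Returns the new watermark."""
    new_watermark = last_watermark
    for row in source_rows:
        if row["updated_at"] <= last_watermark:
            continue  # unchanged since last run: skip (incremental extract)
        warehouse[row["id"]] = dict(row)  # insert new key or update in place
        new_watermark = max(new_watermark, row["updated_at"])
    return new_watermark

warehouse = {1: {"id": 1, "updated_at": 50, "policy": "old terms"}}
source = [
    {"id": 1, "updated_at": 100, "policy": "revised terms"},  # changed: update
    {"id": 2, "updated_at": 150, "policy": "new policy"},     # new: insert
    {"id": 3, "updated_at": 40, "policy": "stale"},           # old: skipped
]
watermark = incremental_load(source, warehouse, last_watermark=90)
```

Persisting the returned watermark after each successful run is what keeps the next extract incremental rather than a full reload.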