Open Access February 06, 2026

Predictive Modeling of Public Sentiment Using Social Media Data and Natural Language Processing Techniques

Abstract
Social media platforms like X (formerly Twitter) generate vast volumes of user-generated content that provide real-time insights into public sentiment. Despite the widespread use of traditional machine learning methods, their limitations in capturing contextual nuances in noisy social media text remain a challenge. This study leverages the Sentiment140 dataset, comprising 1.6 million labeled tweets, and develops predictive models for binary sentiment classification using Naive Bayes, Logistic Regression, and the transformer-based BERT model. Experiments were conducted on a balanced subset of 12,000 tweets after comprehensive NLP preprocessing. Evaluation using accuracy, F1-score, and confusion matrices revealed that BERT significantly outperforms traditional models, achieving an accuracy of 89.5% and an F1-score of 0.89 by effectively modeling contextual and semantic nuances. In contrast, Naive Bayes and Logistic Regression demonstrated reasonable but consistently lower performance. To support practical deployment, we introduce SentiFeel, an interactive tool enabling real-time sentiment analysis. While resource constraints limited the dataset size and training epochs, future work will explore full corpus utilization and the inclusion of neutral sentiment classes. These findings underscore the potential of transformer models for enhanced public opinion monitoring, marketing analytics, and policy forecasting.
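The "comprehensive NLP preprocessing" mentioned above is not detailed in the abstract; a minimal sketch of typical tweet cleaning, with illustrative regex rules that are assumptions rather than the study's actual pipeline, might look like this:

```python
import re

def preprocess_tweet(text: str) -> str:
    """Normalize a raw tweet for sentiment modeling (illustrative rules)."""
    text = text.lower()
    text = re.sub(r"https?://\S+|www\.\S+", "", text)   # strip URLs
    text = re.sub(r"@\w+", "", text)                    # strip @mentions
    text = re.sub(r"#", "", text)                       # keep hashtag words, drop '#'
    text = re.sub(r"[^a-z\s]", " ", text)               # drop punctuation/digits
    return re.sub(r"\s+", " ", text).strip()            # collapse whitespace

print(preprocess_tweet("@user I LOVE this!! #happy https://t.co/xyz"))
# → "i love this happy"
```

The cleaned text would then feed a bag-of-words vectorizer for Naive Bayes and Logistic Regression, or a subword tokenizer for BERT.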
Article
Open Access April 10, 2025

Advancements in Pharmaceutical IT: Transforming the Industry with ERP Systems

Abstract
The pharmaceutical industry is undergoing a profound transformation driven by advancements in Information Technology (IT), with Enterprise Resource Planning (ERP) systems playing a pivotal role in reshaping operations. These systems offer integrated solutions that streamline key business processes, such as production, inventory management, supply chain optimization, regulatory compliance, and data integration, contributing significantly to operational efficiency and organizational agility. This paper explores the evolution and impact of ERP systems within the pharmaceutical sector, highlighting their contributions to overcoming the industry’s inherent challenges, including complex regulatory requirements, the need for accurate and real-time data, and the demand for supply chain resilience. The integration of cloud-based ERP solutions, the incorporation of emerging technologies like Artificial Intelligence (AI), Machine Learning (ML), and the Internet of Things (IoT), and enhanced data analytics capabilities have revolutionized pharmaceutical IT. These advancements not only reduce operational costs, improve forecasting accuracy, and enhance collaboration but also ensure compliance with stringent global regulations, such as Good Manufacturing Practices (GMP) and FDA guidelines. Moreover, ERP systems have been instrumental in managing the pharmaceutical supply chain, ensuring product traceability, and improving inventory control and order fulfillment processes. This manuscript examines how ERP systems enable pharmaceutical companies to maintain high standards of product quality, improve decision-making, and ensure the safety and efficacy of drugs through robust tracking and auditing mechanisms. A case study of a pharmaceutical company that implemented an ERP system demonstrates the tangible benefits, including increased operational efficiency, improved compliance rates, and enhanced customer satisfaction. 
However, despite the clear advantages, challenges such as customization complexities, data integration issues, and resistance to change remain. As the pharmaceutical industry continues to evolve, ERP systems will remain a cornerstone of digital transformation, facilitating smarter decision-making, better resource management, and enhanced collaboration across global operations. This paper also identifies future trends, including the potential of AI and blockchain technologies in further strengthening ERP systems and transforming the pharmaceutical landscape.
Review Article
Open Access March 22, 2025

Enhancing Scalability and Performance in Analytics Data Acquisition through Spark Parallelism

Abstract
Data acquisition serves as a critical component of modern data architecture, with REST API integration emerging as one of the most common approaches for sourcing external data. This study evaluates the efficiency of various methodologies for collecting data via REST APIs and benchmarks their performance. It explores how leveraging the Spark distributed computing platform can optimize large-scale REST API calls, enabling enhanced scalability and improved processing speeds to meet the demands of high-volume data workflows.
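Spark itself is not reproduced here, but the fan-out pattern the study exploits can be sketched in plain Python with a thread pool; `fetch` is a hypothetical stub standing in for a real REST call, and in the study this same fan-out would run across Spark executors over partitioned URL lists:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> dict:
    # Stand-in for a real REST call (e.g., an HTTP GET returning JSON);
    # stubbed out so the sketch is self-contained and offline.
    return {"url": url, "status": 200}

def collect(urls, max_workers=8):
    """Fan out API calls concurrently; Spark achieves the same pattern
    at cluster scale by mapping URL partitions across executors."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))

results = collect([f"https://api.example.com/items/{i}" for i in range(4)])
```

The thread pool illustrates the concurrency idea only; the scalability gains reported above come from distributing this fan-out across a Spark cluster rather than a single process.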
Review Article
Open Access November 07, 2024

Optimizing Pharmaceutical Supply Chain: Key Challenges and Strategic Solutions

Abstract
Pharmaceutical supply chains are critical to ensuring the availability of safe and effective medications, yet they face numerous challenges that can jeopardize public health. This article provides a comprehensive analysis of the key issues impacting pharmaceutical supply chains, including regulatory compliance, demand forecasting, supply chain visibility, quality assurance, and geopolitical risks. Regulatory compliance remains a significant concern due to the stringent guidelines imposed by authorities such as the FDA and EMA, which can lead to increased operational costs and time delays. Additionally, traditional demand forecasting methods often fail to accurately predict fluctuations in drug demand, resulting in stockouts or excess inventory. Limited supply chain visibility further complicates these challenges, hindering timely decision-making and operational efficiency. Quality assurance is paramount, as maintaining the integrity of pharmaceutical products throughout the supply chain is crucial to preventing costly recalls and ensuring patient safety. Moreover, the globalization of supply chains introduces vulnerabilities to geopolitical risks, trade disputes, and natural disasters. In response to these issues, this article outlines strategic recommendations for optimizing pharmaceutical supply chains. These include leveraging advanced analytics and IoT technologies to enhance demand forecasting and visibility, strengthening compliance through automated systems and training, fostering collaboration among stakeholders, implementing robust risk management frameworks, and investing in quality management systems. By adopting these strategies, pharmaceutical companies can enhance the efficiency and resilience of their supply chains, ultimately ensuring the continuous availability of essential medications for patients worldwide. 
This analysis serves as a critical resource for industry professionals seeking to navigate the complexities of pharmaceutical supply chains in an increasingly dynamic global environment.
Review Article
Open Access November 15, 2023

Predictive Failure Analytics in Critical Automotive Applications: Enhancing Reliability and Safety through Advanced AI Techniques

Abstract
Failure prediction can be achieved through prognostics, which provides timely warnings before failure. Failure prediction is crucial in an effective prognostic system, allowing preventive maintenance actions to avoid downtime. The prognostics problem involves estimating the remaining useful life (RUL) of a system or component at any given time. The RUL is defined as the time from the current time to the time of failure. The goal is to make accurate predictions close to the failure time to provide early warnings. J S Grewal and J. Grewal provide a comprehensive definition of RUL in their paper "The Kalman Filter approach to RUL estimation." A process is a quadruple (X, U, f, P), where X is the state space, U is the control space, P is the set of possible paths, and f represents the transition between states. The process involves applying control values to change the system's state over time.
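The definition above (RUL = time from now to failure) can be made concrete with a small sketch; the linear wear extrapolation below is an illustrative stand-in for a real prognostic estimator such as the Kalman filter approach cited in the text:

```python
def remaining_useful_life(t_now: float, t_failure: float) -> float:
    """RUL at time t is the time remaining from t to the time of failure."""
    return max(t_failure - t_now, 0.0)

def predicted_failure_time(times, wear, threshold):
    """Extrapolate a monotone wear signal linearly to the failure threshold
    (an illustrative stand-in for a proper prognostic estimator)."""
    slope = (wear[-1] - wear[0]) / (times[-1] - times[0])
    return times[-1] + (threshold - wear[-1]) / slope

# Wear grows ~0.02 per hour; failure is declared when wear reaches 1.0.
t_fail = predicted_failure_time([0, 10, 20], [0.0, 0.2, 0.4], threshold=1.0)
rul = remaining_useful_life(20, t_fail)
```

A prognostic system would refresh this estimate as new sensor readings arrive, with the prediction sharpening as the component approaches failure.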
Article
Open Access March 06, 2024

Embedded Architecture of SAP S/4 HANA ERP Application

Abstract
The SAP HANA application is designed to handle transactionally consistent operational workloads while also supporting complex business analytics operations. Technically speaking, the SAP HANA database is made up of several data processing engines that work together within a distributed query processing environment to provide the full range of data processing capabilities. This includes graph and text processing for managing semi-structured and unstructured data within the same system, as well as classical relational data supporting both row- and column-oriented physical representations in a hybrid engine. SAP S/4HANA is the next-generation SAP Business Suite, designed specifically for the SAP HANA platform. Its key features are an intuitive, contemporary user interface (SAP Fiori); planning and simulation options in many conventional transactions; simplification of business processes; significantly improved transaction efficiency; and faster analytics.
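The row- versus column-oriented representations mentioned above can be illustrated with a toy sketch; these plain-Python dictionaries are illustrative stand-ins, not HANA's actual storage structures:

```python
# Row store: one record per row — the layout transactional workloads favor.
rows = [{"id": 1, "qty": 5}, {"id": 2, "qty": 3}]

def to_columns(rows):
    """Pivot row-oriented records into column-oriented arrays."""
    return {key: [row[key] for row in rows] for key in rows[0]}

# Column store: one array per attribute — the layout analytical scans favor.
cols = to_columns(rows)
total_qty = sum(cols["qty"])  # an aggregate touches only the 'qty' column
```

A hybrid engine keeps both layouts (or converts between them) so that the same table can serve single-record updates and whole-column aggregates efficiently.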
Review Article
Open Access February 19, 2024

The use of contemporary Enterprise Resource Planning (ERP) technologies for digital transformation

Abstract
Our lives are becoming more and more digital, and this has an impact on how we work, study, communicate, and interact. Businesses are currently digitally transforming their information systems, processes, culture, and strategies. Existing businesses and economies are severely disrupted by the digital revolution. The Internet of Things, microservices, and mobile services are examples of IT systems with numerous, dispersed, and very small structures that are made possible by digitization. Utilizing the possibilities of cloud computing, mobile systems, big data and analytics, services computing, the Internet of Things, collaborative networks, and decision support, numerous new business prospects have emerged over the years. The logical basis for robust and self-optimizing run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures comes from biological metaphors of living, dynamic ecosystems. This has a significant effect on how digital services and products are designed from a value- and service-oriented perspective. The evolution of enterprise architectures and the shift from a closed-world modeling environment to a more flexible open-world composition establish the dynamic framework for highly distributed and adaptive systems, which are crucial for enabling the digital transformation. This study examines how enterprise architecture has changed over time, taking into account newly established, value-based relationships between digital business models, digital strategies, and enhanced enterprise architecture.
Review Article
Open Access July 29, 2023

Critical Success Factors of Adopting an Enterprise System for Pharmaceutical Drug Traceability

Abstract
Unique identification of packaged pharmaceuticals will be a powerful enabler for advanced analytics initiatives that yield in-depth insight into usage habits, regional access, sales, promotional success, and more. The main objective of this study is to use the enterprise system to prevent and reduce the production of erroneous and counterfeit drugs, which have become a serious threat: counterfeiters damage the reputation of legitimate drug manufacturers by producing and marketing placebo medications that are identical in appearance to the real thing. Because federal government procedures and priorities frequently change over time, most implementations take time. To achieve compliance with numerous federal regulatory authorities, including drug traceability for patient safety, the pharmaceutical industry must implement a systematic procedure in an ERP environment. The goals are to guarantee medical drug traceability and to provide real-time warnings to supply chain stakeholders and regulatory bodies, maximizing the benefit of integrating a drug traceability system into an ERP environment. Additionally, manufacturers are compelled to keep product costs on the higher side due to the heavy burden of unchecked spikes in manufacturing costs. As a result, innovative marketing schemes must be introduced to increase reach to consumers by putting successful strategies into practice.
Review Article
Open Access August 20, 2022

Advancing Predictive Failure Analytics in Automotive Safety: AI-Driven Approaches for School Buses and Commercial Trucks

Abstract
The recent evidence on AI in automotive safety shows the potential to reduce crashes and improve efficiency. Studies used AI techniques like machine learning and predictive analytics models to develop predictive collision avoidance systems. The studies collected data from various sources, such as traffic collision data and shapefiles. They utilized deep learning neural networks and 3D visualization techniques to analyze the data. However, there needs to be more research on AI in school bus and commercial truck safety. This paper explores the importance of AI-driven predictive failure analytics in enhancing automotive safety for these vehicles. It will discuss challenges, required data, technologies involved in predictive failure analytics, and the potential benefits and implications for the future. The conclusion will summarize the findings and emphasize the significance of AI in improving driver safety. Overall, this paper contributes to the field of automotive safety and aims to attract more research in this area.
Review Article
Open Access December 27, 2020

An Effective Predicting E-Commerce Sales & Management System Based on Machine Learning Methods

Abstract
Due to the influence of the Internet, the e-commerce sector has developed rapidly, and most online retail businesses are seeking ways to predict demand for their products. Sales forecasting may help retailers develop a sales strategy that will enhance sales and attract more money and investment. The current research work puts forward a machine learning framework to forecast e-commerce sales for strategic management using a dataset of e-commerce transactions. With 70 percent of the data used for training and 30 percent for testing, three models were produced: Random Forest, Decision Tree, and XGBoost. To evaluate the models, performance measures including R-squared (R²) and Root Mean Squared Error (RMSE) were employed. The XGBoost model was the most accurate in predicting e-commerce sales, with an R² score of 96.3%. This demonstrates the superior capability of the XGBoost algorithm to forecast e-commerce monthly sales more accurately than the other models, and it can assist decision makers in managing inventory and making smart, quick decisions in this rapidly growing e-commerce market. The findings reiterate the importance of using advanced analytics to drive effectiveness and customer experience within the e-commerce sector.
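The two evaluation metrics named above can be computed directly; the sales figures below are hypothetical, not data from the study, and the gradient-boosted models themselves are not reproduced:

```python
def rmse(y_true, y_pred):
    """Root Mean Squared Error: typical magnitude of prediction error."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r_squared(y_true, y_pred):
    """R-squared: share of the variance in y_true explained by y_pred."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical monthly sales (true vs. predicted).
y_true = [100, 150, 200, 250]
y_pred = [110, 140, 195, 260]
score = r_squared(y_true, y_pred)
error = rmse(y_true, y_pred)
```

An R² near 1 means the model explains most of the variation in sales, while RMSE expresses the typical error in the same units as the sales figures.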
Review Article
Open Access October 15, 2022

Big Data and AI/ML in Threat Detection: A New Era of Cybersecurity

Abstract
The unrelenting proliferation of data, entwined with the prevalence of mobile devices, has given birth to an unprecedented growth of information obscured by noise. With the Internet of Things and myriad endpoint devices generating vast volumes of sensitive and critical data, organizations are tasked with extracting actionable intelligence from this deluge. Governments and enterprises alike, even under pressure from regulatory boards, have strived to harness the power of data and leverage it to enhance safety and security, maximize performance, and mitigate risks. However, the adversaries themselves have capitalized on the unequal battle of big data and artificial intelligence to inflict widespread chaos. Therefore, the demand for big data analytics and AI/ML for high-fidelity intelligence, surveillance, and reconnaissance is at its highest. Today, in the cybersecurity realm, the detection of adverse incidents poses substantial challenges due to the sheer variety, volume, and velocity of deep packet inspection data. State-of-the-art detection techniques have fallen short of detecting the latest attacks after a big data breach incident. On the other hand, computational intelligence techniques such as machine learning have reignited the search for solutions for diverse monitoring problems. Recent advancements in AI/ML frameworks have the potential to analyze IoT/edge-generated big data in near real-time and assist risk assessment and mitigation through automated threat detection and modeling in the big data and AI/ML domain. Industry best practices and case studies are examined that endeavor to showcase how big data coupled with AI/ML unlocks new dimensions and capabilities in improved vigilance and monitoring, prediction of adverse incidents, intelligent modeling, and future uncertainty quantification by data resampling correction. 
All of these avenues lead to enhanced robustness, security, safety, and performance of industrial processes, computing, and infrastructures. A view of the future is presented, discussing potential threats arising from the misuse of new technologies across bandwidth, IoT/edge, blockchain, AI, quantum, and autonomous fields. Cybersecurity is again playing out at a pace set by adversaries with low entry barriers and debilitating tools. The need for innovative solutions to defend against the emerging threat landscape, harnessing the power of new technologies and collaboration, is emphasized.
Article
Open Access December 27, 2021

Financial Implications of Predictive Analytics in Vehicle Manufacturing: Insights for Budget Optimization and Resource Allocation

Abstract
Factory owners and vehicle manufacturers increasingly opt for predictive analytics to inform their decisions. While predictive analytics has been proven to provide insights into initiating maintenance measures before a machine actually fails, the choice of models and features can have a significant impact on the budget spent and resources allocated. This means that financially oriented questions need to at least partially guide decisions in the planning phase of data science projects. Data-driven approaches will play an increasingly important role, yet only a few firms have confidently applied logistic regression models for predictive maintenance. Moreover, to the best of available knowledge, data-driven classification models connecting vehicle component failures to the occurrence of delays at the assembly line have not been published. This paper applies a real-world, data-driven approach using classification models in predictive analytics for vehicle manufacturers, thereby linking the financial implications of such data science projects to their results. We expand the existing literature on predictive maintenance and draw on a unique dataset of newly launched vehicle series, presented as-is. Our research context is of interest to researchers and practitioners in the automotive industry who manage and plan final vehicle assembly with just-in-time principles, factoring in the consequences of component failures on the assembly process. Key findings highlight that while minor tweaking of the models is possible, their potential input into decision-making processes for budget optimization is limited.
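The link between a failure classifier's output and budget decisions can be illustrated with a toy expected-cost rule; the probabilities and costs below are hypothetical, not values from the paper:

```python
def expected_costs(p_fail, cost_preventive, cost_downtime):
    """Expected cost of each action given the classifier's failure probability."""
    act = cost_preventive            # replace the component preemptively
    wait = p_fail * cost_downtime    # risk an assembly-line delay instead
    return act, wait

def should_intervene(p_fail, cost_preventive, cost_downtime):
    """Intervene only when preemptive replacement is cheaper in expectation."""
    act, wait = expected_costs(p_fail, cost_preventive, cost_downtime)
    return act < wait

# At 30% failure risk: 500 < 0.3 * 10,000, so intervention pays off.
decision = should_intervene(0.3, 500.0, 10_000.0)
```

Under this framing, a model's financial value depends on how often its predicted probabilities cross the cost-driven decision threshold, which is why modest model tweaks may barely move the budget outcome.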
Review Article
Open Access October 29, 2022

Neural Networks for Enhancing Rail Safety and Security: Real-Time Monitoring and Incident Prediction

Abstract
The growth in demand for rail transportation systems within cities, together with high-speed and long-distance transportation running on a rail network, raises the issues of both rail safety and security. If an accident or an attack occurs, its consequences can be extremely severe. To mitigate the impact of these events, the real-time monitoring of a rail system is required. In that case, the improvements in monitoring can be achieved using artificial intelligence algorithms such as neural networks. Neural networks have been used to achieve real-time incident identification in monitoring the track quality in terms of classifying the graphical outputs of an ultrasonic system working with the rails and track bed, to predict incidents on the rail infrastructure due to transmission channels becoming blocked, and also to attempt scheduling preemptive and preventative maintenance. In terms of forecasting incidents and accidents on board the trains, neural networks have been used to model passenger behavior and optimize responses during a train station evacuation. In tackling the incidents and accidents occurring on rail transport, we contribute with two methodologies to detect anomalies in real-time and identify the level of security risk: at the maintenance level with personnel operating along the railways, and onboard passenger trains. These methodologies were evaluated on real-world datasets and shown to be able to achieve a high accuracy in the results. The results generated from these case studies also reveal the potential for network-wide applications, which could enhance security and safety on railway networks by offering the possibility of better managing network disruptions and more rapidly identifying security issues. The speed and coverage of the information generated through the implementation of these methodologies have implications in utilizing prediction for decision support and enhancing safety and security on board the rail network.
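The paper's own anomaly-detection methodologies are not reproduced here; as a generic baseline for flagging anomalies in a monitored rail signal, a trailing-window z-score test might look like:

```python
def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag points deviating from the trailing-window mean by more than
    `threshold` standard deviations (a generic baseline detector)."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean = sum(hist) / window
        var = sum((x - mean) ** 2 for x in hist) / window
        std = var ** 0.5 or 1e-9          # guard against a flat window
        flags.append(abs(series[i] - mean) / std > threshold)
    return flags

# A sudden spike in an otherwise steady sensor reading is flagged.
flags = zscore_anomalies([1.0, 1.0, 1.0, 1.0, 1.0, 10.0])
```

Neural-network detectors such as those described above replace the fixed statistical rule with a learned model of normal behavior, but the real-time windowed structure is the same.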
Review Article
Open Access October 30, 2022

Towards Autonomous Analytics: The Evolution of Self-Service BI Platforms with Machine Learning Integration

Abstract
Self-service business intelligence (BI) platforms have become essential applications for exploring, analyzing, and visualizing business data in various domains. Here, we envisage that the business intelligence platform will perform automatic and autonomous data analytics with minimal to no user interaction. We aim to offer a data-driven, intelligent, and scalable infrastructure that amplifies the advantages of BI systems and discovers hidden and complex insights from very large business datasets, which a business analyst can miss during manual exploratory data analysis. Towards our future vision of autonomous analytics, we propose a collective machine learning model repository with an integration layer for user-defined analytical goals within the BI platform. The proposed architecture can effectively reduce the cognitive load on users for repetitive tasks, democratizing data science expertise across data workers and facilitating a less experienced user community to develop and use advanced machine learning and statistical algorithms.
Review Article
Open Access November 05, 2022

Application of Neural Networks in Optimizing Health Outcomes in Medicare Advantage and Supplement Plans

Abstract
The growing complexity and variability in healthcare delivery and costs within Medicare Advantage (MA) and Medicare Supplement (Medigap) plans present significant challenges for improving health outcomes and managing expenditures. Neural networks, a subset of artificial intelligence (AI), have shown considerable promise in optimizing healthcare processes, particularly in predictive modeling, personalized treatment recommendations, and risk stratification. This paper explores the application of neural networks in enhancing health outcomes within the context of Medicare Advantage and Supplement plans. We review how deep learning models can be leveraged to predict patient risk, optimize resource allocation, and identify at-risk populations for preventive interventions. Additionally, we discuss the potential for neural networks to improve claims processing, reduce fraud, and streamline administrative burdens. By integrating various data sources, including medical records, claims data, and demographic information, neural networks enable more accurate and efficient decision-making processes. Ultimately, this approach can lead to better patient care, reduced healthcare costs, and improved satisfaction for beneficiaries of these programs. The paper concludes by highlighting the current limitations, ethical considerations, and future directions for AI adoption in the Medicare Advantage and Supplement sectors.
Review Article
Open Access November 16, 2023

Zero Carbon Manufacturing in the Automotive Industry: Integrating Predictive Analytics to Achieve Sustainable Production

Abstract
This forward-looking paper suggests that transitioning the automotive industry towards a zero-carbon ecosystem, from materials to end-of-life, can be accomplished through disruptive zero-carbon manufacturing across all-electric vehicle production technology. To accomplish zero-carbon-emission automotive manufacturing in the vehicle assembly domain, future paradigms must converge on decoupling carbon dioxide emissions from automobile manufacturing and use through design, processing, and manufacturing conditions. The envisioned zero-carbon-emission vehicle manufacturing domain consists of two complementary components: (a) making more efficient use of energy and (b) reducing carbon in the energy used. This paper presents the status of key scientific and technological advancements needed to bring today's manufacturing model to a zero-carbon ecosystem for the entire automotive industry of tomorrow. It suggests the groundbreaking application of dynamic and distributed predictive scheduling algorithms and open sensing and visualization technology to meet zero-carbon-emission vehicle manufacturing goals. Power-aware high-performance computing clusters have recently become a viable solution for sustainable production. Advances in scalable and self-adaptive monitoring, predictive analytics, timeline-based machine learning, and digital replicas of cyber-physical systems are also seen co-evolving in the zero-carbon manufacturing future. These methods are inspired by initiatives to decouple gross domestic product growth from energy-related carbon dioxide emissions. Stakeholders could co-design and implement shared roadmaps to transition the automotive manufacturing sector with relevant societal and environmental benefits. The automated mobility sector offers a program that is an industry-leading example of transforming an automotive production facility to carbon-neutral status.
The conclusions from this paper challenge automotive manufacturers to engage in industry offsetting and carbon tax programs to drive continuous improvement and circular vehicle flows via a multi-directional zero-carbon smart grid.
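As a rough illustration of the dynamic predictive scheduling idea the paper invokes, the sketch below greedily assigns energy-intensive assembly jobs to the forecast hours with the lowest grid carbon intensity, subject to a per-hour energy cap. All job names, energy figures, and intensity values are invented for illustration; this is not the paper's algorithm.

```python
def schedule_low_carbon(jobs, intensity, capacity):
    """jobs: {name: kWh}; intensity: {hour: gCO2/kWh forecast};
    capacity: kWh available per hour. Greedily places the largest
    jobs into the cleanest hours that still have capacity."""
    hours = sorted(intensity, key=intensity.get)  # cleanest hours first
    remaining = {h: capacity for h in hours}
    plan = {}
    for job, kwh in sorted(jobs.items(), key=lambda kv: -kv[1]):
        for h in hours:
            if remaining[h] >= kwh:
                plan[job] = h
                remaining[h] -= kwh
                break
    return plan

# Hypothetical jobs (kWh) and hourly carbon-intensity forecast (gCO2/kWh)
jobs = {"stamping": 120.0, "paint_cure": 80.0, "welding": 60.0}
intensity = {8: 420.0, 12: 180.0, 14: 150.0, 20: 390.0}
plan = schedule_low_carbon(jobs, intensity, capacity=150.0)
emissions = sum(intensity[plan[j]] * jobs[j] / 1000 for j in jobs)  # kg CO2
```

A real scheduler would also model deadlines, machine availability, and distributed negotiation, but the greedy core shows how a carbon forecast turns scheduling into an emissions-minimization problem.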
Review Article
Open Access December 27, 2020

Enhancing Pharmaceutical Supply Chain Efficiency with Deep Learning-Driven Insights

Abstract
The growing complexity of the operating environment urges pharmaceutical innovation. This essay addresses the need to integrate advanced technologies into the pharmaceutical supply chain. It justifies the value proposition and presents a concrete use case for integrating deep learning insights to make data-driven decisions. The supply chain has always been a priority for the pharmaceutical industry; research and development reflects companies' increasing investment in big data strategies, with big data tool adoption projected to grow at a substantial compound annual growth rate. The work presented herein has a preliminary, explorative character, recovering and integrating evidence from partly overlooked practical experience and know-how. The practical relevance of the essay is directed toward practitioners in pharmaceutical production, supply chain management, logistics, and regulatory agencies. The literature has shown a long-term concern for enhanced performance in the pharmaceutical supply chain network. This essay demonstrates the application of deep learning-driven insights to reveal non-evident flow dependencies. The main aim is to present a comprehensive view of deep learning-driven decision support. The supply chain is portrayed holistically, seeking end-to-end visibility. Implications for public policy, such as data equity, are discussed: many countries are protecting their populations and economic growth by building resilience and efficiency to ensure the capacity to move goods across supply chains. The implementation strategy is also covered. The combined effects of reduced variability, matured efficiency, improved reliability (of stochastic flows, understood through deep learning and data), and dampened system noise (through the inclusion of all stakeholders) result in more responsive supply chains for pharmaceutical products.
Future work involves the integration of external data, closing the loop between planning and its application in reality.
Review Article
Open Access December 27, 2021

Predictive Analytics and Deep Learning for Logistics Optimization in Supply Chain Management

Abstract
Managing supply chains efficiently has become a major concern for organizations. One of the important factors to optimize in supply chain management is logistics. The advent of technology and the increase in data availability allow for enhancing the efficiency of logistics in a supply chain. This discussion focuses on blending analytics with innovation in logistics to improve the operations of a supply chain. An approach is presented on how predictive analytics can be used to improve logistics operations. To analyze big data in logistics effectively, an artificial intelligence computational technique, specifically deep learning, is employed. Two case studies demonstrate the practical employability of the proposed technique. This work reveals the power and potential of using predictive analytics in logistics to project various KPI values into the future based on contemporary data from logistics operations; sheds light on the innovative technique of deep learning-based predictive analytics in logistics; and suggests incorporating techniques like deep learning with predictive analytics to develop accurate forecasting in logistics, optimize operations, and prevent disruption in the supply chain. The network of supply chains has become more complex, necessitating the latest technological advancements. The sectors that have gained considerable attention for applying technology to optimize their operations are manufacturing, healthcare, aerospace, and the automotive industry. Comparatively little attention has been paid to the logistics sector, though many works describe how analytics and artificial intelligence can be used in logistics to achieve higher optimization. Significant research has already been done on optimizing logistics operations.
Nevertheless, with the explosive volume of historical data produced by an organization's logistics operations, there is a great opportunity to learn valuable insights from the data accumulated over time for long-term strategic planning. To develop the logistics operations in an organization, the use of historical data is essential to understand trends in the operations. For example, regular maintenance planning and trend-based resource allocation are long-term activities that will not affect logistics operations immediately but can affect the business's strategic planning in the long run. A predictive analysis technique applied to historical logistics data can draw conclusions about future trends in logistics operations. Thus, the technique can be used to prevent disruption of the supply chain.
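A minimal sketch of projecting a KPI forward from historical data, as described above: an ordinary least-squares trend line fit to a hypothetical weekly on-time-delivery series (real deployments would use richer models such as the deep learning approaches the abstract discusses; all figures here are invented).

```python
def fit_trend(values):
    """Least-squares line y = a + b*t over t = 0..n-1 (closed form)."""
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    return y_mean - b * t_mean, b  # intercept, slope

def forecast(values, steps):
    """Extrapolate the fitted trend `steps` periods past the data."""
    a, b = fit_trend(values)
    n = len(values)
    return [a + b * (n + k) for k in range(steps)]

otd = [0.91, 0.92, 0.90, 0.93, 0.94, 0.95]  # six weeks of on-time delivery rate
projection = forecast(otd, 2)               # next two weeks' projected KPI
```

The same fit/forecast split generalizes: swap the trend line for any model trained on the history, and the projected KPI values feed the strategic-planning decisions described above.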
Review Article
Open Access December 27, 2022

Advancing Pain Medicine with AI and Neural Networks: Predictive Analytics and Personalized Treatment Plans for Chronic and Acute Pain Management

Abstract
There is a growing body of evidence that the number of individuals suffering from chronic and acute pain is under-reported, and that the burden on the veteran, aging, athletic, and working populations is rising. Current pain management is limited by our capacity to collaborate with individuals who continue normal daily functions and self-administer pain treatments outside of traditional healthcare appointments and hospital settings. In this review, the current gap in clinical care for real-time feedback and guidance in pain management decision-making for chronic and post-operative pain treatment is defined. We examine recent and future applications of predictive analytics for opioid use after surgery, and the implementation of real-time neural networks for personalized pain management goal setting for individuals on the path to discharge to normal function. Integration of personalized neural networks with longitudinal data may enable the development of future treatment personalization paired with electrical stimulation.
Review Article
Open Access December 27, 2023

Leveraging Artificial Intelligence to Enhance Supply Chain Resilience: A Study of Predictive Analytics and Risk Mitigation Strategies

Abstract
The management of supply chains is increasingly complex. This study provides a comparative cost-benefit analysis of managing various risks. It identifies the financial implications of leveraging artificial intelligence in supply chains to better address risk. Empirical results show a business case for managing some sources of risk more proactively, facilitated through the predictive modeling techniques offered by AI. Across investigation streams, the use of AI results in an average total cost saving ranging from 41,254 to 4,099,617. Findings from our research can inform managers and theorists about the implications of integrating AI technologies to manage risks in the supply chain. Our work also highlights areas for future research. Given the growing interest in sub-second forecasting, our research could be a point of departure for future investigations into the impact of shorter forecasting horizons, such as an intra-day basis. We formulate a conceptual framework that considers how and to what extent performance evaluation metrics vary with differences in the fidelity of predictive models and the importance of factors for identifying risks. We also use a mixed-method approach to demonstrate the applicability of our ideas in practice. Our results illustrate the financial implications of integrating AI predictive tools with business processes. They suggest that real-world companies can circumvent inefficiencies associated with trying to manage many classes of risk via AI-enhanced predictive analytics. As managers need to justify investment to top management, our work supports decision-making by providing a means of conducting trade-off analysis at the tactical level.
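The tactical trade-off described, justifying AI investment against expected risk cost, can be sketched as a simple expected-cost comparison. All probabilities, losses, and control costs below are hypothetical placeholders, not figures from the study.

```python
def expected_cost(p_event, loss, mitigation_cost=0.0, detection_rate=0.0):
    """Expected annual cost of a risk = residual expected loss plus the
    cost of any predictive control; detection_rate is the fraction of
    events the control catches before they turn into losses."""
    residual = p_event * (1 - detection_rate) * loss
    return residual + mitigation_cost

# Hypothetical disruption risk: 8% annual probability, $2M loss if it occurs.
baseline = expected_cost(p_event=0.08, loss=2_000_000)           # no AI control
with_ai = expected_cost(p_event=0.08, loss=2_000_000,
                        mitigation_cost=60_000, detection_rate=0.7)
saving = baseline - with_ai  # the business case the manager presents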
Review Article
Open Access February 22, 2023

Navigating the Pharmaceutical Supply Chain: Key Strategies for Balancing Demand and Supply

Abstract
The pharmaceutical industry is fundamental to global healthcare, providing essential medicines that improve health outcomes and quality of life. However, the demand and supply dynamics within this sector are highly complex, shaped by various factors including demographic changes, evolving disease burdens, technological advancements, regulatory challenges, and economic pressures. This manuscript explores the intricate relationship between pharmaceutical medicine demand and supply, focusing on key strategies that can help companies effectively navigate these challenges. The demand for pharmaceutical products is driven by several factors, such as population growth, the aging population, the rise of chronic diseases, and the emergence of new health threats. Additionally, healthcare accessibility, affordability, and policy changes significantly impact the consumption of medicines, while innovations in medical technologies and therapies create new treatment needs. On the supply side, pharmaceutical companies face challenges related to manufacturing capacity, raw material availability, distribution logistics, and compliance with ever-evolving global regulatory frameworks. To address these challenges, the manuscript discusses strategic approaches to managing both demand and supply in the pharmaceutical sector. Key strategies include advanced demand forecasting through data analytics, optimizing supply chains for efficiency and resilience, implementing just-in-time inventory models, and investing in flexible manufacturing systems. Furthermore, global collaboration and partnerships, as well as effective risk management practices, are highlighted as essential to ensuring the availability of medicines, particularly in times of crisis or global health emergencies. This manuscript also delves into the role of policy advocacy and regulatory harmonization in stabilizing the pharmaceutical market, ensuring that medicines are accessible to all populations. 
In conclusion, the pharmaceutical industry must continually adapt to meet the evolving challenges of demand and supply, embracing innovation and collaboration while maintaining a focus on patient access and global healthcare equity. Through strategic planning and adaptive solutions, the pharmaceutical sector can ensure the continuous availability of critical medicines worldwide, meeting both current and future health needs.
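One common building block of the inventory strategies discussed above is the reorder point with safety stock: order when on-hand stock covers expected lead-time demand plus a buffer against demand variability. The figures below are hypothetical, and z = 1.65 approximates a 95% service level.

```python
import math

def reorder_point(daily_demand, lead_time_days, demand_sd, z=1.65):
    """Reorder point = expected lead-time demand + safety stock,
    where safety stock = z * demand_sd * sqrt(lead_time)."""
    safety_stock = z * demand_sd * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock

# Hypothetical medicine: 400 units/day demand, 9-day supplier lead time,
# daily demand standard deviation of 50 units.
rop = reorder_point(daily_demand=400, lead_time_days=9, demand_sd=50)
```

Raising z raises the service level at the cost of more stock held, the same resilience-versus-efficiency balance the manuscript discusses.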
Case Report
Open Access December 27, 2023

Leveraging Machine Learning Techniques for Predictive Analysis in Merger and Acquisition (M&A)

Abstract
M&A is a strategic approach to business growth through consolidation, gaining market access, strengthening strategic positions, and increasing operational efficiency. To understand the dynamics of M&A, this paper examines aspects such as target firm identification, evaluation, bidding for the target firm, and post-acquisition integration. All forms of M&A, including horizontal, vertical, and conglomerate mergers as well as acquisitions, are discussed in terms of goals and values, including synergy, cost reduction, competitive advantages, and access to better technology. Issues such as cultural assimilation, adherence to regulations, and inaccurate valuation are also addressed. The paper then provides insight into how predictive analytics applies to M&A, using machine learning to improve decision-making with forecasting benefits. In industries including healthcare, education, and construction, the presented predictive models using regression analysis, neural networks, and ensemble techniques help inform decisions. Through time series and real-time data, predictive analytics enables sound M&A strategies, effective risk management, and smooth integration.
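The ensemble idea mentioned above can be illustrated with majority voting: several models score a target firm and the ensemble combines their votes. The "models" here are hand-written threshold rules on hypothetical financial features, purely to show the voting mechanism, not a real screening model.

```python
# Each "model" is a simple rule voting on whether a target looks promising.
def rule_revenue_growth(firm):
    return firm["revenue_growth"] > 0.10

def rule_debt(firm):
    return firm["debt_to_equity"] < 1.5

def rule_margin(firm):
    return firm["operating_margin"] > 0.08

def ensemble_vote(firm, rules):
    """Simple-majority vote across the component models."""
    votes = sum(rule(firm) for rule in rules)
    return votes > len(rules) / 2

rules = [rule_revenue_growth, rule_debt, rule_margin]
firm = {"revenue_growth": 0.14, "debt_to_equity": 2.1, "operating_margin": 0.11}
promising = ensemble_vote(firm, rules)  # two of three rules vote yes
```

In practice the components would be trained regressors or neural networks, and the combination could be weighted or stacked, but the aggregation step works the same way.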
Review Article
Open Access December 27, 2023

Understanding the Fundamentals of Digital Transformation in Financial Services: Drivers and Strategic Insights

Abstract
The current financial services sector is undergoing considerable changes in its operations due to developments in technology and the embrace of digital platforms. This evolution is changing established concepts of business, consumers, and channels of service delivery. Financial services firms are changing the way they work through digital transformation, driven by developments in technology, changes in customer needs, and an increased emphasis on sustainability. Understanding the opportunities, risks, and new trends in digital transformation is the focus of this paper. Opportunities include efficient real-time decision-making, increased transparency, and better process controls, which are balanced against the threats of change management, doubtful organization-technology fit, and high implementation costs. The study also examines recent advancements, including the application of machine learning and artificial intelligence, developments in mobile and online banking, the integration of blockchain, and an increasing focus on security and personalised banking. A literature review yields findings from different studies on rural financial services, the evolution of blockchain, drivers of digital transformation, cloud-based learning approaches, and emerging sustainability practices. All of these results suggest that more strategic planning, analytics, and a stronger focus on ensuring that organisational objectives are met through transformation should be pursued. Hence, these research findings add to the existing literature by determining how innovative and digital technologies are likely to transform the financial services sector and advance sustainability.
Review Article
Open Access November 19, 2022

Analyzing Behavioral Trends in Credit Card Fraud Patterns: Leveraging Federated Learning and Privacy-Preserving Artificial Intelligence Frameworks

Abstract
We investigate and analyze the trends and behaviors in credit card fraud attacks and transactions. First, we perform logical analysis to find hidden patterns and trends, then we leverage game-theoretical models to illustrate the potential strategies of both the attackers and defenders. Next, we demonstrate the strength of industry-scale, privacy-preserving artificial intelligence solutions by presenting the results from our recent exploratory study in this respect. Furthermore, we describe the intrinsic challenges in the context of developing reliable predictive models using more stringent protocols, and hence the need for sector-specific benchmark datasets, and provide potential solutions based on state-of-the-art privacy models. Finally, we conclude the paper by discussing future research lines on the topic, and also the possible real-life implications. The paper underscores the challenges in creating robust AI models for the banking sector. The results also showcase that privacy-preserving AI models can potentially augment sharing capabilities while mitigating liability issues of public-private sector partnerships [1].
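The federated idea behind such privacy-preserving models can be sketched as follows: each bank "trains" on its own transactions and shares only model parameters, which a coordinator averages (FedAvg-style). A one-number statistic stands in for real model weights, and all transaction data below are synthetic; this illustrates the data flow, not the paper's system.

```python
def local_weight(transactions):
    """Toy local 'training': the share of transaction volume that is
    fraudulent, learned from this bank's data alone. Raw transactions
    never leave the bank; only this number is shared."""
    fraud = sum(t["amount"] for t in transactions if t["fraud"])
    total = sum(t["amount"] for t in transactions)
    return fraud / total

def federated_average(weights, sizes):
    """Size-weighted average of the client-submitted weights."""
    n = sum(sizes)
    return sum(w * s for w, s in zip(weights, sizes)) / n

bank_a = [{"amount": 100, "fraud": True}, {"amount": 300, "fraud": False}]
bank_b = [{"amount": 50, "fraud": False}, {"amount": 150, "fraud": True},
          {"amount": 200, "fraud": False}]
global_w = federated_average(
    [local_weight(bank_a), local_weight(bank_b)],
    sizes=[len(bank_a), len(bank_b)])
```

The key property is the one the abstract highlights: the coordinator sees only aggregated parameters, which mitigates the liability issues of sharing raw customer data across institutions (real systems add secure aggregation and differential privacy on top).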
Review Article
Open Access December 29, 2019

Explainable Analytics in Multi-Cloud Environments: A Framework for Transparent Decision-Making

Abstract
The multitude of services and resources available in multi-cloud environments has increased the importance of analytics applications in cloud brokering. These applications can orchestrate services and resources that reside in different domains and require inputs that no single cloud provider could easily acquire. Yet, despite their distinct characteristics, multi-cloud analytics users have no voice in the ranking of services in brokerage marketplaces. In this chapter, we introduce the concept and propose an implementation of explainable analytics to increase transparency and user satisfaction in multi-cloud environments. The criteria that we have identified and measured, summarized into explainable results, allow cloud users to understand the ranking rules, a crucial requirement for trustworthy decision-making. Our proposal accounts for a set of regulations for intelligent systems and targets their specific adaptation and use in multi-cloud environments.
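One minimal form of explainable ranking, in the spirit of what the abstract describes: score each service as a weighted sum over criteria and return the per-criterion contributions alongside the total, so a user can see why a service placed where it did. The criteria, weights, and scores below are hypothetical, not the chapter's actual scheme.

```python
# Hypothetical normalized criteria scores in [0, 1] and broker weights.
WEIGHTS = {"price": 0.4, "latency": 0.35, "compliance": 0.25}

def explain_score(service):
    """Return (total score, per-criterion contributions) so the ranking
    rule is inspectable rather than a black box."""
    parts = {c: WEIGHTS[c] * service[c] for c in WEIGHTS}
    return sum(parts.values()), parts

services = {
    "cloud_a": {"price": 0.9, "latency": 0.6, "compliance": 0.8},
    "cloud_b": {"price": 0.7, "latency": 0.9, "compliance": 0.9},
}
ranked = sorted(services, key=lambda s: explain_score(services[s])[0],
                reverse=True)
```

Exposing `parts` is the transparency step: a user who disputes a ranking can see exactly which criterion drove it, and a broker can let users supply their own weights.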
Review Article
Open Access December 27, 2019

The Role of Neural Networks in Advancing Wearable Healthcare Technology Analytics

Abstract
Neural networks are transforming wearable healthcare technology analytics. These networks can analyze vast amounts of data to help in making decisions concerning patient care. Advancements in deep learning have brought neural networks to the forefront, making data analytics a more straightforward process. This study helps unveil the use of ICT and AI in medical healthcare technology, surveying some industry giants. Wearable healthcare technologies are becoming more popular every day. These technologies facilitate collecting, monitoring, and sharing every vital aspect of the human body necessary for diagnosing and treating an ailment. With the advent of global digitization, health data storage and systematic analysis are taking shape to ensure better diagnostic, preventive, and predictive healthcare. Healthcare analytics powered by neural networks can significantly improve health outcomes, maximizing individuals' potential and quality of life. The breadth and possibilities of connected devices are widening. From personal activity monitoring to quantifying every bit of health statistics, connected devices are making an impact in measurement, management, and manipulation. In healthcare, early diagnosis can be a lifesaver, and data analytics can help in a big way to make predictions that save lives. We are in another phase of the digitization era: neural network and wearable healthcare technology analytics. A neural network can be conceived as an adaptive system made up of a large number of neurons connected in multiple layers; it processes data in a way loosely analogous to the human brain. In most neural networks, an 'input' layer and an 'output' layer are connected through one or more intermediate layers.
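The layered structure described above can be shown with a minimal forward pass: an input layer of wearable readings, one hidden layer, and an output neuron. The weights and readings are arbitrary illustrative values, not a trained model.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each neuron outputs sigmoid(weighted sum of all
    inputs plus its bias)."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

heart_rate, steps = 0.7, 0.2  # normalized wearable readings (input layer)
hidden = layer([heart_rate, steps],
               weights=[[0.5, -0.3], [0.8, 0.1]],  # 2 hidden neurons
               biases=[0.0, -0.2])
risk = layer(hidden, weights=[[1.2, -0.7]], biases=[0.1])[0]  # output neuron
```

Training would adjust the weights and biases from labeled data (e.g., via backpropagation); the forward pass above is the part that runs on each new set of sensor readings.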
Review Article
Open Access December 27, 2019

Predictive Analytics in Biologics: Improving Production Outcomes Using Big Data

Abstract
Biopharmaceuticals, or biologics, are a burgeoning sector in the pharmaceutical industry, predicted to reach $239.4 billion by 2025. This unparalleled growth is often attributed to the enhanced specificity offered by large molecules over small molecules. The large size of the constituent proteins necessitates the continuous implementation of big data predictive analytics to elucidate the most effective candidates in the lead optimization process. These same methodologies can be applied to the augmentation and optimization of the downstream production processes that comprise the majority of the development cost of any biologic, and with the advent of machine learning and automated predictive analytics, this is becoming an increasingly facile task. In this work, big data from cell line generation, product and process design, and large-scale lead validation studies have been used to compare the applicability of simple statistical models against black-box approaches for the rapid acceleration of enzymes to the pilot plant stage. This research can be expanded to exploit the big datasets generated as biologics progress through the development pipeline, further optimizing production outcomes. Over the coming months, data from the project will be used to probe which approaches suit which processes and, as a result, which are more amenable to various economic simulations. The computed optimization objective for a hit must include the cost of acquiring, storing, and analyzing data to construct these predictive models, alongside the expected commercial reward of choosing an optimally ranked candidate. In this vein, perspective must be taken on the probable future price, capabilities, and ownership issues of increasingly sophisticated data analysis software as such superstructures become more frequent.
Decisions made to reduce production costs are frequently described as data-driven, yet truly evaluating production steps requires dynamic energy and economic models to become more commonplace, rather than relying on more economically or energetically costly experiments or production methods. Converting process quality approaches from large questionnaires, risk analyses, and expert opinion-driven methods to statistical, and thus more reliable, approaches is an area of future research for the analytics used herein.
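The optimization objective described above, weighing data costs against the expected commercial reward of an optimally ranked candidate, can be sketched as an expected net value per candidate. Candidate names, success probabilities, and all dollar figures below are hypothetical.

```python
def expected_net_value(p_success, reward, data_cost, dev_cost):
    """Expected reward of advancing a candidate, net of the cost of
    acquiring/storing/analyzing the data behind its predictive model
    and of development itself."""
    return p_success * reward - data_cost - dev_cost

# Hypothetical ranked hits from the predictive model.
candidates = {
    "enzyme_17": expected_net_value(p_success=0.25, reward=40_000_000,
                                    data_cost=500_000, dev_cost=6_000_000),
    "enzyme_03": expected_net_value(p_success=0.10, reward=90_000_000,
                                    data_cost=500_000, dev_cost=6_000_000),
}
best = max(candidates, key=candidates.get)
```

Note the ranking can flip relative to raw reward: a lower-reward candidate with a higher success probability can dominate once data and development costs are charged against the expectation.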
Review Article
Open Access December 27, 2019

Revolutionizing Patient Care and Digital Infrastructure: Integrating Cloud Computing and Advanced Data Engineering for Industry Innovation

Abstract
This work details how the integration of cloud computing and advanced data engineering can innovate and reshape patient care and digital infrastructure. In the healthcare sector, cloud services offer the necessary support to generate digitally-oriented services and service kits. These services can offer high availability, low latency, and on-demand scaling capabilities, while following the strictest data protection laws and regulations. On the other hand, these services can be combined with data engineering techniques to construct an ecosystem that enhances and adds an optimized data layer on any cloud environment. This ecosystem includes technologies to acquire, process, and manage healthcare data while respecting all regulatory obligations and institutions, and can be part of a comprehensive digitalization strategy. The objective is to augment the healthcare services that the industry offers by leveraging healthcare data and AI technologies. Designed services, processes, and technologies can be described either as industry-agnostic services or healthcare-specific services that process and manage electronic healthcare records (EHR). Industry-agnostic services offer a set of tools and methodologies to conduct optimized data experiments. The goal is to exploit any variety, velocity, volume, and veracity of medical data. Healthcare-specific services offer a set of tools and methodologies to connect to any common EHR vendor in a privacy-preserving manner. Participating companies are thus able to hold, share, and make use of healthcare data in real time. The proposed architecture can be transformative for the healthcare industry, opening up and facilitating experimentation on new and scalable service models. The transition to a more digital health approach would help overcome the limits encountered in traditional settings.
Limitations in the availability of healthcare facilities and professionals have underpinned the increasing share of telemedicine in the care process. However, record-keeping for patients who receive care outside of traditional healthcare facilities is often missing and can severely affect the continuity of treatment. Identifying new methods to implement disease prevention and early intervention processes is crucial to avoid more extensive treatment and to support those on multiple lines of therapy. For chronic patients, having a service available that monitors their state of health and intervenes when parameters go outside the desired range is crucial. Yet the same patients depend most heavily on the decisions of care providers; a second opinion might be given remotely, which the patient can access at any time on demand. To address these different kinds of services, an ecosystem with a comprehensive data layer is outlined, able to live and operate seamlessly in any cloud environment. This future work's envisioned outcome is the rapid evolution and redefinition of the European healthcare landscape.
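The chronic-care monitoring service described, intervening when parameters leave the desired range, reduces to a simple check at its core: compare incoming vitals against per-patient target ranges and flag outliers for follow-up. The parameter names, ranges, and readings below are hypothetical.

```python
# Hypothetical per-patient target ranges (low, high).
RANGES = {"glucose_mgdl": (70, 180), "systolic_mmhg": (90, 140)}

def check_vitals(reading, ranges=RANGES):
    """Return the parameters whose values fall outside their target
    range, so the service can trigger an intervention or alert."""
    alerts = {}
    for name, value in reading.items():
        lo, hi = ranges[name]
        if not lo <= value <= hi:
            alerts[name] = value
    return alerts

alerts = check_vitals({"glucose_mgdl": 210, "systolic_mmhg": 125})
```

A production service would layer clinician review, trend analysis, and escalation policies on top, but this threshold check is the intervention trigger the abstract alludes to.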
Review Article
Open Access December 27, 2019

Data Engineering Frameworks for Optimizing Community Health Surveillance Systems

Abstract
A Changing World Demands Optimized Health Surveillance Systems – and How Data Engineering Can Help. There is a growing urgency to manage public health and emergency response practices effectively today, in light of complex and emerging health threats. Fortunately, a host of new tools has emerged, including big and streaming data sources, methods such as machine learning, technologies such as blockchain and secure enclaves, and new means of data storage and retrieval. But with these innovations comes a grand challenge: how to blend them with, and adapt them to, traditional public health practices. The long-standing infrastructures and protocols that protect and ensure the welfare of communities need change, or at least updating, to sustain their impact on the health outcomes and community wellbeing they were designed to fortify. It is in this vein that this essay is written. The investigation here asks what, in particular, the aspects and influences might be of the emerging wealth of new data engineering frameworks that are either being developed specifically for health surveillance and wellness or are available to be co-opted from devices and services already thriving in the current market and research milieu. Knowing these could help shape their uptake and spread, ensuring their beneficial impact on the communities that stand to gain the most. The essay is divided into several key segments. After this introduction, section two details the research methods. The section that follows reviews the maximum health outcome potential of these novel frameworks. Part four takes a more critical approach, addressing how the success of these methods may be hindered and identifying future research avenues.
Lastly, the conclusion suggests actions to best support the implementation of these frameworks and offers thoughts for further research following these inquiries [1].
Case Report
Open Access December 27, 2019

Data-Driven Innovation in Finance: Crafting Intelligent Solutions for Customer-Centric Service Delivery and Competitive Advantage

Abstract
Innovations in computing and communication technologies are reshaping finance. These seismic changes are casting uncertainty over the future of financial services. On one hand, fintech evangelists project a rosy future, asserting that fast-moving algorithms can deliver low-cost financial services intuitively, customized to meet robust consumer expectations. On the other hand, many finance veterans fret that the traditional banking model could be disintermediated, bleeding banks via a ‘death by a thousand cuts’ and reducing them to passive portfolio holders with no direct customer relationship, eclipsed by digital giants that use their enormous troves of customer data to offer banking as a nearly free added service. Amidst the upbeat technological promises and apocalyptic forebodings, two constants are mostly agreed upon. The first is the vital importance of data. Advances in the internet, cloud computing, and record-keeping technologies are producing an ‘exponential growth in the volume and detail of data’. Some of this big data is personal information. Smartphones are deployed in almost all developed and emerging economies, serving as little spies: tracking and recording the location histories, social networks, and app usage of their unsuspecting owners, often with great precision. ‘People are walking data-factories’ in this ‘mobile digital society’. Data ferment from these global exchanges, electronic commerce and communication, and financial transactions. To take Facebook as an example, its users generate 30 million shares a day through updates and posts, and the platform hosts personal information on 2.23 billion users. To the alarm of the uninformed public, much of this information is available for commercial harvest. The second constant is the rise of intelligent solutions.
Consumers today, whether it is disclosed to them or not, are fed tailored clothes, music, film, holiday packages, almost anything one might like, notably through dynamic pricing that varies with individual profiles and through personalized search results. The availability of powerful computers has enabled comparable applications intended to make systems more responsive to customer profiles and desires, or to capitalize on competitive business opportunities. Such changes will transform the financial industry and occupy a prominent position among the mechanisms of policy competition, reshaping the way financial services are delivered and consumed on the demand side.
Review Article
Open Access December 27, 2022

Integrating generative AI into financial reporting systems for automated insights and decision support

Abstract
Generative AI refers to deep learning technology that can automatically produce original text, images, audio, video, and other outputs. With its emerging capabilities, Generative AI can radically change the dynamics of key operational processes in most industries. In this document, we illustrate how Generative AI technologies can be integrated into the Financial Reporting System (FRS) of a corporation. The integration allows the FRS to deliver, on demand, concise and lucid insights to its users on what is happening in the corporation and on different aspects of its performance, such as liquidity, solvency, profitability, organizational structure, and share buy-back decisions. The integration also facilitates what-if analyses of different strategic and tactical decisions taken by management, such as capital budgeting and profit-distribution decisions. The unique added value of these insightful analytics lies in automating responses to the ongoing requests of FRS users. Generative AI capabilities are rapidly expanding. The integration can be applied not only to a corporate FRS but to any FRS at the national or global level delivered by a central bank or an accounting standards setter. Any such FRS can be made into a unique “hub” for the integrated Generative AI technologies. An equally innovative generalized integration could put any corporate process at the center, with its supporting FRS tasks and deliverables in the periphery.
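As a concrete illustration of the kind of request such an integration would automate, the sketch below computes a few of the performance indicators named in the abstract (liquidity, solvency, profitability) and assembles them into a prompt for a generative model. The field names, ratio choices, and report wording are this example's assumptions, not the paper's design, and the model call itself is left out.

```python
# Hypothetical sketch: deriving FRS indicators and assembling a prompt for a
# generative model. All field names and the prompt structure are assumptions.

def financial_ratios(balance: dict) -> dict:
    """Compute illustrative liquidity, solvency, and profitability figures
    from a minimal balance-sheet/income dictionary."""
    return {
        "current_ratio": balance["current_assets"] / balance["current_liabilities"],
        "debt_to_equity": balance["total_liabilities"] / balance["equity"],
        "net_margin": balance["net_income"] / balance["revenue"],
    }

def build_prompt(ratios: dict) -> str:
    """Turn computed ratios into a natural-language request for the
    integrated generative model (the model call itself is out of scope)."""
    lines = [f"- {name}: {value:.2f}" for name, value in sorted(ratios.items())]
    return ("Summarize the corporation's performance for a non-specialist, "
            "given these indicators:\n" + "\n".join(lines))

balance = {"current_assets": 120.0, "current_liabilities": 60.0,
           "total_liabilities": 200.0, "equity": 400.0,
           "net_income": 30.0, "revenue": 300.0}
prompt = build_prompt(financial_ratios(balance))
```

In a full integration, the returned prompt would be sent to the generative model alongside retrieved FRS context, so responses to recurring user requests need no manual report preparation.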
Review Article
Open Access December 27, 2020

Enhancing Regulatory Compliance in Finance through Big Data Analytics and AI Automation

Abstract
This paper shows how Big Data Analytics (BDA) and Artificial Intelligence (AI) automation facilitate regulatory compliance in finance. Regulatory compliance is essential in helping institutions mitigate reputational, litigation, and financial risk. Existing literature reveals several preconditions for compliance, but much of it adopts an internal view of compliance without considering external regulatory frameworks. This research draws on the cognitive model of regulation, which treats regulatory compliance as a social construct. It uses a triangulated research method comprising a literature review, interviews with trade compliance experts, and a questionnaire survey of compliance practitioners to understand how regulation affects compliance and what role ICTs play in implementing it. The findings present a regulatory compliance framework comprising four cognitive stages, together with a conceptual regulatory compliance system showing how BDA and AI automation are applied to mitigate regulatory complexity and enhance regulatory compliance. The conceptual system shows how BDA and AI enable institutions to dynamically assess regulatory risk, automatically monitor compliance, and intelligently predict risk violations, mitigating regulatory complexity and preventing the production of unnecessary documents. The study offers theoretical contributions to understanding regulatory evolution and compliance, and practical implications for understanding how regulation grows more complicated and how the elements of a regulatory compliance system mitigate proliferating regulations. It also provides avenues for future research into competing regulatory mandates and how institutions cope with them. Regulations are important for ensuring compliance and governance in finance and for curbing systemic risk, yet complying with them is difficult due to their growing volume, complexity, and fragmentation.
Institutions use large-scale Information and Communication Technologies (ICTs), such as Big Data Analytics (BDA) and Artificial Intelligence (AI) automation, to monitor compliance and mitigate regulatory complexity. However, less is known about how firms comply with regulation. Most literature does not thoroughly investigate regulatory elements nor explicitly relate them to compliance.
Review Article
Open Access December 27, 2020

Designing Self-Learning Agentic Systems for Dynamic Retail Supply Networks

Abstract
The evolution of supply chains (SC) from a linear to a network structure created opportunities for new processes, product/service offerings, and provider businesses. Rising customer-service expectations have led to the need for innovative SC designs to develop and sustain competitive performance globally. Firms are forced to respond and adapt accordingly, leading to design, network, operational, and performance dynamics. Traditionally, SCs are treated as static structures, focusing solely on design and/or operational optimization. Such perspectives are not viable for SC domains, as they address only a portion of the dynamic problem space, rely on a deterministic assumption of dominant design variables, capitalize on past data to predict future decisions, and offer pre-classified forecasting options with only a limited comprehension of systemic SC elasticity. Novel self-learning agentic systems are proposed that blend the science of SC decisions and dynamics. The designs guide firms seeking to build adaptive SCs using operational decision processes. They address the agentic nature of SCs, embedding computational interaction models of firm SC networks. They contrast stochastic action-taking, and thereby performance outcomes, discovering opportunities for adaptive operational designs of SC tasks. Fine-tuning and meta-learning are new design capabilities that adapt to evolving dynamic environments. Frameworks for behavioral customization and systematic exploration of the design space are provided as user guides. Exemplar designs are also provided to serve as translation templates for users to express operational models of their own contexts. To account for the dynamics of supply chains, agent-based models are increasingly adopted. Such models exhibit SC structure and/or formulation dynamics.
Though existing efforts address only adjacent structural changes, dynamism with respect to tasks is crucial for SC design and operational strategy development. A process modeling library and workflow are proposed for discovering intricate designs of adaptive agentic systems. The library revises Dataflow and Structure, encapsulating the sequencing and context designs of processes. Prompted specifications describe and enact the designs. Applications in SC formulation discovery are provided.
Review Article
Open Access December 26, 2018

Understanding Consumer Behavior in Integrated Digital Ecosystems: A Data-Driven Approach

Abstract
This study aims to achieve a new understanding of how, why, and when consumer behavior is shaped, enacted, and experienced inside and across integrated digital ecosystems related to large-scale trackable goods, all in service of helping marketers optimize their business performance in the new economy. The pioneering understanding begins by exploring what motivates the choices of a homogeneous group of consumers to organize their consumption of national and store brand varieties of consumer packaged goods in a certain manner. Thereafter, the essay explores how, if at all, the other digital activities of consumers across various product-related digital spaces and on various platforms build expertise and interest in these products such that they exert an effect on the purchase choices for these products. The essay then advances to asking how online information seeking, in various product-related digital spaces, on various platforms, and from various sources, and taking place at various points in the purchase journey, affects online-offline dynamics in purchasing these products. Thereafter, the research examines how paid digital communication in various product-related digital spheres and forms, enabled by consumer advertising engagement on various platforms, boosts the offline sales of these products. Finally, by employing a new methodology that combines consumer scanner data, self-reported online activity data, and transaction data collected from an ad-tech partner, the research presents a fresh set of marketing action levers and performance outcomes for chosen products. Along the way, progress is made on four under-investigated topics in the advertising literature: the role of consumer actors and their expertise in online-offline purchasing dynamics, advertising engagement, consumer digital spaces, and consumer digital activity investment.
Review Article
Open Access December 27, 2020

Optimizing Unclaimed Property Management through Cloud-Enabled AI and Integrated IT Infrastructures

Abstract
With unclaimed property assets reaching record levels, businesses have in some cases become overwhelmed and hamstrung by stagnant, unoptimized processes. The problem is compounded by ever-evolving regulatory changes, leaving organizations struggling to hit compliance deadlines while delivering an optimal claimant experience. Often, early systems enjoyed periods of short-term success but are now on the verge of obsolescence, resulting in stressed workflows and cumbersome integrations. Deploying an integrated IT infrastructure, supported by cloud-enabled AI, represents the quickest path to modernizing unclaimed property management. A fully integrated IT infrastructure is crucial to optimizing the management of unclaimed property [1]. When standalone solutions exist across an organization, companies miss out on the automation opportunities generated through the interconnectedness of systems and data. AI gives organizations the opportunity to bridge these gaps, enabling a vast library of applications to improve the strained workflows of unclaimed property teams. Automated data extraction, document comparison, fraudulent claim detection, and workflow completion analysis are just a few popular applications well suited to the unclaimed property space. In addition to the lagging technology currently deployed by many organizations, the unclaimed property landscape itself is evolving. Compliance issuance, asset availability, rates, the ability to recover fraudulently posted claims, and the claimant experience have all become hot-button items that are now front of mind for regulatory agencies and businesses alike. Issuing due diligence letters in a compliant manner, accommodating claimant inquiries about held assets, and managing, processing, and understanding the operational impact of rate changes are vexing problems many organizations now find themselves playing catch-up to address.
The opportunity posed by cloud-enabled AI is heightened by economic, regulatory, and reporting-cycle pressures on unclaimed property teams to do more with the same or fewer resources. It is no longer simply a case of hitting the audit deadline and checking off a box, but an emerging priority for businesses on all sides of the market, from Fortune 500 to mid-market firms. In-house shared-service teams are comfortable monitoring and curating business data; unclaimed property, however, is unfamiliar territory with a learning curve, compliance gaps, and operational holes that, if ignored, stand to scale exponentially. The combined fallout from regulatory changes and the recent pandemic has only made the situation riskier, with increased volatility in balancing time-sensitive tasks against stringent regulatory deadlines and growing claimant outreach.
Review Article
Open Access December 29, 2020

Enhancing Government Fiscal Impact Analysis with Integrated Big Data and Cloud-Based Analytics Platforms

Abstract
While several application domains are exploiting the added value of analytics over various datasets to obtain actionable insights and drive decision making, the public policy management domain has not yet taken full advantage of these analytics and data models. To this end, this paper presents the overall architecture of a cloud-based environment that facilitates data retrieval and analytics, as well as policy modelling, creation, and optimization. The environment enables data collection from heterogeneous sources, linking and aggregation, complemented with data cleaning and interoperability techniques. An innovative approach to analytics as a service is introduced and linked with a policy development toolkit, an integrated web-based environment built to fulfil the requirements of public policy ecosystem stakeholders [1]. Large information databases on various public issues exist, but their usage for public policy formulation and impact analysis has so far been limited, as no cloud-based service ecosystem exists to facilitate their efficient exploitation. With the increasing availability and importance of both public big data and traditional data, the need to extract, link, and utilize such information efficiently has arisen. Current data-driven web technologies and models are not aligned with the needs of this domain; therefore, candidate big data, cloud-based, and service-oriented public policy analysis solutions should be investigated, piloted, and demonstrated [2]. This paper presents the conceptual architecture of such an ecosystem based on the capabilities of state-of-the-art cloud and web technologies and the requirements of its users.
Review Article
Open Access December 27, 2020

Building Foundational Data Products for Financial Services: A MDM-Based Approach to Customer, and Product Data Integration

Abstract
Imagine a consumer financial services company with 20 million customers. Its sales and marketing organizations collaborate across product lines, deploying hundreds of marketing campaigns each quarter that aim to increase customer product usage and/or cross-buying of products. Each campaign is based on forecasts of customer responses derived from predictive models updated every quarter. The goals of these models are to achieve large return-on-investment ratios and to maximize contribution to local profit centers. Importantly, the modeling is based only on data created, curated, and maintained by these marketing organizations. The difference today is that the modeling is no longer based solely on a small number of response-determined variables that are constantly assessed for importance. A quarterly campaign update generates hundreds of statistical models (involving campaign responses, purchase-lag time, the relative magnitude of the direct effect, and cross-buying effects) using thousands of variables, including customer demographics, life stage, product transactions, household composition, and customer service history. It is a network of models, not just a table of variable-by-residual importance values. But that is only part of the story of data products. The predictive modeling behind these campaign plans rests on analytics and data preparation, which are data products in their most elementary form. These products would be even more elementary were they not crafted quarterly by highly skilled, experienced modelers using advanced software and processes. Most companies have enough data to build models containing not merely hundreds of variables but thousands, so that the focus can return to information rather than data reduction. These models largely replace the internal econometric models previously used to produce forecasts in the absence of campaign modeling.
Those forecasts were used to simulate ROI and contribution outcomes for planned campaigns. Previously, reliance on econometrically forecast ROI-guideline contribution values reduced reliance on the marketing campaign modelers, reflecting a lack of trust in their predictive ability.
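A minimal sketch of per-campaign response modeling of the kind described above, using a plain NumPy logistic regression on synthetic data; the campaign names, feature counts, and labels are invented for illustration and do not reflect any company's actual models.

```python
import numpy as np

# Illustrative sketch (not the article's method): one response model fitted
# per campaign with a minimal gradient-descent logistic regression.

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain batch gradient descent on the logistic loss; returns weights."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted response probability
        w -= lr * X.T @ (p - y) / len(y)      # gradient of mean log-loss
    return w

def predict_response(w, X):
    """Predicted campaign-response probability for each customer."""
    return 1.0 / (1.0 + np.exp(-X @ w))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                          # customer features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)        # synthetic responses

# One model per (hypothetical) campaign, as in the network-of-models idea.
models = {c: fit_logistic(X, y) for c in ["cross_sell", "usage_boost"]}
probs = predict_response(models["cross_sell"], X)
```

At production scale, the dictionary of campaigns would hold hundreds of entries refreshed quarterly, with thousands of input variables rather than five.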
Review Article
Open Access December 18, 2020

Intelligent Supply Chain Ecosystems: Cloud-Native Architectures and Big Data Integration in Retail and Manufacturing Operations

Abstract
The supply chain ecosystem plays a critical role in the success or failure of organizations, markets, and economies. Supply chain ecosystems are broadly defined as supply chain organizations and their collaborators. Today's combined challenges of pandemic shutdowns, rising internet usage, and skyrocketing climate-change concerns demand that the supply chain ecosystem better connect with customers, when and how they want, to provide products and services with high availability and zero defects, while collaborating to reduce transportation and production risks and, often at the same time, reducing operational costs and carbon footprints. Addressing these challenges, this work explores the delivery capabilities of cloud-native architectures to enable the big data integrations and analytics needed to grow smarter supply chain ecosystems. It describes what smart supply chain ecosystems are and how they plan to grow their technology and integration capabilities. Discussing industry-leading advanced-technology and manufacturing producer ecosystems, it explains how their technology collaboration and investment plans are driven by climate-change and job-creation goals. With this background, the work examines the new digital reality of customer-driven experiences and economies, which demand cloud-native and intelligent technology partnerships to deliver climate objectives, operational responsiveness, and compatibility, avoiding the trade of economies of scale for economies of integration. The final objective of this paper is to share key ideas about balancing the growing direct-to-consumer customer-service business models with collaborative investment by market and industry, promoting an intelligent supply chain ecosystem foundation that helps participating countries survive and thrive in the digital economy.
Review Article
Open Access December 24, 2022

Web-Centric Cloud Framework for Real-Time Monitoring and Risk Prediction in Clinical Trials Using Machine Learning

Abstract
Advances in web-centric cloud computing have facilitated the establishment of an integrated cloud environment connecting a wide variety of clinical trial stakeholders. A web-centric cloud framework is proposed for real-time monitoring and risk prediction during clinical trials. The framework focuses on identifying relevant datasets, developing a data-management interface, and implementing machine-learning algorithms for data analysis. Detailed descriptions of the data-management interface and the machine-learning processes are provided, targeting active clinical trials with therapeutic uses in cancer. Demonstrations utilize publicly available clinical-trial data from the ClinicalTrials.gov repository. The real-time monitoring and risk prediction systems were assessed by developing five supervised classification machine-learning models for trial-status prediction and six unsupervised models for patient-safety-profile assessment, each representing a different phase of the clinical-trial process. All supervised models yielded high accuracy and area-under-the-curve values at the testing stage, while the unsupervised models demonstrated practical applicability. The results underscore the advantages of using the trial-status algorithm, the patient-safety-profile model, and the proposed framework for real-time monitoring and risk prediction of clinical trials.
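To make the unsupervised side concrete, here is a minimal k-means clustering sketch of the sort that could group patient safety profiles; the two synthetic clusters, three features, and k=2 are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

# Hedged sketch: grouping synthetic "safety profiles" with a minimal k-means.
# Real patient-safety-profile models would use clinical features and a
# validated choice of k; everything here is invented for illustration.

def kmeans(X, k=2, iters=20, seed=0):
    """Lloyd's algorithm: assign points to nearest center, recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):               # guard against empty cluster
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic groups standing in for low- and high-risk profiles.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 3)), rng.normal(3.0, 0.3, (50, 3))])
labels, centers = kmeans(X, k=2)
```

Each cluster center can then be inspected to characterize the safety profile its members share, which is the practical use the abstract attributes to the unsupervised models.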
Review Article
Open Access December 24, 2022

Cloud Native ETL Pipelines for Real Time Claims Processing in Large Scale Insurers

Abstract
Cloud native ETL pipelines support the extract and transform phases of real time claims processing in large scale insurers. The cloud native approach offers dramatic improvements in scalability, reliability, resiliency and agility as well as seamless integration with the diverse set of data sources, destinations and technologies characteristic of large scale insurers. The ETL process extracts data from source systems such as core transaction, fraud, customer and accounting processes, transforms the data to create a usable format for analytics and other applications, and loads the resulting tables into business intelligence or data lake systems for subsequent storage and analysis. By addressing these two phases of the overall ETL process, cloud native ETL pipelines can provide timely, reliable and consistent data to data scientists, actuaries, underwriters and other analysts. Real time processing represents a key priority within the overall claims process: faster, more accurate claim approvals reduce insurer costs, improve customer service and enhance premium pricing. As a result, a variety of claims related use cases are moving from batch to real time.
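A toy sketch of the extract and transform phases for a single claims event, assuming an invented JSON source schema; a real cloud-native pipeline would run these stages as independently scaled services over a stream rather than as in-process functions.

```python
import json
from datetime import datetime, timezone

# Minimal sketch of the extract and transform phases for one claims record.
# The source schema, field names, and high-value threshold are assumptions.

def extract(raw: str) -> dict:
    """Extract: parse one claims event from a source-system JSON payload."""
    return json.loads(raw)

def transform(event: dict) -> dict:
    """Transform: normalize types and derive fields downstream analysts need."""
    amount = float(event["amount"])
    return {
        "claim_id": event["id"],
        "amount": round(amount, 2),
        "received_at": datetime.fromtimestamp(event["ts"], tz=timezone.utc).isoformat(),
        "high_value": amount > 10_000,           # illustrative routing flag
    }

raw = '{"id": "C-1001", "amount": "12500.504", "ts": 1700000000}'
record = transform(extract(raw))
```

The load phase, omitted here, would append `record` to a business-intelligence table or data-lake partition for actuaries and underwriters to query.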
Review Article
Open Access December 02, 2020

Predictive Modeling and Machine Learning Frameworks for Early Disease Detection in Healthcare Data Systems

Abstract
Predictive modeling, supported by machine-learning technology, aims to analyze data in order to guide decision-making towards actions that generate desired value in the future. It encompasses the set of techniques used to build models that estimate the value of a certain variable, predicting a forthcoming event from the past or current values of relevant attributes. In predictive healthcare modeling, the built models represent the relationships among data concerning the customer, provider, production, and other aspects of the healthcare environment, in order to assist decision processes in the prevention of disease and in the planning of preventive actions through the detection of high-risk patients. Contrary to trend analysis, whose goal is to describe past events, predictive models aim to provide useful indications of future events and changes. Predictive healthcare modeling supports actions that try to prevent the manifestation of disease in healthy individuals, or to diagnose as early as possible the incidence of a disease in patients at risk. A sound predictive analysis encompasses not only the model-training task but also data quality, preprocessing, and fusion across the entire implementation lifecycle, to ensure appropriate preparation of the input data. The robustness of a predictive model and its results depends heavily on data quality. Due to the variety of data sources in healthcare environments, preprocessing is essential to remove noise and inconsistencies. The increasing number of endorsed data-exchange standards makes data exchange achievable, but it demands the implementation of a data-governance program. In addition, the influence of the hospital-database architect on the architecture of an early-diagnosis model is important to guarantee appropriate modularity in input formatting.
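The preprocessing step emphasized above can be sketched as a simple cleaning pass; the field names, plausible-range bounds, and de-duplication rule are illustrative assumptions, not clinical guidance.

```python
# Sketch of preprocessing before model training: removing noise and
# inconsistencies. Field names and valid ranges are illustrative only.

def clean_records(records):
    """Drop records with missing or implausible vitals; de-duplicate by id."""
    seen, cleaned = set(), []
    for r in records:
        if r.get("patient_id") in seen:
            continue                              # inconsistent duplicate
        hr = r.get("heart_rate")
        if hr is None or not (20 <= hr <= 250):
            continue                              # missing value or sensor noise
        seen.add(r["patient_id"])
        cleaned.append(r)
    return cleaned

records = [
    {"patient_id": "p1", "heart_rate": 72},
    {"patient_id": "p1", "heart_rate": 75},   # duplicate id, dropped
    {"patient_id": "p2", "heart_rate": 999},  # implausible reading, dropped
    {"patient_id": "p3", "heart_rate": None}, # missing value, dropped
    {"patient_id": "p4", "heart_rate": 61},
]
cleaned = clean_records(records)
```

Only after a pass like this would the cleaned records be fused with other sources and fed to the risk-detection model, which is why the abstract ties model robustness so tightly to data quality.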
Review Article
Open Access December 18, 2021

A Comparative Study of Traditional Reporting Systems versus Real-Time Analytics Dashboards in Enterprise Operations

Abstract
Seamless integration of information in organizations promotes not only operational efficiency but also the quality of decisions made by managers. Real-time decision support systems enable organizations to evaluate organizational changes immediately and, ideally, give a hint of problems before they even appear. Such real-time systems are nowadays regarded as front-line solutions for managing organizations effectively. Yet the technological possibilities have not conquered management practice. Most companies still handle data with traditional solutions: data are collected and reports are generated to evaluate past occurrences, which only reveals what has already happened in the organization. The problem with these non-real-time systems is that they reflect the organization's condition very late; they are rear-view-mirror descriptions of what has already been. Managers receive information from their organizations too late, and often too little of it, to make optimal decisions. Is it not possible to manage operations in real time? Is real-time decision support really needed? If so, why do most organizations still rely on traditional reporting systems?
Review Article
Open Access December 26, 2021

Architectural Frameworks for Large-Scale Electronic Health Record Data Platforms

Abstract Architectural frameworks for large-scale Electronic Health Record (EHR) data platforms are described. Existing EHR data platform architectures often leverage multiple cloud-based solutions blended with institutional infrastructures to manage and analyze clinical data at scale. Key design principles governing the scale of existing EHR data architecture include model design, governance structure, [...] Read more.
Architectural frameworks for large-scale Electronic Health Record (EHR) data platforms are described. Existing EHR data platform architectures often leverage multiple cloud-based solutions blended with institutional infrastructures to manage and analyze clinical data at scale. Key design principles governing the scale of existing EHR data architectures include model design, governance structure, data access management, data security, policy, and protection, standardization of data, information, and language, and analytics tool alignment, among others. The rapidly evolving technology landscape and the unprecedented volume of incident and retrospective clinical data being collected and generated within healthcare organizations have led to an emergent need for a dedicated architectural framework to support large-scale computing in the health informatics domain. The application areas of large-scale computing in health informatics include real-time predictive analytics, risk stratification, patient cohort analytics, the development of predictive models for specific institutions or population groups, and many more. The use of EHR data in a multitude of decision-making processes, in both clinical and non-clinical settings, has prompted the establishment of policies prescribing the conditions of access and use of EHR data for individuals not employed by the organization. Consequently, the demand for accessing, using, and managing EHR data at scale continues to grow.
Review Article
Open Access December 26, 2021

Scalable Data Warehouse Architecture for Population Health Management and Predictive Analytics

Abstract Scalable architecture principles for data warehousing are introduced to support population health management and predictive analytics. These principles are validated through the design of an accompanying data pipeline that allows the integration of non-traditional data sources, the use of real-time data for descriptive analytics dashboards, and support for the generation of supervised machine [...] Read more.
Scalable architecture principles for data warehousing are introduced to support population health management and predictive analytics. These principles are validated through the design of an accompanying data pipeline that allows the integration of non-traditional data sources, the use of real-time data for descriptive analytics dashboards, and support for the generation of supervised machine learning models. Several analytical capabilities have been implemented to exemplify the practical application of the principles, including predictive models for risk stratification in health care. Cost-effectiveness and performance considerations ensure the practical relevance of the architectural principles and the associated data pipeline. In recent years, the availability of low-cost data-storage services and the increasing popularity of streaming technologies have opened new possibilities for storing and processing streaming data on a near-real-time basis. These technologies can help developing countries tackle many relevant issues, such as urban planning, environmental management, and migration policy. A multi-tier approach combining cloud-based storage with data warehousing and data mining technologies can offer an interesting architecture for exploiting big data related to populations.
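The near-real-time descriptive layer mentioned above amounts to folding a stream of events into per-group metrics that a dashboard can poll. A minimal sketch follows; the event shape (`region`, `value`) and the rolling metrics are illustrative assumptions, not part of the described pipeline.

```python
# Hedged sketch: fold a stream of events into rolling per-region metrics
# that a descriptive dashboard could read. Event fields are hypothetical.
from collections import defaultdict

def update_metrics(metrics, event):
    """Incrementally update count, total, and mean for the event's region."""
    m = metrics[event["region"]]
    m["count"] += 1
    m["total"] += event["value"]
    m["mean"] = m["total"] / m["count"]
    return metrics

def fold_stream(events):
    """Consume an iterable of events and return the metrics per region."""
    metrics = defaultdict(lambda: {"count": 0, "total": 0.0, "mean": 0.0})
    for event in events:
        update_metrics(metrics, event)
    return dict(metrics)
```

In a production pipeline the fold would run continuously against a streaming source, with the aggregates persisted to the warehouse; the incremental-update shape stays the same.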
Review Article
Open Access December 26, 2021

Designing Scalable Healthcare Data Pipelines for Multi-Hospital Networks

Abstract Healthcare is increasingly recognized as a data-intensive industry. Multi-hospital networks, among other organizations, face mounting operational and governance challenges because of rigid data-integration pipelines that support all data sources and destinations in the network. These pipelines have become difficult to modify, causing them to lag behind the changing needs of the clinical operation. [...] Read more.
Healthcare is increasingly recognized as a data-intensive industry. Multi-hospital networks, among other organizations, face mounting operational and governance challenges because of rigid data-integration pipelines that support all data sources and destinations in the network. These pipelines have become difficult to modify, causing them to lag behind the changing needs of the clinical operation. Scalable data-pipeline architectures better support clinical decision making, optimize hospital operations, ease data quality and compliance concerns, and contribute to improved patient outcomes. Meeting scalability goals requires breaking up monolithic data-integration pipelines into smaller decoupled components and aligning service-level agreements of pipeline components and source systems. Parallelization and adoption of distributed data-warehouse technology mitigate the burden of ingesting data into a multi-hospital network. However, latency requirements still warrant the construction of separate pipelines for data ingress from clinical devices, electronic health records, and external laboratory-information systems. Healthcare associations recommend near real-time data availability for a growing list of clinical and operational applications. Mishandling the real-time ingestion of data from clinical devices, in particular, compromises availability and performance. Scalable architectural patterns for real-time streaming ingestion from heterogeneous data sources, transport processes, and back-end processing structures are detailed.
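The decoupling described above can be pictured as small, independently replaceable stages with a separate low-latency route for clinical-device events. A minimal sketch, assuming a simple `source:payload` wire format; the source names and routing rule are illustrative, not from the article.

```python
# Hedged sketch: a monolithic ingestion step broken into small stages.
# The "source:payload" format and routing rule are hypothetical.

def parse(raw):
    """Stage 1: split a raw message into a structured event."""
    source, _, payload = raw.partition(":")
    return {"source": source, "payload": payload}

def validate(event):
    """Stage 2: drop events with empty payloads (basic quality gate)."""
    return event if event["payload"] else None

def route(event, fast_queue, batch_queue):
    """Stage 3: device telemetry needs near-real-time handling;
    EHR and lab extracts can tolerate batch latency."""
    (fast_queue if event["source"] == "device" else batch_queue).append(event)

def ingest(raw_events):
    """Compose the stages; each can be replaced without touching the others."""
    fast, batch = [], []
    for raw in raw_events:
        event = validate(parse(raw))
        if event:
            route(event, fast, batch)
    return fast, batch
```

Because each stage has a narrow contract, a real deployment could swap the in-memory lists for message queues and scale the stages independently, which is the point of breaking up the monolith.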