Open Access December 15, 2021

Dissemination and Exploitation of Regional Meteo-Hydrological Datasets through Web-based Interactive Applications: The SOL System Case Study

The effects of climate change are already being felt in several parts of the world. Variability of changing rainfall intensity, drought and weather patterns contributes to determining the vulnerability of many human activities such as agriculture. In the near future, responding to climate change will depend on appropriate strategies, such as strengthening implementation agencies so that they work in a coordinated manner and with a data-driven approach to ensure monitoring, reporting and data verification. In this context, national and regional meteorological services are facing high demand for timely, high-quality information, services and products. This paper describes a web-based interactive application for disseminating meteo-hydrological information at the regional scale. The web application is built on a relational database, and client-side programming has been used to implement the user interface and control the web page's behavior. The combination of PHP (Hypertext Preprocessor, a general-purpose scripting language especially suited to server-side web development) and JavaScript (a high-level object-oriented scripting language, nowadays the dominant client-side scripting language of the Web) was chosen because both are free for everyone to use. The SOL system, developed on behalf of the Marche region, Italy, was chosen as a case study due to its multi-source data framework and its processing and public dissemination of several ad hoc data elaborations.
Case Study
Open Access September 28, 2025

Mitochondrial Dysfunction and Oxidative Stress in Early-Onset Neurodegenerative Diseases: A Bibliometric and Data-Driven Analysis

Early-onset neurodegenerative diseases (EO-NDs), such as early-onset Alzheimer’s disease (EOAD), Parkinson’s disease (EOPD), and familial amyotrophic lateral sclerosis (fALS), often stem from monogenic causes and manifest before typical age thresholds. These disorders frequently feature disrupted mitochondrial function and heightened oxidative stress, which together accelerate neuronal damage and degeneration. In this work, the author performs a comprehensive analysis of the literature and data related to mitochondrial dysfunction and redox imbalance in EO-NDs. Bibliometric trends were assessed using R-based tools on PubMed datasets, highlighting keyword networks and publication surges in recent years. Publicly available RNA-seq datasets from GEO and SRA were examined, with an example DESeq2 analysis illustrating altered mitochondrial gene expression in EO-ND patient-derived samples. Network modeling of redox pathways using Python’s networkx demonstrates how oxidative stress can propagate through metabolic networks. Together, these computational approaches reinforce that mitochondrial DNA mutations, impaired electron transport chain (ETC) function, and reactive oxygen species (ROS) accumulation play central roles in EO-ND pathogenesis. The discussion further evaluates why antioxidant clinical trials have largely failed and how emerging therapies such as gene replacement, antisense oligonucleotides, and mitochondrial biogenesis modulators may provide more effective interventions.
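The abstract describes networkx-based modeling of how oxidative stress propagates through metabolic networks. A minimal, dependency-free sketch of that idea follows; the node names, damping factor, and threshold are hypothetical illustrations, not taken from the paper, which uses Python's networkx:

```python
from collections import deque

def propagate_stress(graph, source, damping=0.5, threshold=0.05):
    """Breadth-first spread of an oxidative-stress signal through a
    metabolic network, attenuated by `damping` at each hop; nodes whose
    incoming signal falls below `threshold` are treated as unaffected."""
    levels = {source: 1.0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            signal = levels[node] * damping
            if signal >= threshold and signal > levels.get(neighbor, 0.0):
                levels[neighbor] = signal
                queue.append(neighbor)
    return levels

# Hypothetical toy pathway: ETC dysfunction at Complex I seeds ROS,
# which damages mtDNA and lipids; mtDNA damage feeds back to the ETC.
pathway = {
    "complex_I": ["ubiquinone", "ros"],
    "ros": ["mtDNA", "lipids"],
    "mtDNA": ["complex_I"],
    "lipids": [],
    "ubiquinone": ["complex_III"],
    "complex_III": ["ros"],
}
impact = propagate_stress(pathway, "complex_I")
```

The same traversal could be run over a `networkx.DiGraph` by iterating `graph.successors(node)`; the attenuation rule is the illustrative part.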
Brief Report
Open Access September 28, 2025

Gut-Brain Axis in Autism Spectrum Disorder: A Bibliometric and Microbial-Metabolite-Neural Pathway Analysis

The gut-brain axis (GBA) has emerged as a central focus in the study of neurodevelopmental disorders, particularly autism spectrum disorder (ASD). Research suggests that microbial composition and its metabolic byproducts influence neural development, synaptic plasticity, and behavior [1,2,3]. A structured bibliometric analysis of Scopus and Web of Science records was performed using Bibliometrix and VOSviewer to trace trends and thematic evolution of GBA–ASD literature [7,8]. In parallel, a data-driven pathway modeling approach maps microbial metabolites (e.g., short-chain fatty acids, tryptophan catabolites) to host signaling pathways including vagal stimulation, immune cytokine modulation, and blood–brain barrier (BBB) permeability [4,5]. Simulations implemented in Python’s NetworkX illustrate how perturbations in metabolite flux may influence CNS outcomes. The findings reveal growing emphasis on butyrate, serotonin, microglial priming, and maternal immune activation in ASD-related GBA studies, and highlight the need for rigorous empirical validation of computational predictions [9,10,11].
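The abstract maps microbial metabolites to host signaling pathways and simulates perturbations with NetworkX. As a hedged, dependency-free sketch of one such mapping, the function below finds the strongest (max-product) influence path from a metabolite to a CNS outcome; the edge weights and node names are hypothetical stand-ins for signaling strengths, not values from the paper:

```python
def strongest_influence(edges, source, target, path=None, weight=1.0):
    """Max-product path weight from a microbial metabolite to a CNS
    outcome node; edge weights stand in for signaling strength."""
    path = path or {source}
    if source == target:
        return weight
    best = 0.0
    for nxt, w in edges.get(source, []):
        if nxt not in path:
            best = max(best, strongest_influence(
                edges, nxt, target, path | {nxt}, weight * w))
    return best

# Hypothetical GBA routes: butyrate acts via vagal and immune pathways.
gba = {
    "butyrate": [("vagal_afferent", 0.8), ("cytokines", 0.4)],
    "vagal_afferent": [("brainstem", 0.9)],
    "cytokines": [("bbb_permeability", 0.7)],
    "bbb_permeability": [("brainstem", 0.5)],
    "brainstem": [],
}
signal = strongest_influence(gba, "butyrate", "brainstem")
```

Here the vagal route (0.8 × 0.9) dominates the immune route (0.4 × 0.7 × 0.5), illustrating how perturbing one metabolite flux can be traced to a CNS outcome.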
Brief Report
Open Access February 26, 2025

Innovations and Challenges in Pharmaceutical Supply Chain, Serialization and Regulatory Landscape

The pharmaceutical supply chain has become increasingly complex and vulnerable to various risks, including counterfeit drugs, diversion, and fraud. As these challenges threaten patient safety and the integrity of global healthcare systems, serialization has emerged as a pivotal innovation in pharmaceutical logistics and regulatory compliance. Serialization involves assigning unique identifiers to individual drug packages, enabling precise tracking and authentication at every stage of the supply chain. This process provides unprecedented transparency, enhances product security, and facilitates real-time monitoring of pharmaceutical products as they move from manufacturers to end consumers. Despite its potential to revolutionize pharmaceutical traceability, the integration of serialization technologies faces numerous obstacles. These include high implementation costs, regulatory inconsistencies across regions, and the technological challenges of managing vast amounts of data. Moreover, the complex, multi-tiered nature of the global supply chain introduces additional risks related to data integrity, cybersecurity, and interoperability between systems. As pharmaceutical companies seek to navigate these challenges, innovations in serialization technology—such as blockchain, artificial intelligence (AI), the Internet of Things (IoT), and radio frequency identification (RFID)—are providing promising solutions to enhance efficiency, reduce fraud, and increase visibility. This manuscript explores both the innovative advancements and the key challenges associated with the integration of serialization in the pharmaceutical supply chain. It delves into the evolving regulatory landscape, highlighting the need for global harmonization of serialization standards, and examines the impact of serialization on securing pharmaceutical distribution networks. 
Additionally, the paper emphasizes the importance of collaboration among manufacturers, technology providers, and regulatory bodies in overcoming implementation barriers and realizing the full potential of serialization. As the pharmaceutical industry moves towards a more interconnected and data-driven future, serialization promises to play a central role in shaping the next generation of drug safety and supply chain management. By addressing the hurdles to adoption and leveraging emerging technologies, the pharmaceutical sector can create a more secure, transparent, and efficient supply chain that better serves public health and fosters greater trust among consumers and healthcare professionals alike.
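The core mechanism described above, assigning a unique identifier to each package and authenticating it at every stage, can be sketched as a toy custody ledger. This is an illustrative model only: real deployments use GS1-style serial numbers and standards such as EPCIS rather than the hypothetical `uuid4`-based scheme below:

```python
import uuid

class SerializationLedger:
    """Toy model of pharmaceutical serialization: each package gets a
    unique identifier at commissioning, and every custody event is
    recorded, so an unknown serial (a possible counterfeit) is flagged."""

    def __init__(self):
        self.events = {}  # serial -> list of (actor, action) events

    def commission(self, actor):
        serial = str(uuid.uuid4())  # unique identifier per package
        self.events[serial] = [(actor, "commissioned")]
        return serial

    def record(self, serial, actor, action):
        if serial not in self.events:
            raise ValueError("unknown serial: possible counterfeit")
        self.events[serial].append((actor, action))

    def history(self, serial):
        return list(self.events.get(serial, []))

ledger = SerializationLedger()
sn = ledger.commission("manufacturer")
ledger.record(sn, "wholesaler", "received")
ledger.record(sn, "pharmacy", "dispensed")
```

A blockchain- or IoT-backed system, as the manuscript discusses, would replace the in-memory dictionary with a shared, tamper-evident store, but the track-and-authenticate logic is the same.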
Review Article
Open Access November 16, 2024

Digital Therapeutics: A New Dimension to Diabetes Mellitus Management

Digital therapeutics (DTx) play a transformative role in diabetes management by leveraging technology to provide personalized, data-driven medical interventions. These tools enhance self-management by offering continuous monitoring and real-time feedback on glucose levels, diet, and physical activity. This personalized approach helps patients adhere to treatment plans and make informed lifestyle changes, leading to improved clinical outcomes such as reduced HbA1c levels and better overall diabetes control. The importance of DTx lies in their ability to make diabetes care more accessible and convenient. Mobile apps and telemedicine platforms enable patients to receive support and guidance from anywhere, reducing the need for frequent in-person visits. Additionally, DTx often include behavioral support features like reminders, educational content, and motivational tools, which are crucial for maintaining healthy habits and managing stress. Currently, the dynamics of DTx in diabetes are rapidly evolving, with increasing integration of artificial intelligence and machine learning to further personalize and optimize care. As the adoption of these technologies grows, they hold the potential to significantly improve patient outcomes and revolutionize diabetes management on a global scale. This article will focus on the benefits of novel digital therapeutics for prevention and management of type II diabetes that are currently available in the market.
Article
Open Access October 19, 2024

Quantitative Intersectionality Scoring System (QISS): Opportunities for Enhancing Predictive Modeling, Comparative Analysis, Health Needs Assessment, and Policy Evaluation

Intersectionality has significantly enhanced our understanding of how overlapping social identities—such as race, ethnicity, gender, sex, class, and sexual orientation—interact to shape individual experiences. However, despite its theoretical importance, much of the existing literature has relied on qualitative approaches to define and study intersectionality, limiting its application in predictive modeling, comparative analysis, and policy development. This paper introduces the concept of Quantitative Intersectionality Scoring System (QISS), a novel approach that assigns numerical scores to intersecting identities, thereby enabling a more systematic and data-driven analysis of intersectional effects. We argue that QISS can substantially enhance the utility and predictive validity of quantitative models by capturing the complexities of multiple, overlapping social determinants. By presenting concrete examples, such as the varying impacts of socioeconomic mobility on life expectancy among different intersectional groups, we demonstrate how QISS can yield more precise and reliable forecasts. Such a shift would allow policymakers and service providers to dynamically assess economic and health needs, as well as the uncertainties around them, as individuals move through different social and economic contexts. QISS-based models could be more responsive to the complexities of intersecting identities, allowing for a more quantified and nuanced evaluation of policy interventions. We conclude by discussing the challenges of implementing QISS and emphasizing the need for further research to validate these quantifications using robust quantitative methods. Ultimately, adopting QISS has the potential to improve the accuracy of predictive models and the effectiveness of policies aimed at promoting social justice and health equity.
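The paper's central proposal is assigning numerical scores to intersecting identities so that their combined effect can enter quantitative models. A minimal sketch of one possible scoring function follows; the weights, the interaction bonus, and the dimension names are hypothetical illustrations of the idea, not the paper's actual QISS formulation:

```python
def qiss_score(identities, weights, interaction_bonus=0.1):
    """Illustrative composite score: weighted sum of identity-dimension
    scores plus a small bonus per interacting pair, capturing the idea
    that intersecting identities compound rather than merely add."""
    base = sum(weights[d] * v for d, v in identities.items())
    pairs = len(identities) * (len(identities) - 1) // 2
    return base + interaction_bonus * pairs

# Hypothetical weights and normalized dimension scores in [0, 1].
weights = {"ses": 0.5, "gender": 0.3, "ethnicity": 0.2}
score = qiss_score({"ses": 0.8, "gender": 1.0, "ethnicity": 0.6}, weights)
```

A score like this could then feed a regression or risk model as a single covariate, which is the "predictive modeling" use the abstract highlights.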
Perspective
Open Access December 27, 2021

Leveraging AI and ML for Enhanced Efficiency and Innovation in Manufacturing: A Comparative Analysis

The manufacturing industry has embraced modern technologies such as big data, machine learning, and artificial intelligence. This paper examines AI and machine learning developments in the manufacturing industry, comparing current practices and data-driven projects. It aims to better understand these technologies and their potential benefits and challenges. The research identifies opportunities for innovative business solutions and explores industry practices and research results. The paper focuses on implementation rather than technical aspects, aiming to enhance knowledge in this area.
Review Article
Open Access August 29, 2022

From Deterministic to Data-Driven: AI and Machine Learning for Next-Generation Production Line Optimization

The advancement of modern manufacturing is synonymous with the growth of automation. Automation replaces human operators, improves productivity and quality, and reduces costs. However, the initial financial cost and knowledge requirements can be barriers to embracing automation. Manufacturers are now seeking smart manufacturing, known as the fourth industrial revolution. Smart manufacturing goes beyond automation and utilizes IoT, AI, and big data for optimized production. In a smart factory, production can be linked and controlled innovatively, leading to increased performance, agility, and reduced costs.
Review Article
Open Access December 27, 2021

Leveraging AI in Urban Traffic Management: Addressing Congestion and Traffic Flow with Intelligent Systems

Traffic congestion across the globe is a multimodal problem, intertwining vehicular, pedestrian, and bicycle traffic. The relationship between the multimodal traffic flows is a key factor in understanding urban traffic dynamics. The impact of excessive congestion extends to the excessive cost of traffic maintenance, as well as inherent transportation inefficiency and delayed travel times. From an urban transportation standpoint, an immediate consideration is, on the one hand, monitoring traffic conditions and demand cycles and, on the other, inducing flow modifications that benefit the traffic network and mitigate congestion. The embedded and centralized control systems that characterize modern traffic management extract traffic conditions specific to their regions but lack communication between networks. Moreover, innovative methods are required to provide more accurate, up-to-date traffic forecasts that characterize real-world traffic dynamics and facilitate optimal traffic management decisions. In this chapter, we briefly outline the main difficulties and complexities in modeling, managing, and forecasting traffic dynamics. We also compare various conventional and modern intelligent transportation strategies in terms of accuracy, applicability, and performance, and the opportunities they offer for optimizing multimodal traffic flow and reducing congestion. The chapter introduces various proposed data-driven models and tools employed for traffic flow prediction and management, investigating specific strategies' strengths, weaknesses, and benefits in addressing real-world traffic management problems. We describe how the design phase of dependable Intelligent Transportation Systems imposes unique requirements on the robustness, safety, and response times of their components and the encompassing system model.
Furthermore, this architectural blueprint shares similarities with distributed coordinated search and collective adaptive systems. Models that are independent of town size induce systemic performance improvements through reconfigurable embedded functionality. These AI techniques feature anytime planners that maintain near-optimal performance as model complexity varies. Sustainable models minimize congestion during peaks, flooding, and emergencies while adhering to area-specific regulations. Security-aware and fail-safe traffic management systems must provide reasonable assurances of persistent operation under varied environmental settings, including metropolitan areas and complex traffic junctions. The chapter concludes by outlining challenges, research questions, and future research paths in the field of transportation management.
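The data-driven traffic forecasting the chapter surveys can be illustrated, at its simplest, by a level-plus-trend predictor over recent flow observations. This is a naive stand-in under stated assumptions (hypothetical vehicles-per-minute counts, a fixed window, no seasonality), not any specific model from the chapter:

```python
def forecast_flow(history, horizon=3, window=4):
    """Naive forecast: the average of the last `window` observations
    (the level) plus the recent linear trend, extended `horizon`
    steps ahead. A minimal stand-in for data-driven traffic predictors."""
    recent = history[-window:]
    level = sum(recent) / len(recent)
    trend = (recent[-1] - recent[0]) / (len(recent) - 1)
    return [level + trend * (i + 1) for i in range(horizon)]

# Hypothetical vehicles-per-minute counts at one intersection.
counts = [40, 42, 45, 44, 48, 50]
predicted = forecast_flow(counts)
```

Real intelligent transportation systems replace this with learned models over multimodal sensor data, but the interface, recent observations in, short-horizon forecasts out, is the same.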
Review Article
Open Access December 27, 2021

Financial Implications of Predictive Analytics in Vehicle Manufacturing: Insights for Budget Optimization and Resource Allocation

Factory owners and vehicle manufacturers increasingly opt for predictive analytics to inform their decisions. While predictive analytics has been proven to provide insights that allow maintenance measures to be initiated before a machine actually fails, the choice of models and features can have a significant impact on the budget spent and the resources allocated. This means that financially oriented questions need to at least partially guide decisions in the planning phase of data science projects. Data-driven approaches will play an increasingly important role, yet only a few firms have confidently applied logistic regression models to predictive maintenance. Moreover, to the best of available knowledge, data-driven classification models connecting vehicle component failures to the occurrence of delays at the assembly line have not been published. This paper applies a real-world, data-driven approach using classification models in predictive analytics for vehicle manufacturers and thereby links the financial implications of such data science projects to their results. We expand the existing literature on predictive maintenance and draw on a unique dataset from a newly launched vehicle series, presented as-is. Our research context is of interest to researchers and practitioners in the automotive industry who manage and plan final vehicle assembly under just-in-time principles, factoring in the consequences of component failures on the assembly process. Key findings highlight that while minor tweaking of the models is possible, their potential input into decision-making processes for budget optimization is limited.
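The classification task the abstract describes, relating component failures to assembly-line delays, can be sketched with a minimal logistic regression trained by gradient descent. The feature names and toy data below are hypothetical; the paper's actual dataset and model configuration are not reproduced here:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Minimal logistic regression via stochastic gradient descent,
    relating component-failure indicators to delay occurrence."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))          # predicted delay probability
            err = p - yi                        # gradient of log-loss wrt z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 / (1 + math.exp(-z)) >= 0.5

# Hypothetical features [electrical_fault, fastener_missing]; label: delay.
X = [[1, 0], [1, 1], [0, 1], [0, 0], [1, 0], [0, 0]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_logistic(X, y)
```

On this toy data the model learns that electrical faults drive delays while missing fasteners do not, the kind of feature-level insight the paper ties back to budget decisions.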
Review Article
Open Access October 30, 2022

Towards Autonomous Analytics: The Evolution of Self-Service BI Platforms with Machine Learning Integration

Self-service business intelligence (BI) platforms have become essential applications for exploring, analyzing, and visualizing business data in various domains. Here, we envisage that the business intelligence platform will perform automatic and autonomous data analytics with minimal to no user interaction. We aim to offer a data-driven, intelligent, and scalable infrastructure that amplifies the advantages of BI systems and discovers hidden and complex insights from very large business datasets, which a business analyst can miss during manual exploratory data analysis. Towards our future vision of autonomous analytics, we propose a collective machine learning model repository with an integration layer for user-defined analytical goals within the BI platform. The proposed architecture can effectively reduce the cognitive load on users for repetitive tasks, democratizing data science expertise across data workers and facilitating a less experienced user community to develop and use advanced machine learning and statistical algorithms.
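The proposed architecture's centerpiece, a collective model repository with an integration layer keyed on user-defined analytical goals, can be sketched as a small registry. Class and goal names here are hypothetical illustrations of the concept, not the paper's implementation:

```python
class ModelRepository:
    """Sketch of a collective model repository: models are registered
    against user-defined analytical goals, and the BI integration layer
    retrieves the best-validated candidate for a given goal."""

    def __init__(self):
        self._models = {}  # goal -> list of (model_name, validation_score)

    def register(self, goal, name, validation_score):
        self._models.setdefault(goal, []).append((name, validation_score))

    def best_for(self, goal):
        candidates = self._models.get(goal, [])
        return max(candidates, key=lambda m: m[1])[0] if candidates else None

repo = ModelRepository()
repo.register("churn_forecast", "gradient_boosting", 0.87)
repo.register("churn_forecast", "logistic_baseline", 0.79)
repo.register("anomaly_detection", "isolation_forest", 0.91)
```

In the envisioned platform, a less experienced analyst would only state the goal; the repository's selection step is what "democratizes" access to the stronger model.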
Review Article
Open Access December 27, 2020

Enhancing Pharmaceutical Supply Chain Efficiency with Deep Learning-Driven Insights

The growing complexity of the operating environment urges pharmaceutical innovation. This essay addresses the need to integrate advanced technologies into the pharmaceutical supply chain. It justifies the value proposition and presents a concrete use case for the integration of deep learning insights to make data-driven decisions. The supply chain has always been a priority for the pharmaceutical industry, and companies are increasingly investing in big data strategies, with big data tool adoption projected to continue growing at a compound annual rate. The work presented herein has a preliminary, explorative character, aiming to recover and integrate evidence from partly overlooked practical experience and know-how. Its practical relevance is directed toward practitioners in pharmaceutical production, supply chain management, logistics, and regulatory agencies. The literature has shown a long-term concern for enhanced performance in the pharmaceutical supply chain network. This essay demonstrates the application of deep learning-driven insights to reveal non-evident flow dependencies. The main aim is to present a comprehensive view of deep learning-driven decision support. The supply chain is portrayed holistically, seeking end-to-end visibility. Implications for public policy are discussed, such as data equity: many countries are protecting their populations and economic growth by building resilience and efficiency to ensure the capacity to move goods across supply chains. The implementation strategy is also covered. The combined effect of reduced variability, improved efficiency, greater reliability (of stochastic flows, understood through deep learning and data), and dampened system noise (through the inclusion of all stakeholders) is increased responsiveness of supply chains for pharmaceutical products.
Future work involves the integration of external data, closing the loop between planning and its application in reality.
Review Article
Open Access December 27, 2021

Predictive Analytics and Deep Learning for Logistics Optimization in Supply Chain Management

Managing supply chains efficiently has become a major concern for organizations, and logistics is one of the most important factors to optimize in supply chain management. The advent of technology and the increase in data availability allow the efficiency of logistics in a supply chain to be enhanced. This discussion focuses on blending analytics with innovation in logistics to improve the operations of a supply chain. An approach is presented showing how predictive analytics can be used to improve logistics operations. To analyze big data in logistics effectively, an artificial intelligence technique, specifically deep learning, is employed, and two case studies illustrate the practical employability of the proposed technique. The discussion reveals the power and potential of using predictive analytics in logistics to project future KPI values from contemporary operational data; sheds light on the innovative technique of deep learning-based predictive analytics in logistics; and suggests incorporating techniques like deep learning into predictive analytics to develop accurate forecasting, optimize operations, and prevent disruption in the supply chain. The network of supply chains has become more complex, necessitating the latest technological advancements. The sectors that have gained the most attention for applying technology to optimize their operations are manufacturing, healthcare, aerospace, and the automotive industry. Comparatively little attention has been paid to the logistics sector, although many have described how analytics and artificial intelligence can achieve higher optimization there, and significant research has been done on optimizing logistics operations.
Nevertheless, with the explosive volume of historical data produced by an organization's logistics operations, there is a great opportunity to extract valuable insights from data accumulated over time for long-term strategic planning. To develop an organization's logistics operations, the use of historical data is essential to understand trends in the operations. For example, regular maintenance planning and resource allocation based on trends are long-term activities that will not affect logistics operations immediately but can affect the business's strategic planning in the long run. A predictive analysis technique applied to historical logistics data can draw conclusions about future trends in logistics operations, and can thus be used to prevent disruption of the supply chain.
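Projecting a logistics KPI forward from historical data, as described above, can be illustrated with single exponential smoothing. This is a deliberately simple stand-in for the deep learning predictor the discussion proposes; the weekly on-time-delivery figures and smoothing constant are hypothetical:

```python
def exp_smooth_forecast(series, alpha=0.4, horizon=2):
    """Single exponential smoothing: the final smoothed level serves as
    the forecast for every future step. A minimal stand-in for the
    learned KPI predictors discussed in the text."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level  # blend new obs into level
    return [level] * horizon

# Hypothetical weekly on-time-delivery KPI (percent).
kpi = [91.0, 92.5, 90.0, 93.0, 94.5]
projection = exp_smooth_forecast(kpi)
```

A deep learning model would capture trend and multi-signal interactions that this level-only forecast ignores, which is precisely the gap the discussion argues such models fill.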
Review Article
Open Access December 27, 2019

Predictive Analytics in Biologics: Improving Production Outcomes Using Big Data

Biopharmaceuticals, or biologics, are a burgeoning sector in the pharmaceutical industry, predicted to reach $239.4 billion by 2025. This unparalleled growth is often attributed to the enhanced specificity offered by large molecules over small molecules. The large size of the constituent proteins necessitates the continuous implementation of big data predictive analytics to elucidate the most effective candidates in the lead optimization process. These same methodologies can be applied to the augmentation and optimization of the downstream production processes that comprise the majority of the development cost of any biologic, a task that is becoming increasingly facile with the advent of machine learning and automated predictive analytics. In this work, big data from cell line generation, product and process design, and large-scale lead validation studies have been used to compare the applicability of simple statistical models against black-box approaches for the rapid acceleration of enzymes to the pilot plant stage. This research can be expanded to exploit the big datasets generated as biologics progress through the development pipeline, further optimizing production outcomes. Over the coming months, data from the project will be used to probe which approaches suit which processes and, as a result, which are more amenable to various economic simulations. The computed optimization objective for the HIT must include the cost of acquiring, storing, and analyzing the data used to construct these predictive models, alongside the expected commercial reward of choosing an optimally ranked candidate. In this vein, perspective must be taken on the probable future price, capabilities, and ownership issues of increasingly sophisticated data analysis software as such superstructures become more frequent.
It is frequently stated that decisions made to reduce production costs are data-driven, but such claims rarely rest on full economic or energetic accounting; to truly evaluate production steps, dynamic energy and economic models need to become more commonplace. Converting process quality approaches from large questionnaires, risk analysis, and expert opinion-driven methods to statistical, and thus more reliable, approaches is an area of future research for the analytics used herein.
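The comparison the abstract sets up, simple statistical models against black-box approaches, can be illustrated with two toy regressors evaluated by mean absolute error. The one-parameter titre data below is entirely hypothetical, and 1-nearest-neighbour stands in for a genuine black-box model purely for illustration:

```python
def mae(pred, actual):
    """Mean absolute error between predictions and observations."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def mean_model(train_y, test_X):
    """Simple statistical baseline: always predict the training mean."""
    m = sum(train_y) / len(train_y)
    return [m] * len(test_X)

def nearest_neighbor(train_X, train_y, test_X):
    """Minimal 'black-box' stand-in: 1-nearest-neighbour regression."""
    def predict(x):
        return min((abs(x - tx), ty) for tx, ty in zip(train_X, train_y))[1]
    return [predict(x) for x in test_X]

# Hypothetical titres (g/L) against one process parameter.
train_X, train_y = [1.0, 2.0, 3.0, 4.0], [1.1, 2.0, 3.2, 3.9]
test_X, test_y = [1.5, 3.5], [1.5, 3.6]
baseline_err = mae(mean_model(train_y, test_X), test_y)
nn_err = mae(nearest_neighbor(train_X, train_y, test_X), test_y)
```

When the response varies systematically with the process parameter, the local model beats the global mean; the paper's question is whether that gain survives once data acquisition and analysis costs enter the objective.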
Review Article
Open Access December 27, 2019

Revolutionizing Patient Care and Digital Infrastructure: Integrating Cloud Computing and Advanced Data Engineering for Industry Innovation

This work details how the integration of cloud computing and advanced data engineering can innovate and reshape patient care and digital infrastructure. In the healthcare sector, cloud services offer the support needed to build digitally oriented services and service kits. These services can offer high availability, low latency, and on-demand scaling while complying with the strictest data protection laws and regulations. They can also be combined with data engineering techniques to construct an ecosystem that adds an optimized data layer to any cloud environment. This ecosystem includes technologies to acquire, process, and manage healthcare data while respecting all regulatory obligations and institutions, and it can be part of a comprehensive digitalization strategy. The objective is to augment the services the healthcare industry offers by leveraging healthcare data and AI technologies. The designed services, processes, and technologies can be described either as industry-agnostic services or as healthcare-specific services that process and manage electronic healthcare records (EHR). Industry-agnostic services offer a set of tools and methodologies for conducting optimized data experiments, with the goal of exploiting any variety, velocity, volume, and veracity of medical data. Healthcare-specific services offer a set of tools and methodologies for connecting to any common EHR vendor in a privacy-preserving manner. Participating companies are thus able to hold, share, and make use of healthcare data in real time. The proposed architecture can be transformative for the healthcare industry, opening up and facilitating experimentation with new and scalable service models. The transition to a more digital approach to health would help overcome the limits encountered in traditional settings.
Limitations in the availability of healthcare facilities and healthcare professionals have underpinned the increasing share of telemedicine in the care process. However, record-keeping for patients who receive care outside traditional healthcare facilities is often missing, which can severely affect the continuity of treatment. Identifying new methods to implement disease prevention and early-intervention processes is crucial to avoid more extensive treatment and to support patients on multiple lines of therapy. For chronic patients, a service that monitors their state of health and intervenes when parameters drift outside the desired range is crucial. These same patients are also the most dependent on care providers' decisions; a second opinion given remotely can be accessed by the patient on demand at any time. To address these different kinds of services, an ecosystem built around a common data layer is outlined, able to live and operate seamlessly in any cloud environment. The envisioned outcome of this future work is the rapid evolution and redefinition of the European healthcare landscape.
Review Article
Open Access December 27, 2019

Data-Driven Innovation in Finance: Crafting Intelligent Solutions for Customer-Centric Service Delivery and Competitive Advantage

Innovations in computing and communication technologies are reshaping finance. The seismic changes are casting uncertainty over the future of financial services. On one hand, fintech evangelists project a rosy future, asserting that fast-moving algorithms can deliver low-cost financial services intuitively, customized to meet robust consumer expectations. On the other hand, many finance veterans fret that the traditional banking model could be disintermediated, bleeding banks via a ‘death by a thousand cuts’ and reducing them to passive portfolio holders with no direct customer relationship, eclipsed by digital giants that use their enormous treasure troves of customer data to offer banking as an added service at nearly zero cost. Amidst the upbeat technological promises and apocalyptic forebodings, there are two constant, mostly agreed-upon truths. The first is the vital importance of data. Advances in the internet, cloud computing, and record-keeping technologies are producing an ‘exponential growth in the volume and detail of data’. Some of this big data is personal information. Smartphones are deployed in almost all developed and emerging economies, serving as little spies that track and record the location histories, social networks, and app usage of their unsuspecting owners, often with a great degree of precision. ‘People are walking data-factories’ in this ‘mobile digital society’. Data are the by-product of these global exchanges: electronic commerce, communication, and financial transactions. To take Facebook as an example, some 30 million people a day share updates and posts on the platform, which hosts personal information on 2.23 billion users. To the alarm of the uninformed public, much of this information is available for commercial harvest. The second constant is the rise of intelligent solutions.
Consumers today, whether it is disclosed to them or not, are fed tailored clothes, music, films, holiday packages, and almost anything else they like, often at dynamic prices that vary with individual profiles or through personalized search results. The availability of powerful computers has enabled comparable applications intended to make the system more responsive to customer profiles and desires, or to capitalize on competitive business possibilities. Such changes will transform the financial industry, occupy a prominent position among the mechanisms of policy competition, and reshape the way financial services are delivered and consumed on the demand side.
Review Article
Open Access December 27, 2022

The Role of AI Driven Clinical Research in Medical Device Development: A Data Driven Approach to Regulatory Compliance and Quality Assurance

This essay explores how AI can enhance clinical research and, in particular, its pivotal role in the development of medical devices. A data-driven approach to medical device development that can streamline regulatory compliance and quality assurance is discussed. Methods that generate insights from pre-stage data and utilize them during development are detailed. The effectiveness of this approach in reducing the time, effort, and risk of compliance audits, 510(k) submissions, and quality system audits is analyzed. The findings are illustrated with practical examples and takeaway recommendations. When reading a scientific article, how many times have you judged the quality of the research by looking at the methodology section? Artificial intelligence algorithms can be developed with the most robust and innovative technology, but if they are not properly validated, they will be worthless in the eyes of regulatory authorities. Conversely, outdated and simplistic models can still gain regulatory clearance if their robustness is effectively demonstrated. For better or worse, ethics, economics, and robustness are often sacrificed in the constant struggle of governments to keep up with the technological edge of AI development. The slow crawl of lawmakers is constant in every field. Automating small tasks can save time and reduce risk when playing catch-up with a changing regulatory framework, so that the rest of the AI development can continue uninhibited. This work uses FDA open data, in collaboration with a food and drug law firm, to develop several bottom-up initiatives that supply the knowledge needed for regulatory compliance and quality systems development. Methods that take pre-stage data as input and output actionable insights as models are provided. By sharing these resources and this advice as academic researchers, efficiency in streamlining processes is maximized, letting more time and resources be allocated to the actual development [1].
Case Report
Open Access December 27, 2021

Advancements in Smart Medical and Industrial Devices: Enhancing Efficiency and Connectivity with High-Speed Telecom Networks

Emerging smart medical instruments combined with advanced smart industrial equipment facilitate the collection of vast volumes of critical data. This data not only enables significantly more accurate and cost-effective diagnosis and maintenance but also enriches the datasets available for AI algorithms, leading to improved insights and outcomes. The integration of high-speed and ultra-reliable telecommunications infrastructure is crucial, as it supports the cloud model. This model allows for off-device aggregation in the cloud, which effectively offloads infrastructure demands and provides an extended runway for future technological improvements before the deployment of the next generation of devices. However, in certain scenarios, latency and bandwidth limitations present significant challenges. These limitations require that a substantial amount of AI and machine-learning processing be conducted directly on the data being transmitted, which places rigorous demands on both the processing subsystems and the communications links themselves. The current project directly addresses the accelerator side of this multifaceted issue. It will carry out comprehensive end-to-end demonstrations leveraging pilot 5G networks and telemedicine facilities, collaborating closely with major industry participants to showcase the capabilities and potential of this innovative technology. This collaborative effort is essential to pushing the boundaries of what is possible in smart medical instruments and industrial applications [1].
Review Article
Open Access December 27, 2021

Advanced Computational Technologies in Vehicle Production, Digital Connectivity, and Sustainable Transportation: Innovations in Intelligent Systems, Eco-Friendly Manufacturing, and Financial Optimization

This paper covers the impacts of the Internet of Things (IoT), Big Data, and other emerging technologies on the vehicle production sector, digital connectivity, and sustainable transport systems. Automated and intelligent transportation for safe, efficient, and sustainable transport systems is stressed. Key factors for promoting automated and connected vehicles, including a connected environment, integration of all transport modes, advanced cooperative systems, and policy enforcement, are discussed. The paper presents an Axiomatic Categorisation Framework for the dynamic alignment of a collection of disparate functions in cyber-physical systems (CPS). The developed system is designed to let an autonomous vehicle (AV) break traffic rules in situations where strict compliance would make human injury inevitable. Especially in complicated traffic situations, many of the constraints are mutually exclusive, and there is no way to obey all of them at once; nor is there any way to ensure that the self-driving vehicle has priority in all situations [1]. Public distrust in AV systems has grown, and investment in the technology has slowed; in the interim, a human driver should remain partially responsible for operation. The development of a driver-behavior assistant (DBA) system is therefore proposed, one able to break the rules during this period of slow development. It is intended to be effective in non-deterministic situations while maintaining the safety of the AV and those involved in the event. The driver's actions would then not only be acceptable as a driving strategy but also be predictable, so that other road users could react unambiguously.
Review Article
Open Access December 27, 2020

Designing Self-Learning Agentic Systems for Dynamic Retail Supply Networks

The evolution of supply chains (SC) from a linear to a network structure created an opportunity for new processes, product/service offerings, and provider-business models. Rising customer service expectations have led to the need for innovative SC designs to develop and sustain competitive performance globally. Firms are forced to respond and adapt accordingly, leading to design, network, operational, and performance dynamics. Traditionally, SCs are treated as static structures, focusing solely on design and/or operational optimization. Such perspectives are not viable for SC domains, as they address only a portion of the dynamic problem space, assume deterministic dominant design variables, capitalize on past data to predict future decisions, and offer pre-classified forecasting options complemented with a limited comprehension of systemic SC elasticity. Novel self-learning agentic systems are proposed that blend the science of SC decisions and dynamics. The designs guide firms seeking to build adaptive SCs using operational decision processes. They address the agentic nature of SCs, embedding computational interaction models of firm SC networks, and they contrast stochastic action-taking, and thereby performance outcomes, to discover opportunities for adaptive operational designs of SC tasks. Fine-tuning and meta-learning are new design capabilities that adapt to evolving dynamic environments. Frameworks for behavioral customization and systematic exploration of the design space are provided as user guides. Exemplar designs are also provided to serve as translation templates for users to express operational models of their own contexts. To account for the dynamics of supply chains, agent-based models are increasingly adopted. Such models exhibit SC structure and/or formulation dynamics.
Though existing efforts address only adjacent structural changes, dynamism with respect to tasks is crucial for SC design and operational strategy development. A process modeling library and workflow for discovering intricate designs of adaptive agentic systems is proposed. The library separates dataflow and structure, encapsulating the sequencing and context designs of processes. Prompted specifications describe and enact the designs. Applications in SC formulation discovery are provided.
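A minimal flavour of the agent-based supply-chain modelling the abstract refers to can be given with a toy two-echelon simulation: a retailer agent follows a base-stock ordering policy against stochastic demand, with a fixed supplier lead time. The policy, lead time, and demand distribution below are illustrative assumptions, not the proposed library's actual design.

```python
import random

random.seed(42)

# Illustrative parameters (assumptions, not from the paper).
LEAD_TIME = 2      # periods an order spends in transit
BASE_STOCK = 40    # order-up-to level for the retailer
PERIODS = 500

inventory = BASE_STOCK
pipeline = [0] * LEAD_TIME        # orders in transit, one slot per period
served = demanded = 0

for _ in range(PERIODS):
    inventory += pipeline.pop(0)   # receive the order placed LEAD_TIME ago
    demand = random.randint(5, 15)
    demanded += demand
    sales = min(inventory, demand) # unmet demand is lost, not backordered
    served += sales
    inventory -= sales
    # Order up to the base-stock level, counting stock already in transit.
    position = inventory + sum(pipeline)
    pipeline.append(max(0, BASE_STOCK - position))

fill_rate = served / demanded
print(f"fill rate: {fill_rate:.3f}")
```

Sweeping BASE_STOCK and re-running exposes the service-level/inventory trade-off; the adaptive agentic designs in the paper would, in effect, learn such policy parameters rather than fix them up front.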
Review Article
Open Access December 26, 2018

Understanding Consumer Behavior in Integrated Digital Ecosystems: A Data-Driven Approach

This study aims to achieve a new understanding of how, why, and when consumer behavior is shaped, enacted, and experienced inside and across integrated digital ecosystems related to large-scale trackable goods, all in service of helping marketers optimize their business performance in the new economy. The pioneering understanding begins by exploring what motivates the choices of a homogeneous group of consumers to organize their consumption of national and store brand varieties of consumer package goods in a certain manner. Thereafter, the essay explores how, if at all, the other digital activities of consumers across various product-related digital spaces and on various platforms build expertise and interest in these products such that they exert an effect on the purchase choices for these products. The essay then advances to asking how online information seeking, in various product-related digital spaces, on various platforms, and from various sources, and taking place at various points in the purchase journey affects online-offline dynamics in purchasing these products. Thereafter, the research examines how paid digital communication in various product-related digital spheres and forms, enabled by consumer advertising engagement on various platforms, boosts the offline sales of these products. Finally, by employing a new methodology that combines consumer scanning data, self-reported online activity data, and transaction data collected from an ad-tech partner, the research presents a fresh set of marketing action levers and performance outcomes on chosen products. Along the way, progress is made on four under-investigated topics in the advertising literature – the role of consumer actors and their expertise in the online-offline purchasing dynamics for ads, advertising engagement, consumer digital spaces, and consumer digital activity investment.
Review Article
Open Access December 29, 2020

Enhancing Government Fiscal Impact Analysis with Integrated Big Data and Cloud-Based Analytics Platforms

While several application domains are exploiting the added value of analytics over various datasets to obtain actionable insights and drive decision making, the public policy management domain has not yet taken advantage of the full potential of such analytics and data models. To this end, in this paper the authors present an overall architecture of a cloud-based environment that facilitates data retrieval and analytics, as well as policy modelling, creation, and optimization. The environment enables data collection from heterogeneous sources, linking and aggregation, complemented with data cleaning and interoperability techniques. An innovative approach for analytics as a service is introduced and linked with a policy development toolkit, an integrated web-based environment that fulfils the requirements of the public policy ecosystem stakeholders [1]. Large information databases on various public issues exist, but their usage for public policy formulation and impact analysis has been limited so far, as no cloud-based service ecosystem exists to facilitate their efficient exploitation. With the increasing availability and importance of both big and traditional public data, the need to extract, link, and utilize such information efficiently has arisen. Current data-driven web technologies and models are not aligned with the needs of this domain, and therefore potential candidates for big data, cloud-based, and service-oriented public policy analysis solutions should be investigated, piloted, and demonstrated [2]. This paper presents the conceptual architecture of such an ecosystem based on the capabilities of state-of-the-art cloud and web technologies, as well as the requirements of its users.
Review Article
Open Access December 27, 2022

Big Data-Driven Time Series Forecasting for Financial Market Prediction: Deep Learning Models

Financial markets have become more and more complex, and so has the number of data sources, making stock price prediction a difficult but important task. The time dependencies in stock price movements tend to elude traditional models. In this work, a hybrid ARIMA-LSTM model is proposed to enhance the accuracy of stock price forecasts. Based on time series indicators such as the adjusted closing prices of S&P 500 stocks over a decade (2010–2019), the ARIMA-LSTM model combines autoregressive time series forecasting with the substantial sequence-learning capability of LSTM. Data preprocessing in all its aspects, including missing-value interpolation, outlier detection, and Min-Max data scaling, guarantees data quality. The model is trained on a 90/10 training/testing split and evaluated with the main performance metrics: MAE, MSE, and RMSE. As the results indicate, the proposed ARIMA-LSTM model achieves an MAE of 0.248, an MSE of 0.101, and an RMSE of 0.319, demonstrating high accuracy in stock price prediction. A comparative analysis against machine-learning reference models, namely Artificial Neural Networks (ANN) and Back-Propagation Neural Networks (BPNN), further illustrates the suitability and superiority of the ARIMA-LSTM approach, which attains the lowest MAE and strong predictive capability. This work demonstrates the efficiency of integrating classical time series models with deep learning methods for financial forecasting.
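The hybrid decomposition described above, a linear time-series stage plus a learned correction on its residuals, can be sketched in a few lines. The actual paper pairs ARIMA with an LSTM; this toy substitutes an AR(2) least-squares fit for the ARIMA stage and a simple residual-mean correction for the LSTM, on a synthetic price-like series, so every number here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic series standing in for adjusted closing prices.
n = 400
t = np.arange(n)
series = 50 + 0.05 * t + 2 * np.sin(t / 10) + rng.normal(0, 0.5, n)

# Min-Max scaling to [0, 1], as in the described preprocessing.
lo, hi = series.min(), series.max()
scaled = (series - lo) / (hi - lo)

# 90/10 train/test split on the scaled series.
split = int(0.9 * n)

# Stage 1 (linear part): AR(2) fit by least squares, a stand-in for ARIMA.
def make_lags(x, p=2):
    X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
    return np.c_[np.ones(len(X)), X], x[p:]

X_tr, y_tr = make_lags(scaled[:split])
coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

X_te, y_te = make_lags(scaled[split - 2 :])
linear_pred = X_te @ coef

# Stage 2 (non-linear part): the paper trains an LSTM on the residuals;
# here a moving average of recent training residuals stands in for it.
resid_tr = y_tr - X_tr @ coef
hybrid_pred = linear_pred + resid_tr[-20:].mean()

mae = np.abs(hybrid_pred - y_te).mean()
mse = ((hybrid_pred - y_te) ** 2).mean()
rmse = np.sqrt(mse)
print(f"MAE={mae:.3f}  MSE={mse:.3f}  RMSE={rmse:.3f}")
```

The point of the decomposition is that the linear stage absorbs the autoregressive structure, leaving the second learner a smaller, stationary residual problem, which is where an LSTM earns its keep on real data.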
Article
Open Access December 27, 2020

Building Foundational Data Products for Financial Services: A MDM-Based Approach to Customer, and Product Data Integration

Imagine a consumer financial services company with 20 million customers. Its sales and marketing organizations collaborate across product lines, deploying hundreds of marketing campaigns each quarter that aim to increase customer product usage and/or cross-buying of products. Each campaign is based on forecasts of customer responses derived from predictive models updated every quarter. The goals of these models are to achieve large return-on-investment ratios and to maximize contribution to local profit centers. What is important is that this modeling is based only on data created, curated, and maintained by these marketing organizations. The difference today is that the modeling is no longer based solely on a small number of response-determined variables that are constantly assessed in terms of importance. A quarterly campaign update generates hundreds of statistical models, involving campaign responses, purchase-lag time, the relative magnitude of the direct effect, and cross-buying effects, using thousands of variables, including customer demographics, life stage, product transactions, household composition, and customer service history. It is a network of models, not just a table of variable-importance values. But that is only part of the story of data products. The predictive modeling utilized by these campaign plans is based on analytics and data preparation, which are data products in their most elementary form. These products would be even more rudimentary were they not crafted quarterly by highly skilled, experienced modelers using advanced software and processes. Most companies have enough data to create models that contain not simply hundreds of variables but thousands, so that the focus can return to information instead of data reduction. These models largely replace the internal econometric models previously used to produce advance forecasts in the absence of campaign modeling.
These forecasts were used to simulate ROI and contribution for the planned campaigns. In the old days, reliance on econometrically forecast ROI-guideline contribution values reduced reliance on the marketing-campaign modelers because of a lack of trust in their predictive ability.
Review Article
Open Access December 27, 2022

Survey of Automated Testing Frameworks and Tools for Software Quality Assurance: Challenges and Best Practices

Automated testing and software quality assurance (SQA) practices are essential for ensuring the reliability, scalability, and maintainability of modern software systems. This paper presents a review of widely used automated testing frameworks, including Keyword-Driven, Data-Driven, Behavior-Driven Development (BDD), and Record/Playback approaches, outlining their methodologies, benefits, and limitations in different development contexts. In parallel, it examines established SQA techniques such as Test-Driven Development, static analysis, and white-box testing, which provide systematic methods for defect detection and quality improvement. The study further examines the role of practical tools, such as Selenium, TestNG, and JUnit, in supporting test automation and validation activities. In addition to highlighting technical capabilities, the paper identifies common challenges faced in automation, including incomplete requirements, integration complexities, and the maintenance of evolving test suites. Recommended best practices are provided to address these issues, offering guidance for organizations seeking to strengthen their software testing processes through structured frameworks, adaptive techniques, and reliable automation tools.
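The data-driven pattern this survey covers, one test routine exercised against a table of cases, can be sketched in a framework-agnostic way. Tools such as TestNG's DataProvider or JUnit's parameterized tests industrialize the same idea; the function under test and the case table below are invented purely for illustration.

```python
# Hypothetical function under test, invented for this example.
def normalize_username(raw: str) -> str:
    return raw.strip().lower()

# The data table drives the test: adding a case means adding a row, not code.
CASES = [
    ("Alice", "alice"),
    ("  BOB  ", "bob"),
    ("carol", "carol"),
]

def run_data_driven(cases):
    """Run the single test routine over every row, collecting failures."""
    failures = []
    for raw, expected in cases:
        actual = normalize_username(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

if __name__ == "__main__":
    failed = run_data_driven(CASES)
    print(f"{len(CASES) - len(failed)}/{len(CASES)} cases passed")
```

The separation of test logic from test data is what keeps such suites maintainable as requirements evolve, which is one of the challenges the paper highlights.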

Query parameters

Keyword:  Data-Driven
