Open Access January 09, 2025

Advances in the Synthesis and Optimization of Pharmaceutical APIs: Trends and Techniques

Abstract
The synthesis and optimization of Active Pharmaceutical Ingredients (APIs) is fundamental to pharmaceutical drug development, directly influencing drug efficacy, safety, and cost-effectiveness. Over recent years, significant advancements in synthetic methodologies and manufacturing technologies have transformed API production. This manuscript provides an overview of the latest innovations in API synthesis, focusing on key techniques such as green chemistry, continuous flow chemistry, biocatalysis, and automation. Green chemistry principles, including solvent substitution and catalytic reactions, have enhanced sustainability by reducing waste and energy consumption. Continuous flow chemistry offers improved reaction control, scalability, and safety, while biocatalysis provides an eco-friendly alternative for synthesizing complex and chiral APIs. Additionally, the integration of automation and advanced process control using machine learning and real-time monitoring has optimized production efficiency and consistency. The manuscript also discusses the challenges associated with regulatory compliance and quality assurance, highlighting the role of advanced analytical techniques such as HPLC, NMR, and mass spectrometry in ensuring API purity. Looking ahead, personalized medicine and smart manufacturing technologies, including blockchain for traceability, are expected to drive further innovation in API production. This review concludes by emphasizing the need for continued advancements in sustainability, efficiency, and scalability to meet the evolving demands of the pharmaceutical industry, ultimately enabling the development of safer, more effective, and environmentally responsible medicines.
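
The abstract's mention of real-time monitoring and machine-learning-assisted process control can be made concrete with a minimal sketch: an exponentially weighted moving average (EWMA) control chart that flags drift in a simulated process variable. The target value, sigma, and simulated readings below are illustrative assumptions, not figures from the article.

```python
import random

def ewma_monitor(readings, lam=0.2, target=100.0, sigma=1.5, L=3.0):
    """Flag out-of-control points in a process variable (e.g. a reactor
    temperature) using an EWMA control chart -- an illustrative stand-in
    for the real-time monitoring discussed in the abstract."""
    z = target
    alerts = []
    # Steady-state control limit for the EWMA statistic
    limit = L * sigma * (lam / (2 - lam)) ** 0.5
    for i, x in enumerate(readings):
        z = lam * x + (1 - lam) * z
        if abs(z - target) > limit:
            alerts.append((i, x, z))
    return alerts

if __name__ == "__main__":
    random.seed(0)
    # Simulated in-control readings followed by a small upward drift
    data = [random.gauss(100, 1.5) for _ in range(50)]
    data += [random.gauss(102.5, 1.5) for _ in range(20)]
    for idx, raw, smoothed in ewma_monitor(data):
        print(f"sample {idx}: reading={raw:.2f}, EWMA={smoothed:.2f} out of control")
```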
Review Article
Open Access May 14, 2024

A Review of Reliability Techniques for the Evaluation of Programmable Logic Controllers

Abstract
Programmable logic controllers (PLCs) are essential parts of contemporary industrial automation systems, responsible for controlling and monitoring a wide variety of operations. PLC reliability is critical to maintaining the continuous and secure operation of industrial systems. A wide range of reliability strategies has been used to improve the reliability of PLCs, and this article examines them methodically. After thoroughly reviewing the body of literature, the evaluation classified PLC reliability techniques into Root Cause Analysis (RCA), Reliability Centered Maintenance (RCM), Hazard Analysis (HA), Reliability Block Diagram (RBD), Fault Tree Analysis (FTA), Physics of Failure (PoF), and FMEA/FMECA. The proportion of reviewed papers using each technique to increase the reliability of PLCs showed that RCA, which makes up 20% of the publications reviewed, has been used the most, followed by HA, RCM, FMEA/FMECA, RBD, FTA, and PoF, which account for 17%, 16%, 16%, 13%, 10%, and 8% of the articles reviewed, respectively. The paper also discusses new developments and trends in PLC reliability, such as the application of machine learning (ML) and artificial intelligence (AI) to fault detection and predictive maintenance.
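
As a concrete illustration of one of the reviewed techniques, the sketch below shows how fault tree analysis (FTA) combines basic-event probabilities through AND/OR gates into a top-event probability. The subsystem names and failure probabilities are hypothetical, chosen only to demonstrate the calculation.

```python
from math import prod

def or_gate(probs):
    """Top event occurs if any input event occurs (independent events)."""
    return 1 - prod(1 - p for p in probs)

def and_gate(probs):
    """Top event occurs only if all input events occur (independent events)."""
    return prod(probs)

# Hypothetical annual failure probabilities for PLC subsystems
p_cpu, p_psu_a, p_psu_b, p_io = 0.02, 0.05, 0.05, 0.03

# Redundant power supplies fail only together (AND); the PLC fails if the
# CPU, the power subsystem, or the I/O module fails (OR).
p_power = and_gate([p_psu_a, p_psu_b])
p_plc_failure = or_gate([p_cpu, p_power, p_io])
print(f"Top-event probability (PLC failure): {p_plc_failure:.4f}")
```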
Review Article
Open Access May 13, 2024

A Review of Components of Reliability for the Evaluation of Programmable Logic Controllers

Abstract
Programmable Logic Controllers (PLCs) are essential to industrial automation, making the control of processes smooth and effective. Assessing PLC reliability is increasingly important as more and more sectors depend on PLCs for critical tasks. This study presents an in-depth review of the components necessary to evaluate PLC system reliability. The review first clarifies the basic concepts of reliability, highlighting the significance of system uptime and the ramifications of failures in industrial settings. It then examines the different elements that contribute to a PLC's overall reliability, namely availability, testability, and (maintenance and maintainability). The percentage of reviewed papers that employed (maintenance and maintainability), testability, or availability to improve the reliability of PLC systems showed that availability and (maintenance and maintainability) have been employed the most for enhancing system reliability, accounting for 32% each of the publications analyzed, followed by testability at 28%. A scatter chart depicting the progression of reliability components from 2010 to 2023 also shows that the use of availability and of (maintenance and maintainability) has been increasing. This upward trend can be explained by the fact that repairable systems rely heavily on availability, that maintenance and maintainability help avoid unnecessary equipment breakdowns, and that testability ensures the ease with which the functionality of any system or component can be ascertained with the required level of precision.
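
Since availability is identified as the most used reliability component, a short worked example may help: inherent availability is commonly computed as A = MTBF / (MTBF + MTTR). The MTBF and MTTR figures below are assumed values for illustration only.

```python
def inherent_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Inherent (steady-state) availability: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical figures for a repairable PLC installation
mtbf = 8760.0   # mean time between failures: one failure per year on average
mttr = 4.0      # mean time to repair: four hours per repair

a = inherent_availability(mtbf, mttr)
downtime_per_year = (1 - a) * 8760
print(f"Availability: {a:.5f}  (~{downtime_per_year:.1f} h downtime per year)")
```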
Review Article
Open Access August 16, 2023

Pharmaceutical Drug Traceability by Blockchain and IoT in Enterprise Systems

Abstract
Pharmaceutical drug traceability is a regulatory requirement adopted by most nations in the world. A comprehensive analysis was carried out to explain the benefits of adopting enterprise systems for pharmaceutical drug traceability. Counterfeit drugs are fake medicines produced with incorrect potency or incorrect ingredients. Solving the drug counterfeiting problem by identifying the most effective and innovative technologies for protecting people's health is therefore essential. Drug serialization is an essential concept for drug traceability in the pharmaceutical supply chain (PSC). Blockchain is a recent technology that makes drug distribution in the supply chain more secure. Blockchain-based drug traceability is built on a distributed shared data platform that records information that is irreversible, reliable, responsible, and transparent across the PSC. Two widely used blockchain frameworks, Hyperledger Fabric and Besu, satisfy important criteria for medication traceability, such as privacy, trust, transparency, security, authorization and authentication, and scalability. Researchers in health informatics can use blockchain designs as a useful road map to develop and implement end-to-end pharmaceutical drug traceability in the supply chain and prevent drug counterfeiting. Industrial IoT is also a key component for the pharmaceutical industry; IoT systems in pharmaceutical drug traceability can be beneficial as they are based on automation and computational methodologies.
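
To illustrate the traceability property the abstract attributes to blockchain, here is a toy hash-chained ledger of serialization events. It is not Hyperledger Fabric or Besu, just a minimal sketch of how back-linked hashes make recorded events tamper-evident; the GTIN, serial, and event names are made up.

```python
import hashlib, json, time

def make_block(event: dict, prev_hash: str) -> dict:
    """Append a serialization event (e.g. manufacture, ship, dispense) to a
    toy hash-chained ledger. Tampering with any block breaks later hashes."""
    body = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    """Recompute every hash and check that the back-links are intact."""
    prev = "0" * 64
    for block in chain:
        body = {k: block[k] for k in ("event", "timestamp", "prev_hash")}
        if block["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain, prev = [], "0" * 64
for evt in [{"gtin": "00312345678906", "serial": "A1", "step": "manufactured"},
            {"gtin": "00312345678906", "serial": "A1", "step": "shipped"},
            {"gtin": "00312345678906", "serial": "A1", "step": "dispensed"}]:
    block = make_block(evt, prev)
    chain.append(block)
    prev = block["hash"]
print("chain valid:", verify(chain))
```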
Review Article
Open Access December 15, 2022

Effective Parameters to Design an Automatic Parking System

Abstract
The automated parking system is an extensive branch of smart transport systems. The smartness of such systems is determined by different parameters, such as parking maneuver planning. Coding this control system includes vehicle parking and understanding the environment. A high-quality classification mask was used on each sample to analyze the automated vehicle parking parameters. A mask region-based convolutional neural network (Mask R-CNN) was trained on top of the computationally lightweight Faster R-CNN framework and operates at five frames per second. In this paper, the rapidly-exploring random tree (RRT) method was used for routing within the parking space, and a nonlinear model predictive control (NMPC) controller was added to develop the system. We also added line-detection commands to the Mask R-CNN algorithm. The results can be useful for designing a secure automatic parking system as well as a powerful perception system.
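
A bare-bones sketch of the rapidly-exploring random tree (RRT) idea used for routing in the parking space is shown below. It plans in an empty 2-D lot with no obstacles or vehicle kinematics, so it only illustrates the tree-growing principle; all dimensions and parameters are assumptions.

```python
import math, random

def rrt(start, goal, x_max=20.0, y_max=10.0, step=0.5, goal_tol=0.5, iters=5000):
    """Bare-bones RRT in an empty 2-D lot: grow a tree of positions toward
    random samples until a node lands within goal_tol of the parking spot."""
    nodes = [start]
    parents = {0: None}
    for _ in range(iters):
        sample = (random.uniform(0, x_max), random.uniform(0, y_max))
        # Nearest existing node to the random sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        parents[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parents[j]
            return list(reversed(path))
    return None

random.seed(1)
path = rrt(start=(1.0, 1.0), goal=(18.0, 8.0))
print(f"path with {len(path)} waypoints" if path else "no path found")
```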
Article
Open Access October 28, 2021

Development of an Improved Solid Waste Collection System using Smart Sensors

Abstract
Waste collection has become a challenging task, occasioned by overflowing garbage bins littered all over the environment, causing environmental hazards and further leading to incurable diseases which endanger life. The present-day waste collection system has proven to be inefficient, considering the technological advances of recent years as well as continuous population growth. In response to this inefficiency, this work developed a model for an electronic waste collection system in a telecommunication-driven environment. The implementation adopted PIC18F4620-based instrumentation, integrated with a proximity sensor for external monitoring and level sensors for internal monitoring, while the opening and closing of the cabins was controlled by a smart switching board. Remote reporting to the waste management authority, enabling a systematically planned route map for garbage collection once a waste cabin is full, was implemented by deploying a 900 MHz transmitter interfaced with the system's controller. The results show that, with this model, the waste cabin opens only when a user approaches within the sensing distance of the system and the cabin is not full. When the cabin is full and a user approaches within sensing distance, the system directs the user to the nearest waste cabin by displaying a message on the LCD (Liquid Crystal Display), while notifying the relevant authority via SMS so the cabin can be emptied. The automation incorporated into the system had no impact on the success rate or availability of the system, while introducing a latency of 5.6 seconds, which is only 28.0% of the maximum allowable latency for this kind of system, and protecting the environment from pollution and the spread of diseases. This work highlights the potential of an Electronic Waste Collection System (EWCS) in monitoring and controlling waste disposal for a healthy and clean environment.
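
The decision logic described, open the cabin only when a user is within sensing range and the cabin is not full, otherwise display a redirection message and send an SMS, can be sketched as below. The hardware interfaces are stubbed out as callables, and the threshold and messages are illustrative, not taken from the paper.

```python
FULL_THRESHOLD = 0.9  # fraction of cabin capacity at which it counts as full

def control_cycle(user_in_range: bool, fill_level: float,
                  open_cabin, show_lcd, send_sms) -> None:
    """One pass of the decision logic described in the abstract. The hardware
    interfaces (lid actuator, LCD, GSM module) are passed in as callables so
    the logic can be tested without a PIC18F4620 board."""
    if not user_in_range:
        return
    if fill_level < FULL_THRESHOLD:
        open_cabin()
    else:
        show_lcd("Cabin full - please use the nearest waste cabin")
        send_sms("Waste cabin full: schedule collection on this route")

# Simple demonstration with print-based stand-ins for the peripherals
control_cycle(True, 0.95,
              open_cabin=lambda: print("lid opened"),
              show_lcd=lambda msg: print("LCD:", msg),
              send_sms=lambda msg: print("SMS:", msg))
```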
Article
Open Access August 20, 2022

Advancing Predictive Failure Analytics in Automotive Safety: AI-Driven Approaches for School Buses and Commercial Trucks

Abstract
The recent evidence on AI in automotive safety shows the potential to reduce crashes and improve efficiency. Studies used AI techniques like machine learning and predictive analytics models to develop predictive collision avoidance systems. The studies collected data from various sources, such as traffic collision data and shapefiles. They utilized deep learning neural networks and 3D visualization techniques to analyze the data. However, there needs to be more research on AI in school bus and commercial truck safety. This paper explores the importance of AI-driven predictive failure analytics in enhancing automotive safety for these vehicles. It will discuss challenges, required data, technologies involved in predictive failure analytics, and the potential benefits and implications for the future. The conclusion will summarize the findings and emphasize the significance of AI in improving driver safety. Overall, this paper contributes to the field of automotive safety and aims to attract more research in this area.
Review Article
Open Access August 29, 2022

From Deterministic to Data-Driven: AI and Machine Learning for Next-Generation Production Line Optimization

Abstract
The advancement of modern manufacturing is synonymous with the growth of automation. Automation replaces human operators, improves productivity and quality, and reduces costs. However, the initial financial cost and knowledge requirements can be barriers to embracing automation. Manufacturers are now seeking smart manufacturing, known as the fourth industrial revolution. Smart manufacturing goes beyond automation and utilizes IoT, AI, and big data for optimized production. In a smart factory, production can be linked and controlled innovatively, leading to increased performance, agility, and reduced costs.
Review Article
Open Access December 27, 2020

Enhancing Pharmaceutical Supply Chain Efficiency with Deep Learning-Driven Insights

Abstract
The growing complexity of the operating environment urges pharmaceutical innovation. This essay addresses the need for the integration of advanced technologies in the pharmaceutical supply chain. It justifies the value proposition and presents a concrete use case for the integration of deep learning insights to make data-driven decisions. The supply chain has always been a priority for the pharmaceutical industry; research and development recognizes companies' increasing investment in big data strategies, with big data tool adoption expected to grow at a substantial compound annual growth rate (CAGR). The work presented herein has a preliminary, explorative character, aiming to recover and integrate evidence from partly overlooked practical experience and know-how. The practical relevance of the essay is directed toward practitioners in pharmaceutical production, supply chain management, logistics, and regulatory agencies. The literature has shown a long-term concern for enhanced performance in the pharmaceutical supply chain network. This essay demonstrates the application of deep learning-driven insights to reveal non-evident flow dependencies. The main aim is to present a comprehensive insight into deep learning-driven decision support. The supply chain is portrayed in a holistic manner, seeking end-to-end visibility. Implications for public policy are discussed, such as data equity: many countries are protecting their populations and economic growth by building resilience and efficiency to ensure the capacity to move goods across supply chains. The implementation strategy is also covered. The combined reduction of variability and system noise (dampened through the inclusion of all stakeholders), together with improved efficiency and greater reliability of stochastic flows understood through deep learning and data, results in increased responsiveness of supply chains for pharmaceutical products. Future work involves the integration of external data, closing the loop between planning and its application in reality.
Review Article
Open Access January 10, 2022

Composable Infrastructure: Towards Dynamic Resource Allocation in Multi-Cloud Environments

Abstract
To ensure maximum flexibility, service providers offer a variety of computing options with regard to CPU, memory capacity, and network bandwidth. At the same time, the efficient operation of current cloud applications requires an infrastructure that can adjust its configuration continuously across multiple dimensions, which are generally not statically predefined. Our research shows that these requirements are hardly met with today's typical public cloud and management approaches. To provide such a highly dynamic and flexible execution environment, we propose the application-driven autonomic management of data center resources as the core vision for the development of a future cloud infrastructure. As part of this vision and the required gradual progress toward it, we present the concept of composable infrastructure and its impact on resource allocation for multi-cloud environments. We introduce relevant techniques for optimizing resource allocation strategies and indicate future research opportunities [1]. Many cloud service providers offer computing instances that can be configured with arbitrary capacity, depending on the availability of certain hardware resources. This level of configurability provides customers with the desired flexibility for executing their applications. Because of the large number of such prerequisite instances with often varying characteristics, service consumers must invest considerable effort to set up or reconfigure elaborate resource provisioning systems. Most importantly, they must differentiate the loads to be distributed between jobs that need to be executed versus placeholder jobs, i.e., jobs that trigger the automatic elasticity functionality responsible for resource allocator reconfiguration. Operations research reveals that the optimization of resource allocator reconfiguration strategies is a fundamentally difficult problem due to its NP-hardness. Despite these challenges, dynamic resource allocation in multi-clouds is becoming increasingly important since modern Internet-based service settings are dispersed across multiple providers [2].
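
Because optimal resource allocation is NP-hard, as the abstract notes, practical allocators lean on heuristics. The sketch below shows a first-fit-decreasing packing of hypothetical job CPU demands onto instances of a fixed capacity; it is a generic illustration, not the allocation strategy proposed in the article.

```python
def first_fit_decreasing(jobs, instance_capacity):
    """Greedy heuristic for packing job demands (CPU cores) onto instances of
    a fixed capacity. Optimal packing is NP-hard, so composable or multi-cloud
    allocators typically rely on heuristics such as this one."""
    instances = []  # each entry is the list of jobs placed on one instance
    for job in sorted(jobs, key=lambda j: j["cores"], reverse=True):
        for inst in instances:
            if sum(j["cores"] for j in inst) + job["cores"] <= instance_capacity:
                inst.append(job)
                break
        else:
            instances.append([job])  # no instance had room: open a new one
    return instances

jobs = [{"name": f"job{i}", "cores": c} for i, c in enumerate([8, 2, 4, 7, 1, 5, 3])]
placement = first_fit_decreasing(jobs, instance_capacity=10)
for i, inst in enumerate(placement):
    used = sum(j["cores"] for j in inst)
    print(f"instance {i}: {[j['name'] for j in inst]} ({used} cores)")
```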
Review Article
Open Access November 16, 2022

AI-Driven Automation in Monitoring Post-Operative Complications Across Health Systems

Abstract
Artificial intelligence systems have been previously used to predict post-operative complications in small studies and single institutions. Here we developed a robust artificial intelligence model that predicts the risk of having cardiac, pulmonary, thromboembolic, or septic complications after elective, non-cardiac, non-ambulatory surgery. We combined structured and unstructured electronic health record data from 3.5 million surgical encounters from 25 medical centers between 2009 and 2017. Our neural network model predicted postoperative comorbidities 15 to 80 times faster than classical models. As such, our model can be used to assess the risk of having a specific complication postoperatively in a fraction of a second. With our model, we believe clinicians will be able to identify high-risk surgical patients and use their good judgment to mitigate upcoming risks, ultimately improving patient outcomes [1].
Case Report
Open Access December 27, 2019

A Comprehensive Study of Proactive Cybersecurity Models in Cloud-Driven Retail Technology Architectures

Abstract
This is a comprehensive, multi-year study designed to explore proactive security technologies implemented in cloud-driven retail technology architectures. Deploying cloud technologies in the retail environment creates a need for more comprehensive and proactive security technologies that protect both the psychological estate and the fiscal estate. This work contributes to cloud-driven retail research by investigating anticipatory security technologies across numerous case studies. These case studies offer best-practice models for elevating proactive cybersecurity in retail environments. The academic and professional communities currently lack security information and practices that apply to the retail environment, and it is anticipated that the final results of this project will help shape the next wave of research on cybersecurity in retail environments. Many retail organizations are restricted to reactive security operations, and advanced security technologies still operate on piloted activations that require the intervention of security analysts; in practice, however, basic security products and security operations are now piloted by automation and machine learning. In one case study, a retail CTO shares a forensics example using a proactive security technology aimed at both the psychological estate and the fiscal estate. In another case study, direct discussions provide a retail university lecturer with insight into the use of data-driven intelligence for inventory management. Finally, card technology is presented as an example of a security model that can be offered as a service to retail organizations.
Review Article
Open Access December 27, 2019

Data Engineering Frameworks for Optimizing Community Health Surveillance Systems

Abstract
A changing world demands optimized health surveillance systems, and data engineering can help. There is a growing urgency to manage public health and emergency response practices effectively today, in light of complex and emerging health threats. Fortunately, a host of new tools has emerged, including big and streaming data sources, methods such as machine learning, technologies like blockchain and secure enclaves, and new means of data storage and retrieval. But with these innovations comes a grand challenge: how to blend them with, and adapt them to, traditional public health practices. The long-established infrastructures and protocols that protect and ensure the welfare of communities need to change, or at least be updated, to sustain their impact on the health outcomes and community wellbeing they were designed to fortify. It is in this vein that this essay is written. The investigation queries what the aspects and influences might be of the many emerging data engineering frameworks that are either being developed specifically for health surveillance and wellness or are available to be co-opted from devices and services already thriving in the current market and research milieu. Knowing what these may be could aid in shaping their uptake and spread, ensuring their beneficial impact on the communities that stand to gain the most. The essay is divided into several key segments. After this introduction, section two details the research methods. The section that follows reviews the maximum health outcome potential of these novel frameworks. Part four takes a more critical approach, addressing how the success of these methods may be hindered and outlining future research avenues. Lastly, the conclusion suggests actions to support the implementation of these frameworks and offers thoughts for further research [1].
Case Report
Open Access December 27, 2021

Revolutionizing Risk Assessment and Financial Ecosystems with Smart Automation, Secure Digital Solutions, and Advanced Analytical Frameworks

Abstract
For years, risk assessment and financial calculations have been based on mathematical, statistical, and actuarial studies of existing and historical data. The manual process of building datasets, processing data, deriving trends, identifying periodicities, and analyzing diagnostics is extremely expensive and time-consuming. With the automation and evolution of data science technologies, organizations are now bringing in niche data, such as unstructured data, which contain more disruptive and precise signals for decision-making—thereby making predictions and derivative valuations more robust. This discussion highlights how investment decision-making and financial ecosystem activities are set to be transformed with the power of technical automation, data, and artificial intelligence. A noted trend in the financial investment sector is that financial valuations are highly predictive and highly non-linear in long-term occurrences. To understand these robust evolving signals and execute profitable strategies upon them, the investment management process needs to be very dynamic, open, smart, and technically deep. However, with current manual processes, reaching a high-end asset prediction still seems like a shot in the dark. In parallel, open and democratically developed financial ecosystems query relatively riskless premium opportunities in high-finance valuation and perception. The process of evolving financial ecosystems or the use of automated tools and data to move to unique frontiers could make high-yield profiting opportunities very safe and entirely riskless. Financial economic theories and realistic approximation models support this.
Review Article
Open Access December 27, 2021

Innovative Financial Technologies: Strengthening Compliance, Secure Transactions, and Intelligent Advisory Systems Through AI-Driven Automation and Scalable Data Architectures

Abstract
Through a digitally connected ecosystem, the innovative realm of fintech significantly enhances human capabilities across various dimensions. AI-based fintech solutions are increasingly proving invaluable by providing effective enforcement of regulations that ensure compliance and protect the stakeholders involved. Numerous expert investigations conducted in the arena of high-technology litigation have reinforced both the pressing need for, and the immense value of, enforced compliance in today's fast-paced digital landscape. Open banking APIs have pioneered this critical regulatory enforcement role, allowing broader access and improved services for consumers. Predictive AI, supported by sophisticated validation systems, represents a fundamental evolution of the rule-based legal formulations that govern many aspects of financial transactions. These advanced products have been deployed within global legislative codes, allowing for standardized practices, and consequently all market sectors have quickly adopted them to remain competitive and compliant. Practitioner commentary makes clear that awareness of these groundbreaking innovations must be converted into a steadfast commitment to continue launching natural language processing products that can refine consumer interaction. The increasing dependency of the financial expert community on these capabilities underscores the paramount importance they now hold for clients and end users alike, shaping the future of finance in profound ways [1].
Review Article
Open Access December 26, 2021

Deep Learning Applications for Computer Vision-Based Defect Detection in Car Body Paint Shops

Abstract
Major automated plants have produced large volumes of high-quality products at low cost by introducing various technologies, including robotics and artificial intelligence. Defects on the surface of products carry economic loss and sometimes loss of functionality, and because defective products are rare, most production planning is based on prediction and is subject to hidden fluctuations. Detecting subtle defects in images is costly, and the process needs better support for continued progress and quality enhancement. Paint shop defects should be analyzed from color changes to detect them effectively despite variability in production over time. It is not easy to capture visible-light images without noise because painted surfaces are glossy, and some illumination artifacts and shadows remain in images, even at large sizes and high resolutions. Computer vision models also need to reflect both the color and the texture information of the various painted surfaces to classify defects precisely. Several automated detection systems have been applied to paint shop inspection using lasers, infrared, X-ray, electrical, magnetic, and acoustic sensors. The incidence of paint shop defects can be low, but even rare defects affect functionality, and such cases are treated as "lessons learned." Lately, artificial intelligence has been introduced to factory automation, and many defect detection efforts now build on machine learning and deep learning. Recent attempts at deep learning-based defect detection propose straightforward techniques using specific neural network architectures with big data. However, big data in this setting is still in its early stages, and significant challenges exist in normalizing and annotating such data. To obtain cost-efficient and timely solutions tailored to automotive paint shops, it may be better to combine deep learning with traditional computer vision and more elaborate machine learning methods.
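
A minimal sketch of the kind of deep learning model the abstract discusses: a small convolutional network (in PyTorch) that classifies an RGB patch of a painted surface into defect classes. The architecture, patch size, and class count are assumptions for illustration, not the article's model.

```python
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    """Small CNN that maps an RGB patch of a painted surface to defect
    classes (e.g. crater, dirt inclusion, run, no defect). Purely
    illustrative: layer sizes and class names are assumptions."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = DefectClassifier()
patch = torch.randn(1, 3, 64, 64)   # one synthetic 64x64 RGB patch
logits = model(patch)
print("predicted class index:", logits.argmax(dim=1).item())
```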
Review Article
Open Access December 27, 2020

Enhancing Regulatory Compliance in Finance through Big Data Analytics and AI Automation

Abstract
This paper shows how Big Data Analytics (BDA) and Artificial Intelligence (AI) automation facilitate regulatory compliance in finance. Regulatory compliance is essential in helping institutions mitigate reputational, litigation, and financial risk. Existing literature reveals several preconditions for compliance; however, much of it has adopted an internal view of compliance without considering external regulatory frameworks. This research draws on the cognitive model of regulation, which treats regulatory compliance as a social construct. It uses a triangulated research method comprising a literature review, interviews with trade compliance experts, and a questionnaire survey of compliance practitioners to understand how regulation affects compliance and what role ICTs play in implementing compliance. The findings present a regulatory compliance framework comprising four cognitive stages and a conceptual regulatory compliance system that shows how BDA and AI automation are applied to mitigate regulatory complexity and enhance regulatory compliance. The conceptual system shows how BDA and AI enable institutions to dynamically assess regulatory risk, automatically monitor compliance, and intelligently predict risk violations, mitigating regulatory complexity and avoiding the production of unnecessary documents. The study provides theoretical contributions to understanding regulatory evolution and compliance, and practical implications for understanding how regulation becomes more complicated and how the elements of a regulatory compliance system mitigate proliferating regulations. Additionally, it opens avenues for future research into competing regulatory mandates and how institutions cope with them. Regulations are important for ensuring compliance and governance in finance and for curbing systemic risk, but complying with them is difficult due to their growing volume, complexity, and fragmentation. Institutions use large-scale Information and Communication Technologies (ICTs), such as BDA and AI automation, to monitor compliance and mitigate regulatory complexity. Nevertheless, little is known about how firms comply with regulation, and most literature neither thoroughly investigates regulatory elements nor explicitly relates them to compliance.
Review Article
Open Access December 26, 2018

Understanding Consumer Behavior in Integrated Digital Ecosystems: A Data-Driven Approach

Abstract
This study aims to achieve a new understanding of how, why, and when consumer behavior is shaped, enacted, and experienced inside and across integrated digital ecosystems related to large-scale trackable goods, all in service of helping marketers optimize their business performance in the new economy. The pioneering understanding begins by exploring what motivates the choices of a homogeneous group of consumers to organize their consumption of national and store brand varieties of consumer package goods in a certain manner. Thereafter, the essay explores how, if at all, the other digital activities of consumers across various product-related digital spaces and on various platforms build expertise and interest in these products such that they exert an effect on the purchase choices for these products. The essay then advances to asking how online information seeking, in various product-related digital spaces, on various platforms, and from various sources, and taking place at various points in the purchase journey affects online-offline dynamics in purchasing these products. Thereafter, the research examines how paid digital communication in various product-related digital spheres and forms, enabled by consumer advertising engagement on various platforms, boosts the offline sales of these products. Finally, by employing a new methodology that combines consumer scanning data, self-reported online activity data, and transaction data collected from an ad-tech partner, the research presents a fresh set of marketing action levers and performance outcomes on chosen products. Along the way, progress is made on four under-investigated topics in the advertising literature – the role of consumer actors and their expertise in the online-offline purchasing dynamics for ads, advertising engagement, consumer digital spaces, and consumer digital activity investment.
Review Article
Open Access December 27, 2020

Optimizing Unclaimed Property Management through Cloud-Enabled AI and Integrated IT Infrastructures

Abstract
With unclaimed property assets reaching record levels, businesses have become, in some cases, overwhelmed and hamstrung by stagnant, unoptimized processes. That strain is compounded by ever-evolving regulatory changes, resulting in organizations struggling to hit compliance deadlines while delivering an optimal claimant experience. Often, early systems had periods of short-term success but are now on the verge of obsolescence, resulting in stressed workflows and cumbersome integrations. Deploying an integrated IT infrastructure, supported by cloud-enabled AI, represents the quickest path to modernizing unclaimed property management. A fully integrated IT infrastructure is crucial to optimize the management of unclaimed property [1]. When standalone solutions exist across an organization, companies miss out on automation opportunities generated through the interconnectedness of systems and data. AI presents organizations with the opportunity to bridge these gaps, enabling a vast library of applications to improve the strained workflows of unclaimed property teams. Automated data extraction, document comparison, fraudulent claim detection, and workflow completion analysis are just a few popular applications well suited to the unclaimed property space. In addition to the lagging technology currently deployed by many organizations, the unclaimed property landscape itself is evolving. Compliance issuance, asset availability, rates, the ability to collect fraudulently posted claims, and the claimant experience have all become hot-button items that are now front of mind for regulatory agencies and businesses alike. Issuing due diligence letters in a compliant manner, accommodating claimant inquiries regarding held assets, and managing, processing, and understanding the operational impact of rate changes are vexing problems many organizations now find themselves playing catch-up to address. The opportunity posed by cloud-enabled AI is furthered by economic, regulatory, and reporting-cycle pressures on unclaimed property teams to do more with the same or fewer resources. It is no longer simply a case of hitting the audit deadline and checking off a box, but an emerging priority for businesses on all sides of the market, from Fortune 500 to mid-market firms. In-house shared service teams are comfortable monitoring and curating business data; however, unclaimed property is unfamiliar territory with a learning curve, compliance gaps, and operational holes that, if ignored, stand to scale up exponentially. The combined fallout from regulatory changes and the recent pandemic has only made the situation riskier, with increased volatility in balancing time-sensitive tasks against stringent regulatory deadlines and growing claimant outreach.
Review Article
Open Access November 24, 2022

Bridging Traditional ETL Pipelines with AI Enhanced Data Workflows: Foundations of Intelligent Automation in Data Engineering

Abstract
Machine Learning (ML) and Artificial Intelligence (AI) are having an increasingly transformative impact on all industries and are already used in many mission-critical use cases in production, bringing considerable value. Data engineering, which combines ETL pipelines with other workflows managing data and machine learning operations, is also significantly impacted. The Intelligent Data Engineering and Automation framework offers the groundwork for intelligent automation processes. However, ML/AI are not the only disruptive forces; new Big Data technologies inspired by Web2.0 companies are also reshaping the Internet. Companies having the largest Big Data footprints not only provide applications with a Big Data operational model but also source their competitive advantage from data in the form of AI services and, consequently, impact the cost/performance equilibrium of ETL pipelines. All these technologies and reasons help explain why the traditional ETL pipeline design should adapt to current and emerging technologies and may be enhanced through artificial intelligence.
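
One way to picture an AI-enhanced ETL step is a data-driven validation gate inside an otherwise traditional extract-transform-load flow. The sketch below flags batch rows whose amounts sit far from the batch median; the source data, field names, and threshold are hypothetical.

```python
import statistics

def extract():
    """Stand-in for pulling rows from a source system."""
    return [{"order_id": i, "amount": a} for i, a in
            enumerate([120.0, 95.5, 101.2, 98.7, 5000.0, 110.3])]

def transform(rows):
    """Rule-based cleansing plus a simple data-driven check: flag amounts far
    from the batch median (robust to the outliers we are trying to catch)."""
    amounts = [r["amount"] for r in rows]
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    for r in rows:
        r["suspect"] = mad > 0 and abs(r["amount"] - med) > 5 * mad
    return rows

def load(rows):
    """Stand-in for writing to the target warehouse."""
    for r in rows:
        print(("QUARANTINE " if r["suspect"] else "LOAD       ") + str(r))

load(transform(extract()))
```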
Article
Open Access December 27, 2021

Digital Transformation in Insurance: Migrating Enterprise Policy Systems to .NET Core

Abstract
Migrating enterprise policy systems to .NET Core is a key objective of digital transformation in the insurance IT ecosystem. This change directly addresses strategic drivers: enabling adoption of cloud-first development, responding to market pressure for more flexible and usable enterprise solutions, and preparing for changing demands from regulation and compliance. Phases of operational benefit aligned with risk mitigation form the basis of the migration roadmap, with a strong focus on engaging all relevant stakeholders. Market pressure for a seamless user experience across all applications is a fundamental driver for investment in digital transformation. Gaps remain in enterprise operations, where legislative and regulatory accountability demands rigid and complex solutions that Liberty has not yet been able to provide. New risk-based capital requirements, data-sovereignty controls, controls for sensitive data in the cloud, and new audit requirements create a long list of challenges for the ecosystem that can no longer be deferred. At the same time, cross-organisational integration is becoming more important, and integrating partners from the insurance supply chain requires a much more flexible approach to development and deployment. These factors combine to generate a credible case for accelerated digital investment focused on migration to cloud platforms, with related risk mitigation, quality improvements, and flexibility benefits that close industry gaps.
Review Article
Open Access December 09, 2021

Containerization and Microservices in Payment Systems: A Study of Kubernetes and Docker in Financial Applications

Abstract
The banking sector has shown a strong interest in scaling out and utilizing the microservices architectural pattern within their payments domain, not only to manage increased transaction volumes, but also for compliance and risk-related control. Financial organizations are adopting containerization technologies like Kubernetes and Docker to align with the microservices paradigm. Containerization provides the foundation for automation and operational excellence of microservice-based applications by enabling continuous deployment and automated build-test-release cycles. However, deploying a Kubernetes cluster and the services it hosts in production is not sufficient to guarantee a secure and compliant operating environment. Kubernetes itself should be secured to protect workloads, and risks associated with the services being deployed must be managed continuously.
Review Article
Open Access December 22, 2020

Cloud Migration Strategies for High-Volume Financial Messaging Systems

Abstract
Key business objectives for digital infrastructure cloud adoption are often framed in terms of reducing cost, improving fault tolerance and resilience, simplifying scale, and enabling innovation. Given the critical nature of the financial sector, however, where timeliness and price can significantly determine an outcome, cloud migration in delivery environments demands greater throughput on the critical path and, in many enterprise-scale settings, forgoes hybrid complexity and multi-cloud risks. Nevertheless, slack in system designs does exist; financial institutions enable market functionality (trading, clearing, and best execution) despite potentially being able to meet such requirements with lower service levels than other verticals. A cloud multi-account structure for sensitive data, for example, naturally limits exposure when combined with observed risk. Fulfilling predictions of elasticity during periods of high demand usually requires support from a dedicated environment (or environments) located nearer to the operations. Components can consequently be allocated on a per-account basis or maintained as shared sink systems to which the dedicated streams write. The automation code can similarly be targeted at dedicated accounts, avoiding the resource constraints that beset such operations during industry events such as emergency triage or contact-desk surges.
Review Article
Open Access December 27, 2022

Survey of Automated Testing Frameworks and Tools for Software Quality Assurance: Challenges and Best Practices

Abstract
Automated testing and software quality assurance (SQA) practices are essential for ensuring the reliability, scalability, and maintainability of modern software systems. This paper presents a review of widely used automated testing frameworks, including Keyword-Driven, Data-Driven, Behavior-Driven Development (BDD), and Record/Playback approaches, outlining their methodologies, benefits, and limitations in different development contexts. In parallel, it examines established SQA techniques such as Test-Driven Development, static analysis, and white-box testing, which provide systematic methods for defect detection and quality improvement. The study further examines the role of practical tools, such as Selenium, TestNG, and JUnit, in supporting test automation and validation activities. In addition to highlighting technical capabilities, the paper identifies common challenges faced in automation, including incomplete requirements, integration complexities, and the maintenance of evolving test suites. Recommended best practices are provided to address these issues, offering guidance for organizations seeking to strengthen their software testing processes through structured frameworks, adaptive techniques, and reliable automation tools.
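
The data-driven style can be illustrated with pytest's parametrize feature, which runs one test body over a table of inputs and expected outputs. The bulk_discount function is a hypothetical example target, not something drawn from the surveyed tools.

```python
# test_discount.py -- a data-driven test in the pytest style: the same test
# body runs once per row of example data. The discount rule itself is a
# made-up function used only to illustrate the technique.
import pytest

def bulk_discount(quantity: int) -> float:
    """Return the discount rate for an order of the given size."""
    if quantity >= 100:
        return 0.15
    if quantity >= 10:
        return 0.05
    return 0.0

@pytest.mark.parametrize(
    "quantity, expected",
    [(1, 0.0), (9, 0.0), (10, 0.05), (99, 0.05), (100, 0.15)],
)
def test_bulk_discount(quantity, expected):
    assert bulk_discount(quantity) == expected
```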
Article
Open Access December 27, 2021

Best Practices of CI/CD Adoption in Java Cloud Environments: A Review

Abstract
Continuous integration (CI) and continuous delivery/deployment (CD) are key practices in modern software development that enable the rapid, reliable, and high-quality delivery of software. These DevOps methods automate and streamline the code development, testing, and deployment processes, which reduces integration risk, enhances productivity, and minimizes manual labor. To implement CI/CD, Java cloud applications can utilize cloud-native services such as AWS CodePipeline, Azure DevOps, and Google Cloud Build, as well as tools like Jenkins, GitLab CI/CD, GitHub Actions, CircleCI, Travis CI, and Bamboo. Basic concepts of CI/CD include automation, regular integration, intensive testing, constant feedback, and process improvement. Major pipeline phases include source code management, build automation, testing, artefact management, deployment, and monitoring. Despite clear benefits, challenges remain, including infrastructure complexity, dependency management, test reliability, and cultural barriers, particularly in large-scale or enterprise Java projects. This work provides a thorough analysis of CI/CD procedures and resources, including frameworks, best practices, and challenges for Java cloud applications, and highlights strategies to optimize adoption, improve software quality, and accelerate delivery cycles.
Review Article
Open Access December 26, 2021

Rule-Based Automation for IT Service Management Workflows

Abstract
The automation of IT Service Management (ITSM) workflows using explicit rules and data has been established for years. Domain-specific rule engines interpret rules written in declarative rule modelling languages and generate forwarding arrows to process event streams and support decision making. Such automation is augmented by rule-driven Quality Assurance for correctness, safety, and risk management. The service desk is the onshore base of an ITSM supply chain. An end-to-end incident response service resolves incidents using only onshore resources and employs back office teams to help with unresolvable incidents. The forward factories of rule-based automation for ticket processing service are identified. Several rule-based workflows in incident and change management have been published. Further glimpses of the future across all ITSM workflows are provided based on training in an online ITSM service with automated operations. Rule engines are specialised components that direct the processing of data flows according to pre-defined rules. Decision factories complement the more common event-driven rule engines. While event processing occurs below the polling frequency of the source, rules in decision factories are triggered based on the arrival of data. These factories are applied in ITSM for risk and safety evaluation and quality assurance. Rule-enriched architectures incorporate domain-specific modelling languages to ensure correctness with respect to qualitative quality attributes. Dedicated factories provide resilience, detect slack or over-utilisation, and offer point-in-time assurance and testing.
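
A minimal sketch of the condition-action pattern behind such rule engines: each rule pairs a predicate over an incoming ticket event with an action, and the engine evaluates every rule against every event. The rule names and event fields are invented for illustration, not taken from the article.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    """A declarative condition-action pair, evaluated against each event."""
    name: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

@dataclass
class RuleEngine:
    rules: list = field(default_factory=list)

    def process(self, event: dict) -> None:
        for rule in self.rules:
            if rule.condition(event):
                rule.action(event)

engine = RuleEngine(rules=[
    Rule("auto-assign P1",
         lambda e: e.get("priority") == "P1",
         lambda e: print(f"ticket {e['id']}: paged on-call team")),
    Rule("auto-close duplicates",
         lambda e: e.get("duplicate_of") is not None,
         lambda e: print(f"ticket {e['id']}: closed as duplicate of {e['duplicate_of']}")),
])

for event in [{"id": 101, "priority": "P1", "duplicate_of": None},
              {"id": 102, "priority": "P3", "duplicate_of": 97}]:
    engine.process(event)
```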
Review Article
Open Access December 18, 2020

Event-Driven Architectures for Real-Time Regulatory Monitoring in Global Banking

Abstract
The global banking industry is subject to ever-growing regulatory requirements, designed to prevent a repeat of the financial shocks that have torn through the world economy. These changes are ongoing, with new rules being enacted each year. Implementing and executing these rules and regulations requires the guiding principles from senior management to reach the product desks in a clear and efficient way, and technical systems must implement the rules. Differences in interpretation and implementation, as well as warnings, must be addressed during normal operations. Most importantly, systems must provide warning alerts to management and the business as early as possible to allow for proper handling; history has shown that the importance of early warnings has been overlooked repeatedly. Real-time capabilities are essential to meet these business needs. Organizations must therefore be ready to embrace a next-generation architecture that enables real-time alert and warning generation. Systems based on a streaming architecture, combined with systems enabling the real-time flow of events between domains and supported by orchestration, provide a solid foundation to meet these requirements.
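
The early-warning behaviour described can be sketched as a consumer of an event stream that raises an alert the moment a running measure crosses a limit. The per-counterparty exposure limit and the trade event shape below are assumptions for illustration, not a real regulatory rule.

```python
from collections import defaultdict

EXPOSURE_LIMIT = 1_000_000  # hypothetical per-counterparty limit

def monitor(events):
    """Consume trade events in arrival order and yield an alert as soon as a
    counterparty's running exposure crosses the limit -- the early-warning
    behaviour a streaming architecture is meant to provide."""
    exposure = defaultdict(float)
    for e in events:
        exposure[e["counterparty"]] += e["notional"]
        if exposure[e["counterparty"]] > EXPOSURE_LIMIT:
            yield {"counterparty": e["counterparty"],
                   "exposure": exposure[e["counterparty"]],
                   "trade_id": e["trade_id"]}

trades = [{"trade_id": 1, "counterparty": "ACME", "notional": 600_000},
          {"trade_id": 2, "counterparty": "ACME", "notional": 500_000},
          {"trade_id": 3, "counterparty": "GLOBEX", "notional": 200_000}]
for alert in monitor(trades):
    print("ALERT:", alert)
```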
Review Article
Open Access December 27, 2023

MLOps Frameworks for Reliable Model Deployment in Cloud Data Platforms

Abstract
Machine learning operations (MLOps) comprises the practices, methods, and tooling that facilitate the deployment of reliable ML models in production environments. While many aspects of cloud data platforms are designed to enable reliability, only some managed ML services support the MLOps goals of continuous integration, continuous delivery, data lineage tracking, associated reproducibility, governance, and security. Furthermore, reliability encompasses not only the fulfillment of service-level objectives, but also systematic monitoring, alerting, and incident response automation. Architectural patterns are proposed to enable reliable deployment in cloud data platforms, focusing on the implementation of continuous integration and testing pipelines for ML models and the formulation of continuous delivery and rollout strategies. Continuous integration pipelines reduce the risk of regressions and ensure sufficient model performance at the time of deployment, while continuous delivery pipelines enable rapid updates to production models within acceptable risk profiles. The landscape of publicly available MLOps frameworks, tools, and services is also examined, emphasizing the pros and cons of established and rising solutions in containerization, orchestration, model serving, and inference. Containerization and orchestration contribute to the building of reliable deployment pipelines in cloud data platforms, whether through general-purpose tools (e.g., Docker and Kubernetes) or solutions tailored for ML workloads. Containerized serving frameworks designed for high-throughput, low-latency inference can benefit a wide range of business applications, while auto-scaling and model versioning capabilities enhance the ease of use of cloud-native ML services.
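
As a concrete example of a continuous integration gate for model deployment, the sketch below trains a candidate model, evaluates it on held-out data, and fails the pipeline if it misses an agreed threshold. It assumes scikit-learn is available; the dataset, model, and 0.80 AUC bar are illustrative assumptions rather than the article's setup.

```python
# A toy CI quality gate: train a candidate model, score it on held-out data,
# and fail the pipeline (non-zero exit) if it misses the agreed service-level
# threshold. Dataset, model choice, and threshold are illustrative only.
import sys

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

MIN_AUC = 0.80

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

print(f"candidate model AUC = {auc:.3f} (threshold {MIN_AUC})")
if auc < MIN_AUC:
    sys.exit("Quality gate failed: blocking deployment")
print("Quality gate passed: promoting model to the delivery stage")
```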