Open Access February 14, 2025

A multi-loci time-series descriptive study on noise levels in a pediatric emergency care department

Abstract
Objective: To investigate the status of the acoustic environment of a typical Chinese pediatric emergency care department over a time series and to identify the relationship between noise levels and factors such as crowd density and movement. Methods: A descriptive study was designed based on a multi-loci time-series method. We measured three loci under three variable settings: the decibel value, the observation volume, and the emergency care volume. Results: The noise levels at all three loci were significantly higher than the internationally recommended levels, with exceedance rates of more than 86.3%. The 24-hour mean curves of the three loci showed similar fluctuation patterns, each with two peaks, at approximately 10:00 and 16:00. Conclusions: The daytime and nighttime noise levels were well fitted by cubic functions with different coefficients. The findings suggest that crowd density and movement may play important roles in fluctuations of the mean noise level and can be optimized to ensure a satisfactory environment in a pediatric emergency care department.
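As a concrete illustration of the Conclusions, the sketch below fits separate cubic functions to daytime and nighttime hourly means with numpy.polyfit. The hourly levels and the 07:00/22:00 day-night split are invented for illustration and are not the study's data or protocol.

```python
import numpy as np

# Hypothetical hourly mean noise levels (dBA) for one locus; real values
# would come from the study's sound-level-meter logs.
hours = np.arange(24)
leq = np.array([52, 50, 49, 48, 48, 50, 55, 60, 65, 68, 70, 67,
                64, 63, 64, 66, 69, 67, 64, 61, 58, 56, 54, 53], dtype=float)

# Assumed day/night split at 07:00 and 22:00; the study's split may differ.
day = (hours >= 7) & (hours < 22)

# Fit separate cubics, mirroring "cubic functions with different coefficients".
day_coeffs = np.polyfit(hours[day], leq[day], deg=3)

# Shift pre-dawn hours by +24 so the nighttime window is contiguous for fitting.
night_hours = np.where(hours[~day] < 7, hours[~day] + 24, hours[~day])
night_coeffs = np.polyfit(night_hours, leq[~day], deg=3)

print("daytime cubic coefficients:", day_coeffs)
print("nighttime cubic coefficients:", night_coeffs)
```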
Article
Open Access November 01, 2023

Individual Wave Component Signal Modeling, Parameters Extraction, and Analysis

Abstract
The accurate estimation of Individual Wave Components (IWCs) is crucial for automated diagnosis of the human digestive system in a clinical setting. However, this process can be challenging due to contamination by other signal sources in the body, such as the lungs and heart, as well as environmental noise. To address this issue, various denoising techniques are commonly employed in bowel sound signal processing. While denoising is important, it increases computational complexity, which is problematic for portable devices. Signal processing algorithms therefore often require a trade-off between fidelity and computational complexity. This study aims to evaluate a previously developed IWC parameter extraction algorithm and to reconstruct the IWC without denoising, using synthetic and clinical data. To that end, a reliable model for creating synthetic data is paramount, since rigorous testing of the algorithm is limited by the quality and quantity of available recorded data. To overcome this challenge, a mathematical model is proposed that generates synthetic bowel sound data for testing new algorithms. The proposed algorithm's robust performance is evaluated using both synthetic and clinically recorded data. We perform time-frequency analysis of original and reconstructed bowel sound signals in various digestive system states and characterize the performance using Monte Carlo simulation when denoising is not applied. Overall, our study presents a promising algorithm for accurate IWC estimation that can be useful for predicting anomalies in the digestive system.
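The paper's synthetic-data model is not reproduced in the abstract. A common parametric stand-in for an IWC is an exponentially damped sinusoid burst, so the sketch below (all parameters invented) superposes a few such bursts, adds white noise, and locates burst onsets from the Hilbert envelope without any denoising stage.

```python
import numpy as np
from scipy.signal import hilbert

fs = 4000                      # Hz; assumed sampling rate for bowel-sound audio
t = np.arange(0, 2.0, 1 / fs)

def iwc(t, t0, f, tau, a):
    """One synthetic Individual Wave Component: a damped sinusoid burst.
    This parametric form is an assumption; the paper's model may differ."""
    u = (t >= t0)
    return a * u * np.exp(-(t - t0) / tau) * np.sin(2 * np.pi * f * (t - t0))

# Superpose a few IWCs and add white noise to mimic an undenoised recording.
clean = (iwc(t, 0.3, 180, 0.02, 1.0) + iwc(t, 0.9, 250, 0.015, 0.7)
         + iwc(t, 1.5, 300, 0.01, 0.5))
noisy = clean + 0.05 * np.random.randn(t.size)

# Crude onset cue without denoising: upward crossings of the analytic-signal
# envelope through a fixed threshold (threshold value assumed).
envelope = np.abs(hilbert(noisy))
onsets = t[np.flatnonzero((envelope[1:] > 0.4) & (envelope[:-1] <= 0.4))]
print("detected burst onsets (s):", onsets)
```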
Article
Open Access March 18, 2023

The Efficiency of the Proposed Smoothing Method over the Classical Cubic Smoothing Spline Regression Model with Autocorrelated Residual

Abstract
Spline smoothing is a technique used to filter out noise in time series observations when fitting nonparametric regression models. Its performance depends on the choice of the smoothing parameter. Most existing smoothing methods applied to time series data tend to overfit in the presence of autocorrelated errors. This study aims to determine the optimum performance value, goodness of fit, and model overfitting properties of the Proposed Smoothing Method (PSM) against the Generalized Maximum Likelihood (GML), Generalized Cross-Validation (GCV), and Unbiased Risk (UBR) smoothing parameter selection methods. A Monte Carlo experiment of 1,000 trials was carried out at three sample sizes (20, 60, and 100) and three levels of autocorrelation (0.2, 0.5, and 0.8). The four smoothing methods' performances were estimated and compared using the Predictive Mean Squared Error (PMSE) criterion. The findings revealed that, for time series observations with autocorrelated errors, the PSM provides the best-fitting smoothing method for the model and does not overfit the data at any of the autocorrelation levels considered; the optimum value of the PSM occurred at a weighted value of 0.04 when there is autocorrelation in the error term; and the PSM performed better than the GCV, GML, and UBR smoothing methods at all time series sizes considered (T = 20, 60, and 100). For the real-life data employed in the study, the PSM proved the most efficient among the GCV, GML, PSM, and UBR smoothing methods compared. The study concluded that the PSM provides the best fit as a smoothing method, works well at all autocorrelation levels (ρ = 0.2, 0.5, and 0.8), and does not overfit time series observations. The study recommended the proposed smoothing method for time series observations with autocorrelation in the error term and for real-life econometric data. It can be applied to nonparametric regression, nonparametric forecasting, and spatial, survival, and econometric observations.
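The PSM itself is the authors' proposal and is not specified in the abstract, so the sketch below reproduces only the experimental scaffolding it describes: AR(1) errors at a chosen ρ, one baseline method (a GCV-tuned smoothing spline via SciPy's make_smoothing_spline), and the PMSE criterion averaged over Monte Carlo trials. The underlying smooth signal is an assumption.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(0)

def ar1(n, rho, sigma=0.3):
    """AR(1) errors e_t = rho * e_{t-1} + w_t, the error structure in the study."""
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(scale=sigma)
    return e

n, rho, trials = 60, 0.5, 200
x = np.linspace(0, 1, n)
truth = np.sin(2 * np.pi * x)   # assumed smooth signal; the paper's is unstated

pmse = 0.0
for _ in range(trials):
    y = truth + ar1(n, rho)
    spline = make_smoothing_spline(x, y)   # lam=None -> GCV-chosen smoothing
    pmse += np.mean((spline(x) - truth) ** 2)

print(f"GCV smoothing spline PMSE at rho={rho}: {pmse / trials:.4f}")
```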
Article
Open Access October 24, 2022

Quantum Properties of Coherently Driven Three-Level Atom Coupled to Vacuum Reservoir

Abstract
A three-level laser with an open cavity coupled to a two-mode vacuum reservoir is explored for its quantum properties. Our investigation begins with the normal ordering of the noise operators associated with the vacuum reservoir. The master equation and the equations of motion for the linear operators are used to determine the evolution equations for the expectation values of the atomic operators. The solutions of these equations of motion are then used to calculate the mean photon number, the photon number variance, and the quadrature variance for single-mode and two-mode cavity light. For γ = 0, the quadrature variance of light mode a is greater than the mean photon number for two-mode cavity light, and the maximum quadrature squeezing for the two-mode cavity light is 43.42 percent.
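The paper's operator definitions are not shown in the abstract; under a common convention (an assumption here), the quoted 43.42 percent can be read as a percentage reduction of a quadrature variance below its vacuum level:

```latex
% Quadrature operators for cavity light (assumed convention):
\[
\hat{c}_{+} = \hat{c}^{\dagger} + \hat{c},
\qquad
\hat{c}_{-} = i\,(\hat{c}^{\dagger} - \hat{c}),
\]
% Percent quadrature squeezing relative to the vacuum-level variance:
\[
S = \frac{\left.(\Delta c_{-})^{2}\right|_{\mathrm{vac}} - (\Delta c_{-})^{2}}
         {\left.(\Delta c_{-})^{2}\right|_{\mathrm{vac}}} \times 100\%,
\]
% so the reported maximum of 43.42\% corresponds to a minus-quadrature
% variance 43.42\% below its vacuum value.
```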
Article
Open Access August 12, 2021

Evaluation and Analysis of Noise and Vibration Exposure Level on Operator of QT40B, QTJ4-40, Lister and LM2-45 Block Moulding Machine

Abstract
High levels of occupational noise and vibration remain a problem in all regions of the world; according to WHO (2001), 12–15% of the workforce in Nigeria is exposed to these hazards. This research set out to achieve the following objectives: to assess the noise emitted during the moulding of various types of blocks, to determine the level of vibration transmitted to block moulding workers during operation, and to determine the effect of noise and vibration on the workers. The following materials and equipment were used: a QT40B manual block moulding machine, an LM2-45 mobile block moulding machine, a Lister-powered block moulding machine, a QTJ4-40 block moulding machine using 9- and 6-inch plates, a vibrometer, and a noise monitor. The workers were exposed to noise levels above 75 dB and vibration levels above 5 m/s², the upper limit values set in Directive 2002/44/EC on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical agents (vibration).
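For context on how such readings are compared with the directive's limit values, the sketch below computes the two standard 8-hour normalizations: the daily vibration exposure A(8) defined in Directive 2002/44/EC and the equivalent noise exposure level L_EX,8h. The shift readings are hypothetical, not the study's measurements.

```python
import math

def a8(a_hv, hours):
    """Daily vibration exposure A(8) = a_hv * sqrt(T / 8 h), per Directive 2002/44/EC."""
    return a_hv * math.sqrt(hours / 8.0)

def lex_8h(laeq, hours):
    """Noise exposure normalized to an 8-hour day: L_EX,8h = L_Aeq + 10 log10(T / 8 h)."""
    return laeq + 10.0 * math.log10(hours / 8.0)

# Hypothetical shift readings for one moulding-machine operator (not the study's data).
print(f"A(8) = {a8(a_hv=6.2, hours=5):.2f} m/s^2 (exposure limit value: 5 m/s^2)")
print(f"L_EX,8h = {lex_8h(laeq=88.0, hours=5):.1f} dB(A)")
```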
Article
Open Access August 09, 2021

COVID-19 and the Environment: Challenges and Opportunities

Abstract
After the outbreak of COVID-19, living organisms and their environment were affected in various ways. The outbreak has posed many opportunities and challenges for the world's environment. This article aims to investigate the effects of the COVID-19 outbreak on the environment, based on a review of published articles and related research. The reviewed sources show that quarantine and the requirement for humans to stay at home to break the COVID-19 transmission chain made animals feel safe enough to move out of their natural territory and into urban and rural areas. Reduced noise pollution, air pollution, and greenhouse gas emissions, owing to reduced vehicle traffic and factory shutdowns, are other positive effects of the outbreak that have helped improve air quality and curb global warming. Alongside these positive effects, reduced conservation activity during the COVID-19 era has increased habitat destruction and poaching in some areas. Increased household and hospital waste production, increased consumption of plastics and disposable materials, and decreased waste recycling are negative effects of the epidemic that put pressure on the environment by depleting resources. Increased consumption of detergents and disinfectants will also have many detrimental environmental effects. In general, the temporary, short-term positive environmental effects of COVID-19 appear small compared with the long-term consequences. Therefore, in overcoming COVID-19, we should focus on rebuilding society and a healthy economy and, by fully understanding the opportunities and threats posed by this virus, consciously cultivate environmental behaviors.
Article
Open Access October 15, 2022

Big Data and AI/ML in Threat Detection: A New Era of Cybersecurity

Abstract
The unrelenting proliferation of data, entwined with the prevalence of mobile devices, has produced unprecedented growth of information obscured by noise. With the Internet of Things and myriad endpoint devices generating vast volumes of sensitive and critical data, organizations are tasked with extracting actionable intelligence from this deluge. Governments and enterprises alike, even under pressure from regulatory boards, have striven to harness the power of data and leverage it to enhance safety and security, maximize performance, and mitigate risks. However, adversaries themselves have capitalized on the unequal battle of big data and artificial intelligence to inflict widespread chaos. The demand for big data analytics and AI/ML for high-fidelity intelligence, surveillance, and reconnaissance is therefore at its highest. Today, in the cybersecurity realm, the detection of adverse incidents poses substantial challenges due to the sheer variety, volume, and velocity of deep packet inspection data. State-of-the-art detection techniques have fallen short of detecting the latest attacks in the wake of big data breach incidents. On the other hand, computational intelligence techniques such as machine learning have reignited the search for solutions to diverse monitoring problems. Recent advancements in AI/ML frameworks have the potential to analyze IoT/edge-generated big data in near real time and to assist risk assessment and mitigation through automated threat detection and modeling. Industry best practices and case studies are examined to showcase how big data coupled with AI/ML unlocks new dimensions and capabilities in vigilance and monitoring, prediction of adverse incidents, intelligent modeling, and uncertainty quantification through data resampling correction. All of these avenues lead to enhanced robustness, security, safety, and performance of industrial processes, computing, and infrastructures. A view of the future is also presented, discussing the potential threats posed by the misuse of new technologies across bandwidth, IoT/edge, blockchain, AI, quantum, and autonomous fields. Cybersecurity is again playing out at a pace set by adversaries with low entry barriers and debilitating tools, and the need for innovative defenses against the emerging threat landscape, harnessing the power of new technologies and collaboration, is emphasized.
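As one simplified instance of ML-based threat detection on high-volume flow data, the sketch below trains an unsupervised isolation forest (scikit-learn) on synthetic flow features and flags outliers. The feature set and traffic statistics are invented for illustration and do not come from the article.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical flow features: [bytes, packets, duration_s]; a real pipeline
# would derive these from deep packet inspection or NetFlow records.
normal = rng.normal(loc=[5e4, 40, 1.0], scale=[1e4, 10, 0.3], size=(1000, 3))
exfil = rng.normal(loc=[5e6, 900, 30.0], scale=[1e5, 50, 5.0], size=(5, 3))
flows = np.vstack([normal, exfil])

# Unsupervised detector: flags flows that are "easy to isolate" as anomalous (-1).
model = IsolationForest(contamination=0.01, random_state=0).fit(flows)
labels = model.predict(flows)
print("flagged flows:", np.flatnonzero(labels == -1))
```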
Article
Open Access December 27, 2020

Enhancing Pharmaceutical Supply Chain Efficiency with Deep Learning-Driven Insights

Abstract
The growing complexity of the operating environment urges pharmaceutical innovation. This essay addresses the need to integrate advanced technologies into the pharmaceutical supply chain. It justifies the value proposition and presents a concrete use case for integrating deep learning insights to make data-driven decisions. The supply chain has always been a priority for the pharmaceutical industry, and companies are investing increasingly in big data strategies, with adoption of big data tools expected to keep growing. The work presented here has a preliminary, explorative character, recovering and integrating evidence from partly overlooked practical experience and know-how. Its practical relevance is directed toward practitioners in pharmaceutical production, supply chain management, logistics, and regulatory agencies. The literature shows a long-standing concern for enhanced performance in the pharmaceutical supply chain network. This essay demonstrates the application of deep learning-driven insights to reveal non-evident flow dependencies, with the main aim of presenting a comprehensive view of deep learning-driven decision support. The supply chain is portrayed holistically, seeking end-to-end visibility. Implications for public policy, such as data equity, are discussed: many countries are protecting their populations and economic growth by building resilience and efficiency to ensure the capacity to move goods across supply chains. The implementation strategy is also covered. The combined reduction of variability and system noise (dampened through the inclusiveness of all stakeholders), together with improved efficiency and reliability (of stochastic flows and their understanding through deep learning and data), results in increased responsiveness of pharmaceutical supply chains. Future work involves the integration of external data, closing the loop between planning and its application in reality.
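The essay's use case is not detailed in the abstract. As one plausible shape of deep learning-driven decision support, the sketch below frames demand forecasting for a single SKU as supervised learning on lagged windows with a small scikit-learn neural network; the demand series, lag length, and architecture are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical weekly demand for one pharmaceutical SKU: trend + seasonality + noise.
t = np.arange(200)
demand = 100 + 0.2 * t + 15 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 5, t.size)

# Frame one-step-ahead forecasting as supervised learning on lagged windows.
lags = 8
X = np.lib.stride_tricks.sliding_window_view(demand[:-1], lags)
y = demand[lags:]

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:-20], y[:-20])                      # hold out the last 20 weeks
print("test MAE:", np.mean(np.abs(model.predict(X[-20:]) - y[-20:])))
```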
Review Article
Open Access December 26, 2021

Deep Learning Applications for Computer Vision-Based Defect Detection in Car Body Paint Shops

Abstract
Major automated plants have produced large volumes of high-quality products at low cost by introducing various technologies, including robotics and artificial intelligence. Surface defects on products entail economic loss, and sometimes loss of functionality, yet defective products are rarely found, so most production is planned on the basis of prediction and carries an invisible fluctuation in quality. Detecting hidden defect images is costly and needs support for better progress and quality enhancement. Paint shop defects should be analyzed via color changes so that defects can be detected effectively regardless of the variability of product demand over time. It is not easy to capture visible-light images without noise because paint surfaces are glossy; traces of illumination and shadow remain in images even at larger sizes and higher resolutions. Computer vision models also need to reflect both color and texture information across the various painted surfaces to classify defects precisely. Several automated detection systems have been applied to paint shop inspection using lasers, infrared, X-ray, electrical, magnetic, and acoustic sensors. The chance of a paint shop defect can be low, but such occurrences still impair functionality, which is why they are treated as "lessons learned." Lately, artificial intelligence has been introduced to factory automation, and many defect detection efforts have found footing in machine learning and deep learning. Recent attempts at deep learning-based defect detection propose simple techniques using specific neural network architectures with big data. However, big data in this domain is still in its early stages, and significant challenges remain in normalizing and annotating such data. For cost-efficient and timely solutions tailored to automotive paint shops, it may be better to combine deep learning solutions with traditional computer vision and more elaborate machine learning methods.
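In the spirit of the closing recommendation, pairing traditional computer vision with learned classifiers, the sketch below uses OpenCV to flatten glossy illumination by background subtraction and to extract small high-contrast blobs as defect candidates for a downstream classifier. The file name, thresholds, and area limits are illustrative assumptions.

```python
import cv2

# Hypothetical inspection frame; a real system would grab frames from line cameras.
img = cv2.imread("panel.png", cv2.IMREAD_GRAYSCALE)

# Classical pre-stage: subtract a heavily blurred copy to flatten glossy
# illumination gradients before any learned classifier sees the image.
background = cv2.GaussianBlur(img, (51, 51), 0)
flat = cv2.absdiff(img, background)

# Candidate defects = small high-contrast blobs surviving the flattening.
_, mask = cv2.threshold(flat, 20, 255, cv2.THRESH_BINARY)
n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
candidates = [s for s in stats[1:] if 5 < s[cv2.CC_STAT_AREA] < 500]
print(f"{len(candidates)} defect candidates for downstream classification")
```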

Query parameters

Keyword:  Noise
