Open Access February 06, 2026

Predictive Modeling of Public Sentiment Using Social Media Data and Natural Language Processing Techniques

Abstract
Social media platforms like X (formerly Twitter) generate vast volumes of user-generated content that provide real-time insights into public sentiment. Despite the widespread use of traditional machine learning methods, their limitations in capturing contextual nuances in noisy social media text remain a challenge. This study leverages the Sentiment140 dataset, comprising 1.6 million labeled tweets, and develops predictive models for binary sentiment classification using Naive Bayes, Logistic Regression, and the transformer-based BERT model. Experiments were conducted on a balanced subset of 12,000 tweets after comprehensive NLP preprocessing. Evaluation using accuracy, F1-score, and confusion matrices revealed that BERT significantly outperforms traditional models, achieving an accuracy of 89.5% and an F1-score of 0.89 by effectively modeling contextual and semantic nuances. In contrast, Naive Bayes and Logistic Regression demonstrated reasonable but consistently lower performance. To support practical deployment, we introduce SentiFeel, an interactive tool enabling real-time sentiment analysis. While resource constraints limited the dataset size and training epochs, future work will explore full corpus utilization and the inclusion of neutral sentiment classes. These findings underscore the potential of transformer models for enhanced public opinion monitoring, marketing analytics, and policy forecasting.
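One of the traditional baselines the study evaluates is Naive Bayes. A minimal multinomial Naive Bayes sentiment classifier might be sketched as follows; the toy corpus and whitespace tokenizer are illustrative assumptions, not the paper's actual Sentiment140 preprocessing.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Placeholder for the paper's NLP preprocessing (illustrative only)
    return text.lower().split()

def train_nb(docs):
    """docs: list of (text, label). Returns per-label word counts, label counts, vocab."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(model, text):
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / total_docs)  # log prior
        total_words = sum(word_counts[label].values())
        for w in tokenize(text):
            # Laplace (add-one) smoothing over the shared vocabulary
            lp += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("i love this", "positive"), ("great happy day", "positive"),
        ("i hate this", "negative"), ("awful sad day", "negative")]
model = train_nb(docs)
print(classify(model, "love this day"))  # -> positive
```

The add-one smoothing keeps unseen words from driving a class probability to zero, which matters for noisy tweet vocabulary.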
Article
Open Access June 28, 2025

Development of a Hemodialysis Data Collection and Clinical Information System and Establishment of an Intradialytic Blood Pressure/Pulse Rate Predictive Model

Abstract
This research is a cross-disciplinary initiative in the field of Artificial Intelligence of Things (AIoT) within the medical informatics domain, conducted as a collaboration among a university team, a partnering corporation, and a hemodialysis clinic. The research has two objectives: (1) the development of an Internet of Things (IoT)-based information system customized for the hemodialysis machines at the clinic, including transmission bridges, a web/app dedicated to clinical personnel, and a backend server; the system has been deployed at the clinic and is now officially operational; (2) the use of de-identified, anonymous data (collected by the operational system) to train, evaluate, and compare Deep Learning-based intradialytic Blood Pressure (BP)/Pulse Rate (PR) predictive models, with subsequent suggestions provided. Both objectives were executed under the supervision of the Institutional Review Board (IRB) at Mackay Memorial Hospital in Taiwan. The system completed for objective one has introduced three significant services to the clinic: automated hemodialysis data collection, digitized data storage, and an information-rich human-machine interface with graphical data displays. These services replace traditional paper-based clinical administrative operations, thereby enhancing healthcare efficiency. The graphical data presented through web and app interfaces aids real-time, intuitive comprehension of patients' conditions during hemodialysis. Moreover, the data stored in the backend database are available for physicians to conduct relevant analyses, unearth insights into medical practices, and provide precise medical care for individual patients.
The training and evaluation of the predictive models for objective two, along with related comparisons, analyses, and recommendations, suggest that in situations with limited computational resources and data, an Artificial Neural Network (ANN) model with six hidden layers, the SELU activation function, and a focus on artery-related features can be employed for hourly intradialytic BP/PR prediction tasks. It is believed that this contributes to the collaborating clinic and the relevant research communities.
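The recommended architecture relies on the SELU activation. As a sketch, the activation function and a toy fixed-weight forward pass might look like this; the layer sizes and weights below are illustrative assumptions, not the trained clinical model.

```python
import math

# SELU constants (self-normalizing neural networks, Klambauer et al., 2017)
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit."""
    return SCALE * x if x > 0 else SCALE * ALPHA * (math.exp(x) - 1.0)

def forward(x, layers):
    """x: list of inputs; layers: list of weight matrices (one row per neuron)."""
    for W in layers:
        x = [selu(sum(w * xi for w, xi in zip(row, x))) for row in W]
    return x

# Toy two-layer pass with invented weights (illustrative only)
print(round(selu(1.0), 4))  # -> 1.0507
out = forward([0.5, -0.5], [[[1.0, 1.0], [1.0, -1.0]], [[0.5, 0.5]]])
```

In the paper's setting, six such hidden layers would be stacked and the weights learned from the clinic's de-identified hemodialysis records.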
Article
Open Access October 19, 2024

Quantitative Intersectionality Scoring System (QISS): Opportunities for Enhancing Predictive Modeling, Comparative Analysis, Health Needs Assessment, and Policy Evaluation

Abstract
Intersectionality has significantly enhanced our understanding of how overlapping social identities—such as race, ethnicity, gender, sex, class, and sexual orientation—interact to shape individual experiences. However, despite its theoretical importance, much of the existing literature has relied on qualitative approaches to define and study intersectionality, limiting its application in predictive modeling, comparative analysis, and policy development. This paper introduces the concept of a Quantitative Intersectionality Scoring System (QISS), a novel approach that assigns numerical scores to intersecting identities, thereby enabling a more systematic and data-driven analysis of intersectional effects. We argue that QISS can substantially enhance the utility and predictive validity of quantitative models by capturing the complexities of multiple, overlapping social determinants. By presenting concrete examples, such as the varying impacts of socioeconomic mobility on life expectancy among different intersectional groups, we demonstrate how QISS can yield more precise and reliable forecasts. Such a shift would allow policymakers and service providers to dynamically assess economic and health needs, as well as the uncertainties around them, as individuals move through different social and economic contexts. QISS-based models could be more responsive to the complexities of intersecting identities, allowing for a more quantified and nuanced evaluation of policy interventions. We conclude by discussing the challenges of implementing QISS and emphasizing the need for further research to validate these quantifications using robust quantitative methods. Ultimately, adopting QISS has the potential to improve the accuracy of predictive models and the effectiveness of policies aimed at promoting social justice and health equity.
Perspective
Open Access November 04, 2022

An Artificial Intelligence Approach to Manage Crop Water Requirements in South Africa

Abstract
Estimation of crop water requirements is of paramount importance for the management of agricultural water resources, a major mitigation strategy against the effects of climate change on food security. South Africa's water shortage poses a threat to agricultural efficiency. Since irrigation uses about 60% of the available fresh water, it is important to optimise the use of irrigation water so as to maximise crop yield at the farm level and avoid wastage. In this study, the combined application of an artificial neural network (ANN) and a crop-growth simulation model was investigated for estimating crop irrigation water requirements and scheduling the irrigation of potatoes at the Winterton irrigation scheme, South Africa. The crop-water demand from planting to harvest date, when to irrigate, the optimum stage in the drying cycle at which to apply water, and the amount of irrigation water to apply each time were estimated. Five feed-forward back-propagation artificial neural network predictive models were developed with varied numbers of neurons and hidden layers, and evaluated. The optimal ANN model, which has 5 inputs, 5 neurons, 1 hidden layer, and 1 output, was used to predict monthly reference evapotranspiration (ETo) in the Winterton area. It produced a root-mean-square error (RMSE) of 0.67, a Pearson correlation coefficient (r) of 0.97, and a coefficient of determination (R2) of 0.94. Validation of the model between the measured and predicted ETo shows an r value of 0.9048. The predicted ETo was one of the input variables to a crop-growth simulation model, CROPWAT. The results indicated that the total crop water requirement was 1259.2 mm/decade and the net irrigation water requirement was 1276.9 mm/decade, spread over a 5-day irrigation interval during the entire 140-day cropping season for potatoes.
The combination of artificial neural networks and crop-growth simulation models has proved to be a robust technique for estimating crop irrigation water requirements when daily meteorological datasets are limited or unavailable.
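The evaluation metrics reported above (RMSE, Pearson's r, and R2) can be computed directly from measured and predicted ETo series; the functions below are a generic sketch, and any data fed to them here is illustrative, not the study's.

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - my) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

Note that r measures linear association only, while RMSE and R2 also penalize systematic bias, which is why all three are reported together.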
Article
Open Access March 20, 2022

Botanical education for vocational training students and primary and secondary teachers

Abstract
This work privileges practical over theoretical learning. Several places of natural interest (Natural Parks) have been studied to provide quality pedagogical training, through which Vocational Training students and Primary and Secondary Education teachers acquire competences in the management of natural spaces of conservation interest, as well as in flora, plant communities, habitats, and landscape interpretation. The learning is eminently practical, which prepares trained personnel to enter the labor market. The natural spaces were studied using direct observation techniques, with the participation of specialist teachers in various fields, because the interpretation of vegetation, habitats, and landscape requires multidisciplinary techniques. Teaching methodologies from Botany, such as phytosociological sampling techniques, were used alongside those of Geology, Edaphology, and Climatology; in the latter case, predictive models were created that allow students to make decisions about the management of a territory. This study has made possible a comprehensive interpretation of the natural environment, with a notable pedagogical improvement in learning.
Article
Open Access December 27, 2023

Leveraging Artificial Intelligence to Enhance Supply Chain Resilience: A Study of Predictive Analytics and Risk Mitigation Strategies

Abstract
The management of supply chains is increasingly complex. This study provides a comparative cost-benefit analysis of managing various risks. It identifies the financial implications of leveraging artificial intelligence in supply chains to better address risk. Empirical results show a business case for managing some sources of risk more proactively, facilitated through the predictive modeling techniques offered by AI. Across investigation streams, the use of AI results in an average total cost saving ranging from 41,254 to 4,099,617. Findings from our research can inform managers and theorists about the implications of integrating AI technologies to manage risks in the supply chain. Our work also highlights areas for future research. Given the growing interest in studying sub-second forecasting, our research could be a point of departure for future investigations considering the impact of forecasting horizons such as an intra-day basis. We formulate a conceptual framework that considers how and to what extent performance evaluation metrics vary according to differences in the fidelity of predictive models and the importance of the factors used to identify risks. We also use a mixed-method approach to demonstrate the applicability of our ideas in practice. Our results illustrate the financial implications of integrating AI predictive tools with business processes, and suggest that real-world companies can circumvent inefficiencies associated with trying to manage many classes of risk via AI-enhanced predictive analytics. As managers need to justify investment to top management, our work supports decision-making by providing a means of conducting a trade-off analysis at the tactical level.
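The cost-benefit trade-off described here can be illustrated with a toy expected-loss comparison: adopt the AI tooling only when the reduction in expected risk cost exceeds its price. All probabilities, impact costs, and the AI cost below are invented for illustration and are not the study's figures.

```python
def expected_loss(risks):
    """risks: list of (probability, impact_cost) pairs; returns expected cost."""
    return sum(p * c for p, c in risks)

# Invented disruption scenarios: (annual probability, cost if it occurs)
baseline = expected_loss([(0.10, 500_000), (0.02, 2_000_000)])   # without AI
with_ai  = expected_loss([(0.04, 500_000), (0.01, 2_000_000)])   # risks mitigated
ai_cost = 30_000                                                 # tooling cost

net_benefit = baseline - with_ai - ai_cost  # positive -> investment pays off
```

A manager would compare `net_benefit` across risk classes to decide which ones justify proactive, AI-assisted management.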
Review Article
Open Access December 27, 2023

Leveraging Machine Learning Techniques for Predictive Analysis in Merger and Acquisition (M&A)

Abstract
M&A is a strategic approach to business growth through consolidation, gaining market access, strengthening strategic positions, and increasing operational efficiency. To understand the dynamics of M&A, this paper looks at aspects such as target-firm identification, evaluation, bidding for the target firm, and post-acquisition integration. All forms of M&A, including horizontal, vertical, and conglomerate mergers and acquisitions, are discussed in terms of goals and values, including synergy, cost reduction, competitive advantages, and access to better technology. Challenges such as cultural assimilation, regulatory compliance, and inaccurate valuation are also addressed. The paper then goes deeper to provide insight into how predictive analytics applies to M&A, using ML to improve decision-making through forecasting. In industries including healthcare, education, and construction, the presented predictive models, built with regression analysis, neural networks, and ensemble techniques, help to support decisions. Through time series and real-time data, PDA enables sound M&A strategies, effective risk management, and smooth integration.
Review Article
Open Access November 19, 2022

Analyzing Behavioral Trends in Credit Card Fraud Patterns: Leveraging Federated Learning and Privacy-Preserving Artificial Intelligence Frameworks

Abstract
We investigate and analyze the trends and behaviors in credit card fraud attacks and transactions. First, we perform logical analysis to find hidden patterns and trends, then we leverage game-theoretical models to illustrate the potential strategies of both the attackers and defenders. Next, we demonstrate the strength of industry-scale, privacy-preserving artificial intelligence solutions by presenting the results from our recent exploratory study in this respect. Furthermore, we describe the intrinsic challenges in the context of developing reliable predictive models using more stringent protocols, and hence the need for sector-specific benchmark datasets, and provide potential solutions based on state-of-the-art privacy models. Finally, we conclude the paper by discussing future research lines on the topic, and also the possible real-life implications. The paper underscores the challenges in creating robust AI models for the banking sector. The results also showcase that privacy-preserving AI models can potentially augment sharing capabilities while mitigating liability issues of public-private sector partnerships [1].
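Federated learning lets institutions train a shared fraud model without pooling raw transaction data: each client trains locally and only model parameters are aggregated. A minimal sketch of the FedAvg aggregation step is shown below; the two-bank setup, weight vectors, and sample counts are illustrative assumptions, not any framework's actual API.

```python
def fed_avg(client_weights, client_sizes):
    """Average parameter vectors, weighted by each client's local sample count."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Invented local model parameters after one training round
bank_a = [0.2, 0.8]
bank_b = [0.6, 0.4]

# Bank B holds 3x the transactions, so its update carries more weight
global_model = fed_avg([bank_a, bank_b], client_sizes=[100, 300])
```

Only `bank_a` and `bank_b` (parameters, not transactions) cross institutional boundaries, which is what mitigates the liability issues the abstract mentions; production systems typically add secure aggregation or differential privacy on top.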
Review Article
Open Access December 27, 2019

Predictive Analytics in Biologics: Improving Production Outcomes Using Big Data

Abstract
Biopharmaceuticals, or biologics, are a burgeoning sector in the pharmaceutical industry, predicted to reach $239.4 billion by 2025. This unparalleled growth is often attributed to the enhanced specificity offered by large molecules over small molecules. The large size of the constituent proteins necessitates the continuous implementation of big data predictive analytics to elucidate the most effective candidates in the lead optimization process. These same methodologies can be applied to the augmentation and optimization of the downstream production processes that comprise the majority of the development cost of any biologic, and with the advent of machine learning and automated predictive analytics this is becoming an increasingly facile task. In this work, big data from cell line generation, product and process design, and large-scale lead validation studies have been used to compare the applicability of simple statistical models against these black-box approaches for the rapid acceleration of enzymes to the pilot plant stage. This research can be expanded to exploit the big datasets generated as biologics progress through the development pipeline, in order to further optimize production outcomes. Over the coming months, data from the project will be used to probe which approaches are amenable to which processes and, as a result, to various economic simulations. The computed optimization objective for the HIT must include the cost of acquiring, storing, and analyzing the data needed to construct these predictive models, alongside the expected commercial reward of choosing an optimally ranked candidate. In this vein, perspective must be taken on the probable future price, capability outputs, and ownership issues of increasingly sophisticated data analysis software as such superstructures become more frequent.
It is frequently stated that decisions made to reduce production costs are data-driven, but this rarely reflects the economic or energetic cost of the experiments and production methods employed; to truly evaluate production steps, dynamic energy and economic models need to become more commonplace. Converting process-quality approaches from large questionnaires, risk analysis, and expert-opinion-driven methods to statistical, and thus more reliable, approaches is an area of future research for the analytics used herein.
Review Article
Open Access December 27, 2022

Advance of AI-Based Predictive Models for Diagnosis of Alzheimer's Disease (AD) in Healthcare

Abstract
Alzheimer's disease (AD), which disproportionately affects the elderly, is one of the most prevalent chronic forms of dementia. A fatal illness that can harm brain structures and cells long before symptoms appear, AD is currently incurable. Using brain MRI images from a publicly accessible Kaggle dataset, this study proposes a prediction model based on Convolutional Neural Networks (CNNs) to aid the early detection of Alzheimer's disease. The 6,400 images in the collection are labeled with four levels of dementia: non-demented, very mildly demented, mildly demented, and moderately demented. A thorough data preparation workflow included pixel normalization, class balancing using data augmentation techniques, and image scaling to 128×128 pixels. To better capture spatial dependence in volumetric MRI data, a 3D convolutional neural network (CNN) architecture was used. We used key performance measures, including F1-score, recall, accuracy, precision, and log loss, to gauge the model's effectiveness. The overall F1-score, accuracy, and recall were 99.0%, 99.0%, and 99.38%, respectively. The findings demonstrate the model's potential for practical use in early AD diagnosis and establish its robustness with the help of confusion matrix analysis and performance curves.
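The reported scores derive from the confusion matrix. For a binary demented/not-demented split, they can be computed as follows; the counts below are illustrative, not the paper's results, and the paper's four-class evaluation would average these per class.

```python
def prf(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, how many correct
    recall = tp / (tp + fn)             # of actual positives, how many found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Invented counts: 90 true positives, 10 false positives, 10 false negatives, 90 true negatives
acc, prec, rec, f1 = prf(90, 10, 10, 90)
```

For class-imbalanced medical datasets, recall (sensitivity) is usually the metric to protect, since a false negative means a missed diagnosis.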
Article
Open Access December 27, 2020

Building Foundational Data Products for Financial Services: An MDM-Based Approach to Customer and Product Data Integration

Abstract
Imagine a consumer financial services company with 20 million customers. Its sales and marketing organizations collaborate across product lines, deploying hundreds of marketing campaigns each quarter that aim to increase customer product usage and/or cross-buying of products. Each campaign is based on forecasts of customer responses derived from predictive models updated every quarter. The goals of these models are to achieve large return on investment ratios and to maximize contribution to local profit centers. What’s important is that their modeling is based only on data created, curated and maintained by these marketing organizations. The difference today is that the modeling is no longer based solely on a small number of response-determined variables that are constantly assessed in terms of importance. A quarterly campaign update generates hundreds of statistical models — involving campaign responses, purchase-lag time, the relative magnitude of the direct effect, and the cross-buying effects — using thousands of variables, including customer demographics, life stage, product transactions, household composition, and customer service history. It’s a network of models, not just a table of variable-by-residual importance values. But that’s only part of the story of data products. The predictive modeling utilized by these campaign plans is based on analytics and data preparation, which are data products in their most diminutive form. These products would be even more elementary were they not crafted quarterly by highly skilled, experienced modelers using advanced software and processes. Most companies have enough data to create models that contain not simply hundreds of variables, but thousands, so that the focus can return to information instead of data reduction. These models largely replace the internal econometric models previously used to produce advanced forecasts in the absence of campaign modeling. 
People used these forecasts to simulate ROI and contribution forecasts for the planned campaigns. Previously, reliance on econometrically forecast ROI-guideline contribution values reduced reliance on the marketing-campaign modelers, owing to a lack of trust in their predictive ability.
Review Article
Open Access December 02, 2020

Predictive Modeling and Machine Learning Frameworks for Early Disease Detection in Healthcare Data Systems

Abstract
Predictive modeling, supported by machine learning technology, aims to analyze data in order to guide decision-making towards actions that generate desired value in the future. It encompasses the set of techniques used to build models that estimate the value of a variable predicting a forthcoming event from the past or current values of relevant attributes. In predictive healthcare modeling, the built models represent the relationships among data concerning customer, provider, production, and other aspects of the healthcare environment, in order to assist decision processes in the prevention of diseases and in the planning of preventive actions through the detection of high-risk patients. In contrast to trend analysis, whose goal is to describe past events, predictive models aim to provide useful indications of future events and changes. Predictive healthcare modeling supports actions that try to prevent the manifestation of diseases in healthy individuals or to diagnose as early as possible the incidence of a disease in patients at risk. A sound predictive analysis encompasses not only the model-training task, but also the aspects of data quality, preprocessing, and fusion during its entire implementation lifecycle to ensure appropriate input data preparation. The robustness of the predictive model and its results depends highly on data quality. Due to the variety of data sources in healthcare environments, preprocessing is essential to remove noise and inconsistencies. The increasing number of endorsed data-exchange standards makes data exchange achievable, but demands the implementation of a data-governance program. In addition, the influence of the hospital-database architect on the architecture of an early-diagnosis model is important to guarantee appropriate input-formatting modularity.
Review Article
Open Access December 26, 2021

Architectural Frameworks for Large-Scale Electronic Health Record Data Platforms

Abstract
Architectural frameworks for large-scale Electronic Health Record (EHR) data platforms are described. Existing EHR data platform architectures often leverage multiple cloud-based solutions blended with institutional infrastructures to manage and analyze clinical data at scale. Key design principles governing the scale of existing EHR data architecture include model design, governance structure, data access management, data security/policy/protection, data-information-language-based standardization, and analytics tool alignment, among others. The rapidly evolving technology landscape and the unprecedented volume of incident and retrospective clinical data being collected and generated within healthcare organizations have led to the emergent need for a dedicated architectural framework to support large-scale computing in the health informatics domain. The application areas of large-scale computing in health informatics include real-time predictive analytics, risk stratification, patient cohort analytics, development of predictive models for specific institutions or population groups, and many more. The use of EHR data for a multitude of decision-making processes in both clinical and non-clinical settings has prompted the establishment of policies prescribing the conditions of access and use of EHR data for non-employed individuals in the organization. Consequently, the demand for accessing, using, and managing EHR data at scale has impacted the over.
Review Article
Open Access December 26, 2021

Scalable Data Warehouse Architecture for Population Health Management and Predictive Analytics

Abstract
Scalable architecture principles for data warehousing are introduced to support population health management and predictive analytics. These principles are validated through the design of an accompanying data pipeline that allows the integration of non-traditional data sources, the use of real-time data for descriptive analytics dashboards, and support for the generation of supervised machine learning models. Several analytical capabilities have been implemented to exemplify the practical application of the principles, including predictive models for risk stratification in health care. Cost-effectiveness and performance considerations ensure the practical relevance of the architectural principles and the associated data pipeline. In recent years, the availability of low-cost data storage services and the increasing popularity of streaming technologies have opened new possibilities for storing and processing streaming data on a near-real-time basis. These technologies can help developing countries tackle many relevant issues such as urban planning, environmental management, and migration policies. A multi-tier approach combining cloud-based storage with data warehousing and data mining technologies can offer an interesting architecture for exploiting Big Data related to populations.
Query parameters

Keyword:  Predictive Models
