https://journal.iberamia.org/index.php/intartif/issue/feed Inteligencia Artificial 2025-12-08T20:55:51+01:00 Editor editor@iberamia.org Open Journal Systems <p style="text-align: justify;"><strong><em><a style="color: #003366; text-decoration: underline;" href="http://journal.iberamia.org/" target="_blank" rel="noopener">Inteligencia Artificial</a></em></strong> is an international open-access journal promoted by the Iberoamerican Society of Artificial Intelligence (<a href="http://www.iberamia.org">IBERAMIA</a>). Since 1997, the journal has published high-quality original papers reporting theoretical or applied advances in all areas of Artificial Intelligence. There are no fees for subscription, publication, or editing. Articles may be written in English, Spanish, or Portuguese and <a href="https://journal.iberamia.org/index.php/intartif/about/submissions">are subjected</a> to a double-blind peer-review process. The journal is abstracted and indexed in several <a href="http://journal.iberamia.org/index.php/intartif/metrics">databases</a>.</p> https://journal.iberamia.org/index.php/intartif/article/view/2504 Analyzing Municipal Patterns of Suicide and Depression in Mexico: A Multilayer Network Approach 2025-09-08T17:04:02+02:00 Jorge Manuel Pool Cen jpool@centrogeo.edu.mx Hugo Carlos Martínez hcarlos@centrogeo.edu.mx Gandhi Hernández Chan ghernandez@centrogeo.edu.mx Martha Cordero Oropeza martha.cordero.o@gmail.com Alfredo Montero Arciniega aamontarc@gmail.com Pedro Mendoza Pablo alberto.mendoza.pablo@gmail.com <p>This study employs a multilayer network approach to analyze the spatial and temporal patterns of suicide and depression across Mexican municipalities from 2015 to 2020. Using a panel dataset of mental health cases, substance use, and healthcare infrastructure, we constructed a multilayer graph based on cosine similarity. The Infomap clustering algorithm was then applied to identify communities of municipalities with similar mental health profiles. Our results reveal five distinct clusters with significant variations in the levels and temporal dynamics of the analyzed indicators. Notably, two clusters consistently exhibited higher rates of substance use and adverse mental health outcomes.
These findings demonstrate the efficacy of network-based methods for identifying at-risk municipal groupings, thereby informing targeted public health interventions.</p> 2025-12-08T00:00:00+01:00 Copyright (c) 2025 Iberamia & The Authors https://journal.iberamia.org/index.php/intartif/article/view/2507 From the algorithm to the clinical interpretation of childbirth anxiety: analysis and explainability of obstetric predictive models based on psychological indicators 2025-09-08T18:14:46+02:00 Juan A. Recio-Garcia jareciog@fdi.ucm.es Ana Martin-Casado ana.martincasado@unir.net <p>Anxiety during pregnancy constitutes a relevant factor that can significantly influence labor development. This study presents a novel approach based on explainable artificial intelligence to predict both the type and duration of labor using psychological indicators of anxiety prior to delivery. Employing data from 235 full-term pregnant women from two Spanish hospitals, we developed a multilayer perceptron model to classify eutocic and dystocic deliveries, correctly identifying 88% of dystocic deliveries. Additionally, we implemented a regression model that predicts labor time with a mean error of 2 hours, correctly predicting 86% of cases with an error margin of less than 3 hours. The application of explainability techniques to the developed models makes it possible to understand the specific influence of each anxiety factor on labor development.
These results demonstrate the potential of AI models to improve obstetric care and optimize healthcare resource allocation.</p> 2025-12-08T00:00:00+01:00 Copyright (c) 2025 Iberamia & The Authors https://journal.iberamia.org/index.php/intartif/article/view/2508 Multimodal Emotion Recognition for Empathic Virtual Agents in Mental Health Interventions 2025-09-08T19:30:42+02:00 Marcelo Alejandro Huerta-Espinoza marceloahe@gmail.com Ansel Yoan Rodriguez Gonzalez ansel@cicese.edu.mx Juan Crisoforo Martinez Miranda jmiranda@cicese.edu.mx <p>Depression and anxiety disorders affect millions of individuals globally and are commonly addressed through psychological interventions. A growing technological approach to support such treatments involves the use of embodied conversational agents that employ motivational interviewing, a method that promotes behavioral change through empathic engagement. Despite its critical role in therapeutic efficacy, empathy remains a significant challenge for virtual agents to emulate. Emotion Recognition (ER) technologies offer a potential solution by enabling agents to perceive and respond appropriately to users' emotional states. Given the inherently multimodal nature of human emotion, unimodal ER approaches often fall short in accurately interpreting affective cues. In this work, we propose a multimodal emotion recognition model that integrates verbal and non-verbal signals (text and video) using a Cross-Modal Attention fusion strategy. Trained and evaluated on the IEMOCAP dataset, our approach leverages Ekman's taxonomy of basic emotions and demonstrates superior performance over unimodal baselines across key metrics such as accuracy and F1-score. By prioritizing text as the main modality and dynamically incorporating complementary visual cues, the model proves effective in complex emotion classification tasks. 
The proposed model is designed for integration into an existing conversational agent aimed at supporting individuals experiencing emotional and psychological distress. Future work will involve embedding the model in this agent platform to assess its real-world impact on engagement, user experience, and perceived empathy.</p> 2025-12-08T00:00:00+01:00 Copyright (c) 2025 Iberamia & The Authors https://journal.iberamia.org/index.php/intartif/article/view/2079 Integrated Feature Fusion in Multiclass Maize Leaf Disease Recognition 2025-11-10T11:55:36+01:00 Prabhnoor Bachhal prabhnoor.bachhal@chitkara.edu.in Vinay Kukreja vinay.kukreja@chitkara.edu.in Sachin Ahuja ed.engineering@cumail.in Vatsala Anand vatsala.anand@chitkara.edu.in <p>Plant diseases are a leading cause of plant mortality and crop loss, especially in trees. Early detection, however, can help manage and treat the problem efficiently: to increase yield, crop and plant lesions should be detected and contained as soon as feasible. Because it relies solely on visual observation, manual inspection of plant leaf diseases is time-consuming and expensive. The authors present computer vision methods for identifying and categorizing plant leaf diseases, including pre-processing original images to highlight infected areas, feature extraction from raw or segmented images, feature fusion, feature selection, and classification. Feature fusion combines the target's numerical data features, which go beyond the image itself, with the extracted image features to enrich the target's feature representation. Two principal issues recur in the literature: low-contrast infected regions, and the extraction of redundant or irrelevant information, which degrades classification accuracy, lengthens computation time, and harms the performance of the targeted models. This study proposes a framework for classifying plant leaf diseases based on optimal feature selection and a deep learning fusion model. In the suggested approach, contrast is first enhanced using a pre-processing model, and the issue of an unbalanced dataset is then resolved via data augmentation. The proposed Deep Fusion Learning Model (DFLM) achieves an accuracy of 98.8%, outperforming the other models compared.</p> 2025-12-08T00:00:00+01:00 Copyright (c) 2025 Iberamia & The Authors