Inteligencia Artificial
https://journal.iberamia.org/index.php/intartif
<p><strong><em><a href="http://journal.iberamia.org/">Inteligencia Artificial</a></em></strong> is an international open access journal promoted by the Iberoamerican Society of Artificial Intelligence (<a href="http://www.iberamia.org">IBERAMIA</a>). Since 1997, the journal has published high-quality original papers reporting theoretical or applied advances in all areas of Artificial Intelligence. There are no fees for subscription, publication, or editing. Articles can be written in English, Spanish, or Portuguese and <a href="https://journal.iberamia.org/index.php/intartif/about/submissions">are subject</a> to a double-blind peer review process. The journal is abstracted and indexed in several <a href="http://journal.iberamia.org/index.php/intartif/metrics">databases</a>.</p>
en-US
<p>Open Access publishing.<br />Licensed under <a href="http://creativecommons.org/licenses/by-nc/4.0">Creative Commons CC-BY-NC</a>.<br />Inteligencia Artificial (Ed. IBERAMIA)<br />ISSN: 1988-3064 (online).<br />(C) IBERAMIA &amp; The Authors</p>
editor@iberamia.org (Editor)
journal@iberamia.org (Technical Contact; technical issues only)
Mon, 08 Dec 2025 20:55:51 +0100
OJS 3.3.0.4 http://blogs.law.harvard.edu/tech/rss 60

Analyzing Municipal Patterns of Suicide and Depression in Mexico: A Multilayer Network Approach
https://journal.iberamia.org/index.php/intartif/article/view/2504
<p>This study employs a multilayer network approach to analyze the spatial and temporal patterns of suicide and depression across Mexican municipalities from 2015 to 2020. Using a panel dataset of mental health cases, substance use, and healthcare infrastructure, we constructed a multilayer graph based on cosine similarity. The Infomap clustering algorithm was then applied to identify communities of municipalities with similar mental health profiles. Our results reveal five distinct clusters with significant variations in the levels and temporal dynamics of the analyzed indicators. Notably, two clusters consistently exhibited higher rates of substance use and adverse mental health outcomes. These findings demonstrate the efficacy of network-based methods for identifying at-risk municipal groupings, thereby informing targeted public health interventions.</p>
Jorge Manuel Pool Cen, Hugo Carlos Martínez, Gandhi Hernández Chan, Martha Cordero Oropeza, Alfredo Montero Arciniega, Pedro Mendoza Pablo
Copyright (c) 2025 Iberamia & The Authors http://creativecommons.org/licenses/by-nc/4.0
Mon, 08 Dec 2025 00:00:00 +0100
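The entry above (article 2504) describes a concrete pipeline: pairwise cosine similarity between municipal indicator vectors, followed by Infomap community detection. The Python sketch below illustrates that idea under stated assumptions: the data layout (`layers`, a hypothetical dict of per-year indicator tables indexed by municipality), the 0.9 similarity threshold, and the aggregation of yearly layers into a single weighted graph are illustrative choices, not details taken from the paper, which keeps the layers separate in a multilayer graph; the `infomap` PyPI package is assumed for the clustering step.

```python
# Hypothetical sketch (not the authors' code): build a similarity graph between
# municipalities from their indicator vectors and cluster it with Infomap.
from sklearn.metrics.pairwise import cosine_similarity
from infomap import Infomap  # assumes the `infomap` PyPI package

def cluster_municipalities(layers, threshold=0.9):
    # layers: {year: DataFrame indexed by municipality, columns = indicators}
    municipalities = list(next(iter(layers.values())).index)
    im = Infomap("--two-level --silent")
    for year, df in layers.items():
        sim = cosine_similarity(df.loc[municipalities].values)
        for i in range(len(municipalities)):
            for j in range(i + 1, len(municipalities)):
                if sim[i, j] >= threshold:
                    # Edges from all yearly layers are aggregated into one graph here,
                    # a simplification of the paper's multilayer formulation.
                    im.add_link(i, j, float(sim[i, j]))
    im.run()
    # get_modules() maps node id -> community id found by Infomap
    return {municipalities[n]: module for n, module in im.get_modules().items()}
```

In a sketch like this the threshold controls graph sparsity: lower values yield denser graphs and typically fewer, larger Infomap modules.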
From the algorithm to the clinical interpretation of childbirth anxiety: analysis and explainability of obstetric predictive models based on psychological indicators
https://journal.iberamia.org/index.php/intartif/article/view/2507
<p>Anxiety during pregnancy constitutes a relevant factor that can significantly influence labor development. This study presents a novel approach based on explainable artificial intelligence to predict both the type and duration of labor using psychological indicators of anxiety prior to delivery. Employing data from 235 full-term pregnant women from two Spanish hospitals, we developed a multilayer perceptron model to classify eutocic and dystocic deliveries, correctly identifying 88% of dystocic deliveries. Additionally, we implemented a regression model that predicts labor duration with a mean error of 2 hours, correctly predicting 86% of cases with an error margin of less than 3 hours. The application of explainability techniques to the developed models allows the specific influence of each anxiety factor on labor development to be understood. These results demonstrate the potential of AI models to improve obstetric care and optimize healthcare resource allocation.</p>
Juan A. Recio-Garcia, Ana Martin-Casado
Copyright (c) 2025 Iberamia & The Authors http://creativecommons.org/licenses/by-nc/4.0
Mon, 08 Dec 2025 00:00:00 +0100
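For the obstetric entry above (article 2507), here is a minimal scikit-learn sketch of the two modelling tasks it describes: a multilayer perceptron classifying eutocic versus dystocic deliveries, and a regressor predicting labor duration. The hidden-layer sizes, scaling, and train/test split are assumptions for illustration; the paper's exact architecture, feature set, and explainability method (SHAP-style attribution is one common choice for models like these) are not reproduced here.

```python
# Illustrative sketch, not the authors' exact models or preprocessing.
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import recall_score, mean_absolute_error

def fit_obstetric_models(X, y_type, y_hours):
    # X: pre-delivery anxiety indicators; y_type: 1 = dystocic, 0 = eutocic;
    # y_hours: labor duration in hours.
    X_tr, X_te, yt_tr, yt_te, yh_tr, yh_te = train_test_split(
        X, y_type, y_hours, test_size=0.2, stratify=y_type, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
    reg = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
    clf.fit(X_tr, yt_tr)
    reg.fit(X_tr, yh_tr)
    # The paper reports 88% of dystocic deliveries identified and ~2 h mean error.
    print("Dystocic recall:", recall_score(yt_te, clf.predict(X_te)))
    print("MAE (hours):", mean_absolute_error(yh_te, reg.predict(X_te)))
    return clf, reg
```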
Multimodal Emotion Recognition for Empathic Virtual Agents in Mental Health Interventions
https://journal.iberamia.org/index.php/intartif/article/view/2508
<p>Depression and anxiety disorders affect millions of individuals globally and are commonly addressed through psychological interventions. A growing technological approach to support such treatments involves the use of embodied conversational agents that employ motivational interviewing, a method that promotes behavioral change through empathic engagement. Despite its critical role in therapeutic efficacy, empathy remains a significant challenge for virtual agents to emulate. Emotion Recognition (ER) technologies offer a potential solution by enabling agents to perceive and respond appropriately to users' emotional states. Given the inherently multimodal nature of human emotion, unimodal ER approaches often fall short in accurately interpreting affective cues. In this work, we propose a multimodal emotion recognition model that integrates verbal and non-verbal signals (text and video) using a Cross-Modal Attention fusion strategy. Trained and evaluated on the IEMOCAP dataset, our approach leverages Ekman's taxonomy of basic emotions and demonstrates superior performance over unimodal baselines across key metrics such as accuracy and F1-score. By prioritizing text as the main modality and dynamically incorporating complementary visual cues, the model proves effective in complex emotion classification tasks. The proposed model is designed for integration into an existing conversational agent aimed at supporting individuals experiencing emotional and psychological distress. Future work will involve embedding the model in this conversational agent platform to assess its real-world impact on engagement, user experience, and perceived empathy.</p>
Marcelo Alejandro Huerta-Espinoza, Ansel Y. Rodríguez-González, Juan Martinez-Miranda
Copyright (c) 2025 Iberamia & The Authors http://creativecommons.org/licenses/by-nc/4.0
Mon, 08 Dec 2025 00:00:00 +0100

Integrated Feature Fusion in Multiclass Maize Leaf Disease Recognition
https://journal.iberamia.org/index.php/intartif/article/view/2079
<p>Plant diseases are the main cause of plant mortality and destruction, especially in trees. Early detection, however, can help manage and treat this problem effectively. To increase yield, crop and plant lesions must be detected and treated as early as possible. Because it relies solely on visual observation, manual inspection of plant leaf diseases is time-consuming and expensive. The authors present methods for identifying and categorizing plant leaf diseases using computer vision. Typical computer vision steps include pre-processing the original images to highlight infected areas, feature extraction from raw or segmented images, feature fusion, feature selection, and classification. The fusion step combines numerical features of the target, which go beyond the image itself, with the extracted image features to enrich the target's feature representation. The principal issues reported in the literature are low-contrast infected regions and the extraction of redundant and irrelevant information, which degrades classification accuracy, lengthens computation times, and harms the performance of the targeted models. This study proposes a framework for classifying plant leaf diseases based on optimal feature selection and a deep learning fusion model. In the proposed approach, contrast is first enhanced with a pre-processing model, and the problem of an unbalanced dataset is then addressed via data augmentation. The proposed Deep Fusion Learning Model (DFLM) achieves an accuracy of 98.8% in comparison with other models.</p>
Prabhnoor Bachhal, Vinay Kukreja, Sachin Ahuja, Vatsala Anand
Copyright (c) 2025 Iberamia & The Authors http://creativecommons.org/licenses/by-nc/4.0
Mon, 08 Dec 2025 00:00:00 +0100
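Returning to the multimodal emotion recognition entry above (article 2508): it describes Cross-Modal Attention fusion with text as the primary modality and video as the complementary one. The PyTorch sketch below shows one minimal way such a fusion block can be written, with text features as the attention query and video features as keys/values. Feature dimensions, pooling, the classifier head, and the six-way output over Ekman's basic emotions are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of cross-modal attention fusion (text attends to video).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, text_dim=768, video_dim=512, d_model=256, n_heads=4, n_classes=6):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)
        self.video_proj = nn.Linear(video_dim, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, n_classes))

    def forward(self, text_feats, video_feats):
        # text_feats: (B, T_text, text_dim); video_feats: (B, T_video, video_dim)
        q = self.text_proj(text_feats)
        kv = self.video_proj(video_feats)
        attended, _ = self.cross_attn(q, kv, kv)          # text queries attend to video
        fused = torch.cat([q, attended], dim=-1).mean(1)  # concatenate, then mean-pool over time
        return self.classifier(fused)                     # logits over the emotion classes
```

Keeping the text stream as the query side is what makes text the "main" modality here: the video features only reweight and enrich the text representation rather than drive the prediction on their own.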
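For the maize leaf disease entry (article 2079), the abstract describes feature-level fusion of image features with auxiliary numerical features, followed by feature selection and classification. The sketch below shows one common way to implement that general pattern; the concatenation-based fusion, SelectKBest selection, and random-forest classifier are stand-ins chosen for illustration, not the authors' Deep Fusion Learning Model.

```python
# Hedged sketch of feature-level fusion + selection + classification.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def fuse_and_classify(image_feats, numeric_feats, labels, k=256):
    # image_feats: (N, D_img) deep image embeddings; numeric_feats: (N, D_num) extra descriptors
    fused = np.concatenate([image_feats, numeric_feats], axis=1)  # feature-level fusion
    model = make_pipeline(
        SelectKBest(f_classif, k=min(k, fused.shape[1])),  # drop redundant/irrelevant features
        RandomForestClassifier(n_estimators=300, random_state=0))
    return model.fit(fused, labels)
```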