
Browsing by Subject "Inteligência artificial"

Now showing 1 - 20 of 41
  • Item
    A comprehensive software aging analysis in LLMs-based systems
    (2025) Santos, César Henrique Araújo dos; Andrade, Ermeson Carneiro de; http://lattes.cnpq.br/2466077615273972; http://lattes.cnpq.br/9618931332191622
    Large language models (LLMs) are increasingly popular in academia and industry due to their wide applicability across various domains. With their rising use in daily tasks, ensuring their reliability is crucial for both specific tasks and broader societal impact. Failures in LLMs can lead to serious consequences such as interruptions in services, disruptions in workflow, and delays in task completion. Despite significant efforts to understand LLMs from different perspectives, there has been a lack of focus on their continuous execution over long periods to identify signs of software aging. In this study, we experimentally investigate software aging in LLM-based systems using Pythia, OPT, and GPT Neo as the LLM models. Through statistical analysis of measurement data, we identify suspicious trends of software aging associated with memory usage under various workloads. These trends are further confirmed by the Mann-Kendall test. Additionally, our process analysis reveals potential suspicious processes that may contribute to memory degradation.
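The Mann-Kendall trend test used to confirm the aging trends can be sketched in a few lines. This is a minimal pure-Python version without the ties correction, not the authors' instrumentation, and the memory-usage series is illustrative:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: returns (S, Z).
    S > 0 suggests an increasing trend; |Z| > 1.96 is
    significant at the 5% level (no ties assumed)."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    var_s = n * (n - 1) * (2 * n + 5) / 18  # variance without tie correction
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Illustrative memory usage (MB) sampled over a long run
usage = [100, 104, 103, 110, 115, 113, 121, 126, 130, 135]
s, z = mann_kendall(usage)
print(s, round(z, 2))  # S > 0 and Z > 1.96: significant upward trend
```

A sustained upward trend in memory usage like this one is the kind of signal the study treats as a symptom of software aging.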
  • Item
    Uma abordagem baseada em aprendizado de máquina para dimensionamento de requisitos de software
    (2016-12-13) Fernandes Neto, Eça da Rocha; Soares, Rodrigo Gabriel Ferreira; http://lattes.cnpq.br/2526739219416964; http://lattes.cnpq.br/6325583065151828
    This work proposes to perform automatic sizing of software requirements using a machine learning approach. The database used is real and was obtained from a company that works with a Scrum-based development process and Planning Poker estimation. During the studies, data pre-processing, classification and selection of the best attributes were used along with the term frequency–inverse document frequency algorithm (tf-idf) and principal component analysis (PCA). Machine learning and automatic classification were performed with Support Vector Machines (SVM) based on the available data history. The final tests were performed with and without attribute selection by PCA. It is demonstrated that assertiveness is greater when the best attributes are selected. The final tool can estimate the size of user stories with a generalization of up to 91%. The results were considered suitable for use in the production environment without any problems for the development team.
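The tf-idf weighting mentioned above can be illustrated with a toy implementation. This is a didactic sketch, not the authors' pipeline (which pairs tf-idf with PCA and an SVM); the user stories are invented:

```python
import math

def tf_idf(docs):
    """tf-idf weights for a list of tokenized documents.
    tf = term count / doc length; idf = log(N / df)."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            w[term] = tf * math.log(n / df[term])
        weights.append(w)
    return weights

# Invented user stories standing in for the company's requirements
stories = [
    "as a user i want to log in".split(),
    "as a user i want to reset my password".split(),
    "export report as pdf".split(),
]
w = tf_idf(stories)
# "as" appears in every story, so its idf (and weight) is zero,
# while story-specific terms like "password" keep positive weight
print(w[0]["as"], w[1]["password"] > 0)
```

Vectors like these would then feed PCA and the SVM classifier in the approach the abstract describes.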
  • Item
    Abordagem comparativa entre a aplicação da metodologia KATAM e inventário tradicional em plantios de Khaya senegalensis (Desr.) A. Juss
    (2023-09-15) Silva, Kamilo Alaboodi da; Silva, Emanuel Araújo; Hakamada, Rodrigo Eiji; http://lattes.cnpq.br/4186459700983170; http://lattes.cnpq.br/2765651276275384; http://lattes.cnpq.br/5612600854790108
    The forest inventory helps forest managers make decisions. Installing, measuring and managing a network of inventory plots is a costly and time-consuming activity. Remote sensing techniques are increasingly gaining ground in the forestry sector because they have the potential to reduce costs without loss of precision, although their high upfront cost still limits adoption. In this context, the Swedish company Katam Technologies has developed a solution for acquiring and analyzing forest data, KATAM Forest, built on the KASLAM algorithm, which has not yet been widely used and tested in national forests. The goal of this study was to compare, in terms of accuracy and operational performance, the application of KASLAM artificial intelligence through the KATAM Forest application in forest inventory activities in Khaya senegalensis (Desr.) A. Juss plantations (5 years old), located in the state of Pernambuco, with the sampling techniques of a traditional forest inventory. Diameter at breast height (DBH) data was collected within 9 plots, along with videos recorded with the application within the coordinates of the sampling units. Descriptive statistics were computed on the DBH data by plot, followed by the Shapiro-Wilk normality test; where the null hypothesis was rejected, the non-parametric Mann-Whitney U test was used to compare the averages. Operational performance was assessed using the time data obtained during the inventory process within the plots in both approaches. The DBH variable in the two inventory methodologies does not have a clear distribution concentrated close to the mean. The non-parametric test showed no statistical difference between the methodologies in the averages obtained for DBH at the 5% significance level. The Katam methodology required half the time of the traditional inventory. The Katam technologies are very promising in terms of reducing time and costs in forest inventory operations. Therefore, further studies are recommended so that the subject can be disseminated in a practical way.
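The Mann-Whitney U statistic used to compare the averages can be computed by direct pair counting. A minimal sketch with illustrative DBH values, not the study's data:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a: the number of pairs
    (x, y) with x from a, y from b and x > y (ties count 0.5)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1
            elif x == y:
                u += 0.5
    return u

# Invented DBH values (cm): field-measured vs. KATAM-derived
dbh_field = [12.1, 13.4, 11.8, 12.9, 13.0]
dbh_katam = [12.3, 13.1, 12.0, 12.7, 13.2]
u = mann_whitney_u(dbh_field, dbh_katam)
# Under H0, U is centered at n1*n2/2 = 12.5; a value near it
# is consistent with the study's "no difference" finding
print(u)
```

A full test would convert U to a p-value via its normal approximation; the statistic itself is just this pairwise count.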
  • Item
    Análise de mensagens de Commit com IA: uma nova perspectiva para o algoritmo SZZ
    (2025-03-17) Souza, Camila Nunes de Paula; Cabral, George Gomes; http://lattes.cnpq.br/8227256452129177; http://lattes.cnpq.br/8347479672060133
    This work proposes a novel approach to improve the SZZ algorithm used to identify commits that introduce defects in software systems. The proposed methodology uses ChatGPT to perform a semantic analysis of commit messages, classifying them into two categories: "introduces bug" and "does not introduce bug". The goal is to improve the reliability of the classifications generated by SZZ, reducing false positives and improving the quality of the data used to build predictive defect-detection models. To validate the approach, experiments were carried out on two datasets (Neutron and Nova), using the Random Forest and SVC classifiers, along with balancing techniques such as oversampling and undersampling. The results show that integrating ChatGPT into SZZ significantly reduced the number of commits erroneously classified as bug-introducing, and improved classifier performance, especially for Random Forest. We conclude that the use of LLMs can improve the effectiveness of SZZ, contributing to better software quality and more efficient defect detection.
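The filtering step this abstract describes can be sketched as follows. The prompt wording and the `fake_classify` stand-in are assumptions for illustration only; a real system would call the ChatGPT API where the stand-in sits:

```python
def build_prompt(message):
    """Prompt template in the spirit of the approach: ask the model to
    label a commit message with one of two fixed categories."""
    return (
        "Classify the following commit message as 'introduces bug' "
        "or 'does not introduce bug'. Answer with the label only.\n\n"
        f"Commit message: {message}"
    )

def filter_szz_candidates(candidates, classify):
    """Keep only SZZ-flagged commits that the classifier also labels
    as bug-introducing; the rest are treated as false positives."""
    return [c for c in candidates
            if classify(build_prompt(c)) == "introduces bug"]

# Hypothetical stand-in for the LLM call (illustration only)
def fake_classify(prompt):
    return ("does not introduce bug" if "refactor" in prompt
            else "introduces bug")

szz_flagged = ["fix null pointer in parser", "refactor: rename variables"]
kept = filter_szz_candidates(szz_flagged, fake_classify)
print(kept)
```

The surviving commits then form the cleaner training data fed to the Random Forest and SVC models mentioned above.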
  • Item
    Análise de soluções criadas em tecnologia da informação com uso de inteligência artificial para cidades inteligentes
    (2024-08-31) Silva, Lucas Melo da; Machado, Luiz Claudio Ribeiro; http://lattes.cnpq.br/6359712741593257
    This article studies the impact on citizens of Recife who use the information technology solutions created by the Empresa Municipal de Informática - EMPREL. The objective is to measure this impact based on the functionality the programs offer their users, to describe the programs, and to demonstrate their relevance as measured by the number of users registered and benefiting from the policies. The methodology is qualitative and quantitative bibliographic research using secondary statistical data. The most accessed programs and the benefits they bring to users were identified. It was possible to conclude that the information technology programs of the Recife City Hall are relevant to their users, whether in the form of financial benefits, information, or popular participation in the problems and challenges of community improvement and public policy.
  • Item
    Análise fundamentalista e técnica: a importância do analista e do progresso tecnológico no processo de análise de investimentos
    (2019) Silva, Diego de Oliveira; Gomes, Sónia Maria Fonseca Pereira Oliveira; http://lattes.cnpq.br/9795791528582607
    This paper explores fundamentalist and technical analysis and the importance of the analyst and of technological progress in the investment analysis process, given the importance of the financial market in the optimal allocation of resources in the economy. Through bibliographic research based on a narrative and integrative review, a descriptive analysis of the main aspects of each of the investment analysis techniques was carried out: from the perspective of fundamentalist analysis, the analysis of the economic scenario and the importance of accounting information; and from the perspective of technical analysis, the main charts and indicators and the importance of technology. In addition, the analyst's relevant role and technological progress were shown to be significant factors for the successful implementation of investment analysis techniques. A brief discussion was also made of artificial intelligence and the advent of robot-investor technology, and of how such technology has revolutionized the way we operate in the market and contributed to the effectiveness of the techniques. The discussion showed that balance and complementarity are the key to success in implementing investment analysis methodologies, and that the analyst and technology mutually contribute to optimizing the results achieved.
  • Item
    Avaliação de algoritmos baseados em Deep Learning para Localizar placas veiculares brasileiras em ambientes complexos
    (2019) Marques, Bruno Henrique Pereira; Macário Filho, Valmir; http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/3847789259699701
    With the increase in the number of private vehicles, violations of traffic laws and vehicle theft have also increased, making better traffic management and control necessary. A vehicle and its owner are identified through the unique, mandatory vehicle license plate (LP), and to inspect plates and extract their data efficiently, automated systems for detecting and recognizing vehicle license plates are recommended. This work presents a study and evaluation of Deep Learning algorithms for locating Brazilian LPs in complex environments. For the experiments, a dataset of Brazilian LP images was created covering problems such as varying resolution, quality, lighting and scene perspective. The Deep Learning algorithms YOLOv2 and YOLOv3 were used, which, to the best of our knowledge, had not yet been studied for this task. In addition, the Tree-structured Parzen Estimator (TPE) algorithm was used to optimize hyperparameters and maximize the performance of the selected convolutional neural networks. The evaluation used the following performance metrics: prediction time, Intersection over Union (IoU) and confidence rate. The experimental results show that YOLOv3 performed best, achieving 99.3% vehicle license plate detection.
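Intersection over Union, one of the evaluation metrics listed, measures the overlap between a predicted box and the ground-truth plate. A minimal sketch with invented box coordinates:

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Invented ground-truth plate box vs. a detector prediction
v = iou((10, 10, 110, 40), (20, 12, 115, 42))
print(round(v, 3))  # ≈ 0.757: a fairly tight detection
```

A detection is typically counted as correct when its IoU with the ground truth exceeds a fixed threshold such as 0.5.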
  • Item
    Avaliação de algoritmos multi-classe para classificação de solicitações enviadas a Ouvidoria Geral do Estado de Pernambuco
    (2021-03-29) Carvalho, Luiz Henrique Teixeira; Ferreira, Jeneffer Cristine; http://lattes.cnpq.br/3000364145302421
    The Ombudsman's Office is a public agency that covers the entire state of Pernambuco and every day receives several requests on the most varied themes involving all other organs of the state; at certain times of the year these requests can strain state resources. The main objective of this work is to apply multi-class classification algorithms to data obtained from the transparency portal in order to predict requests sent to the Ombudsman's Office of the State of Pernambuco. To obtain the data, scraping was performed on the Pernambuco Transparency Portal, yielding data for the years 2017, 2018 and 2019. The Decision Tree, Random Forest, Bagging and kNN algorithms were applied to the ombudsman data. The results showed that the automatic classification algorithms, particularly Decision Tree, Random Forest and Bagging, achieved 55 percent and 32 percent in the type and organ classes respectively, equivalent to one hit every two attempts in the type class and one hit every three attempts in the organ class. The algorithms were also evaluated for model creation and training time, with the Decision Tree algorithm being the fastest.
  • Item
    O ChatGPT como ferramenta pedagógica: novas perspectivas para o ensino de espanhol
    (2024-03-05) Araújo, Emmanuel Tiago Cardoso Corrêa de; Oliveira, Aline Fonseca de; http://lattes.cnpq.br/1895304971163472
    This article presents the results of a theoretical study on the applicability of ChatGPT, a Natural Language Processing (NLP) tool developed by OpenAI, as a pedagogical resource in teaching Spanish as a Foreign Language (ELE). The research addresses the growing impact of Artificial Intelligence (AI) in education, highlighting how technology has revolutionized teaching and learning practices. It examines the evolution of pedagogical tools over time, showing how the integration of AI into education represents a qualitative leap toward personalized teaching, offering more immersive, interactive and adaptive learning experiences. The article stresses the importance of writing clear, precise and well-contextualized prompts to maximize the effectiveness of interaction with this AI tool. The research also explores strategies for improving prompt formulation, emphasizing the adaptability and flexibility of ChatGPT in adjusting to different learning contexts and levels of proficiency in Spanish. In addition, specific prompt templates for teaching Spanish are proposed and analyzed, illustrating how these commands can be structured to meet different educational needs.
  • Item
    Classificação de banhistas na faixa segura de praia
    (2018) Silva, Ricardo Luna da; Macário Filho, Valmir; http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/3088880066515750
    In order to avoid risks in aquatic environments, such as drownings and shark attacks, beach areas should be constantly monitored, and when necessary, rescue workers must respond quickly. This work proposes a classification algorithm for people as part of a system for automatic monitoring of beach areas. Certain environmental factors are quite challenging, such as varying brightness on cloudy days, the position of the sun at different times of the day, difficulty in segmenting images, submerged people, and positions far from the camera. For this type of problem, the literature commonly combines image descriptors with a classifier for people detection. This work studies beach images using the following image descriptors and their pairwise combinations: Hu Moments, Zernike Moments, Gabor Filter, Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP) and Haar. In addition, a dimensionality reduction technique (PCA) is applied for feature selection. The detection rate is evaluated with the following classifiers: Random Forest, cascade classifier, and Support Vector Machine (SVM) with linear and radial kernels. The experiments showed that the SVM classifier with radial kernel, using the HOG and LBP descriptors and applying the PCA technique, produced promising results, obtaining 90.31% accuracy.
  • Item
    Coh-Metrix PT-BR: uma API web de análise textual para à educação
    (2021-03-02) Salhab, Raissa Camelo; Mello, Rafael Ferreira Leite de; http://lattes.cnpq.br/6190254569597745; http://lattes.cnpq.br/6761163457130594
    Coh-Metrix is a computational system that provides different measures of textual analysis, including readability, coherence and textual cohesion. These measures allow a more in-depth analysis of different types of educational texts such as essays, answers to open questions and messages in educational forums. This paper describes the features of a prototype, encompassing a website and an API, of a Brazilian Portuguese version of the Coh-Metrix measures.
  • Item
    Comparação de algoritmos de reconhecimento de gestos aplicados à sinais estáticos de Libras
    (2019-07-12) Cruz, Lisandra Sousa da; Cordeiro, Filipe Rolim; Macário Filho, Valmir; http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/4807739914511076; http://lattes.cnpq.br/2111589326272463
    Brazilian Sign Language (BSL) was created to meet the need for non-verbal communication by the deaf, who for a long time were forced to learn Brazilian Portuguese as their first language. Nowadays, BSL is Brazil's second official language and the deaf community's first language, as Portuguese is for hearing people. Nevertheless, even with wide recognition, Brazil's second official language is not known by the majority of the Brazilian population. The inclusion process aims to ensure equality for the impaired, so that a disability does not become an impediment to living in society. With the arrival of technology and advances in Artificial Intelligence (AI), technological means were created to enable inclusion. Within AI, pattern recognition is one of the most studied subthemes at present, and it is widely applied in the literature to gesture classification for many sign languages. The key task of this research is to identify the hands that form a certain BSL gesture and thus recognize the class it belongs to. For American Sign Language (ASL) classification, the Feature Fusion-based Convolutional Neural Network (FFCNN), an extension of the Convolutional Neural Network (CNN), obtained the best accuracy in comparison to other networks, such as the Visual Geometry Group (VGG) network. Based on this scenario, this work applies the FFCNN to BSL static gestures to verify whether it also obtains the best accuracy, as it did for ASL. To this end, this work compares three classifiers: VGG (a CNN with variants of 13 and 16 layers), the FFCNN, and a Multi-Layer Perceptron network used in the literature for recognition of BSL static gestures. The algorithms were applied to a BSL dataset with 9,600 images of 40 signs. The results show that VGG with 16 layers obtained the best accuracy among the models described in this work, corresponding to 99.45%.
  • Item
    Comparação de modelos de ia para extração de dados em glicosímetros
    (2024-09-25) Carmo, Genivaldo Braynner Teixeira do; Correia, Julyanne Maria dos Santos; Silva Filho, Ronaldo Rodrigues da; Sampaio, Pablo Azevedo; Medeiros, Robson Wagner Albuquerque de
    Diabetes is a chronic condition that requires constant monitoring of blood glucose levels, making glucometers essential for obtaining this information. This work aims to compare three Artificial Intelligence models, Gemini, GPT-4o and Llava 1.5, to identify which of them most effectively extracts glucose, date and time data from glucometers. Using prompt engineering techniques, we seek to improve the precision and efficiency of extracting these data, optimizing monitoring and contributing to better health management for diabetic patients.
  • Item
    Desenvolvimento de aplicação em Outsystems para área de saúde utilizando práticas do HIPAA compliance
    (2025-03-26) Carvalho, Udney Epaminondas; Bocanegra, Silvana; Marques, Paulo César Florentino; http://lattes.cnpq.br/1264573844331881; http://lattes.cnpq.br/4596111202208863; http://lattes.cnpq.br/3835096844800301
    The pressing need for companies to undergo digital transformation has led many to seek resources that can deliver agile, robust solutions for digitizing their processes. This digital transformation also reaches the healthcare sector, which, in addition to the challenges inherent in such changes, must handle patients' sensitive information and the sharing of this data with care. To meet demands like these, where agility and security are needed in project development, low-code platforms have become popular: by leveraging the benefits of cloud computing and the ability to create code using visual resources, they ease technical learning and allow robust applications to be built in less time. This work presents the use of a low-code platform (OutSystems) in the development of a web application for managing and conducting medical appointments. As a case study, a product from the startup ZophIA.tech is used, which applies artificial intelligence enhanced by geometric analysis to aid in the diagnosis of schizophrenia and other mental illnesses through patients' speech and gestures. The data security rules of the American HIPAA standard will be implemented to handle patients' sensitive information.
  • Item
    Desenvolvimento de um sistema auxiliar para controle de acesso de veículos para a Universidade Federal Rural de Pernambuco
    (2024-03-08) Izidio, Stefany Vitória da Conceição; Garrozi, Cícero; http://lattes.cnpq.br/0488054917286587; http://lattes.cnpq.br/0642557485551355
    Currently, vehicle access control at the Federal Rural University of Pernambuco is done manually, on paper, by university employees. Vehicles registered with the university receive a specific windshield sticker granting direct entry. This type of control is not very safe, as the sticker can easily be cloned and used by vehicles without real authorization. Furthermore, the employee's attention is briefly diverted while writing down the plate on paper. This work aims to make the vehicle control process more reliable and safe through the development of a prototype system that assists in access control. The proposed solution captures an image of the vehicle, identifies the license plate, and checks in a database whether the plate is registered; the system then produces a light signal to indicate to the employee whether the plate is registered or not. To achieve this, hardware was assembled and embedded software was developed. The hardware consists of a set of electronic devices such as LEDs, a camera, and a processing device. The software is a set of libraries developed mostly in Python. For the embedded software, a set of photos of Brazilian car license plates was used to train an object detection model to detect the plates. Finally, an optical character recognition service was used to extract the content of the plate, making it possible to check the registration and emit the light signal to the user.
  • Item
    Detecção de fake news: uma abordagem baseada em Large Language Models e Prompt Engineering
    (2025-03-20) Fonseca, Pablo Weslley Silva da; Lima, Rinaldo José de; http://lattes.cnpq.br/7645118086647340; http://lattes.cnpq.br/6258598537884813
    This work addresses the use of Large Language Models (LLMs) for detecting fake news in English and Portuguese. Fake news has had negative impacts, such as misinformation and social conflict, and is widely disseminated through social networks. Although traditional verification methods, such as manual checking and fact-checking agencies, are effective, the application of machine learning and deep learning algorithms has brought important advances. However, these models have limitations, such as loss of semantic context and training costs. The introduction of the Transformer architecture enabled significant advances with LLMs such as BERT, GPT and T5, due to their ability to understand complex linguistic patterns. This work proposes a fake news detection approach based on information retrieval from the Web and the Qwen2.5-7B-Instruct model, comparing its performance with proposals that combine information retrieval with traditional models and LLMs. The results highlight advantages and disadvantages, contributing to future improvements in automated fake news detection systems.
  • Item
    Estudo comparativo de técnicas de seleção de contextos em sistemas de recomendação de domínio cruzado sensivéis ao contexto
    (2018) Brito, Victor Sales de; Silva, Douglas Véras e; http://lattes.cnpq.br/2969243668455081; http://lattes.cnpq.br/0340874538265508
    There are several approaches to implementing a recommendation system, such as Cross-Domain Context-Aware Recommendation Systems (CD-CARS), which was used in this work because it enables quality improvement of recommendations using multiple domains (e.g. books, movies and music) while taking into account the use of contexts (e.g. season, time, company and location). However, caution is needed when using contexts to suggest items, since contexts considered "irrelevant" may impair recommendation performance. Therefore, the selection of relevant contexts is a key factor in the development of CD-CARS, and there is a lack of papers on selection techniques for datasets with contextual information and cross-domain data. Thus, this work applied the Information Gain (IG), Chi-square test, Minimum Redundancy Maximum Relevance (MRMR) and Monte Carlo Feature Selection (MCFS) techniques to twelve datasets with three different contextual dimensions (time, location and company) and distinct domains (books, television and music). From the results obtained, the MCFS technique classified the relevance of the contexts more satisfactorily than the other techniques.
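Information Gain, the first of the selection techniques listed, scores a context by how much it reduces the entropy of the target labels. A toy sketch with invented ratings and context values, not the study's datasets:

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def information_gain(labels, feature):
    """IG of a categorical feature: H(labels) minus the weighted
    entropy of labels within each feature value."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature):
        subset = [l for l, f in zip(labels, feature) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Invented toy data: did the user like the recommendation,
# given the 'time' and 'company' context dimensions?
liked    = ["yes", "yes", "no", "no"]
time_ctx = ["evening", "evening", "morning", "morning"]
company  = ["alone", "friends", "alone", "friends"]
print(information_gain(liked, time_ctx))  # time fully predicts the label
print(information_gain(liked, company))   # company carries no information
```

A context whose IG is near zero, like `company` here, is exactly the kind flagged as "irrelevant" and dropped before recommendation.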
  • Item
    Um estudo de caso para previsão de partidas de futebol utilizando o ChatGPT
    (2024-10-01) Silva, Thiago Luiz Barbosa da; Nascimento, Leandro Marques do; http://lattes.cnpq.br/9163931285515006
    The present study aims to develop and test a tool for predicting football match outcomes using the ChatGPT language model. The research explores the potential of this technology to process match data and generate predictions, comparing its performance with the probabilities offered by betting houses. The method includes data collection through web scraping from sources such as Placar de Futebol and FBref, which allowed the creation of a rich database with detailed information about teams, championships and statistics. From this database, the tool was developed within the Arena Sport Club project, which includes features for visualizing results and football-related information. Different prompt-generation strategies were implemented in the tool to determine the best way to instruct the model to provide accurate predictions. The results showed that the model has the potential to make effective football match predictions, approaching the accuracy rates of betting houses. However, the study identified challenges such as high financial costs and the need for continuous adjustments to address the complexity of the matches and the variables involved. The conclusion suggests that while ChatGPT offers a promising tool for sports predictions, its use in real-world contexts needs to be optimized. Future research can enhance the application of this technology, reducing costs and improving accuracy over time.
  • Item
    Explainable Artificial Intelligence - uma análise dos trade-offs entre desempenho e explicabilidade
    (2023-08-18) Assis, André Carlos Santos de; Andrade, Ermeson Carneiro de; Silva, Douglas Véras e; http://lattes.cnpq.br/2969243668455081; http://lattes.cnpq.br/2466077615273972; http://lattes.cnpq.br/3963132175829207
    Explainability is essential for users to efficiently understand, trust, and manage computer systems that use artificial intelligence. Thus, as well as assertiveness, understanding how the decision-making process of the models occurred is fundamental. While there are studies that focus on the explainability of artificial intelligence algorithms, as far as we know none of them have comprehensively analyzed the trade-offs between performance and explainability. This research aims to fill this gap by investigating both transparent algorithms, such as Decision Tree and Logistic Regression, and opaque algorithms, such as Random Forest and Support Vector Machine, in order to evaluate the trade-offs between performance and explainability. The results reveal that opaque algorithms have low explainability and do not perform well in response time due to their complexity, but are more assertive. On the other hand, transparent algorithms have more effective explainability and better response times, but in our experiments the accuracy obtained was lower than that of the opaque models.
  • Item
    Geração aumentada para recuperação de dados urbanos integrados: consolidando dados do IBGE, Censo, CNEFE e OSM para a otimização do planejamento urbano
    (2025-03-21) Conceição, Keyson Raphael Acioli da; Lima, Rinaldo José de; http://lattes.cnpq.br/7645118086647340; http://lattes.cnpq.br/3198610477751043
    In recent years, the fields of Artificial Intelligence (AI) and machine learning (ML) have revolutionized the domain of urban planning, as they allow substantial volumes of data to be analyzed effectively, encouraging better allocation of resources and delivery of public services. To this end, the intelligent agent proposed in this work gathers data from several sources, including the Demographic Census, the National Address Register for Statistical Purposes (CNEFE), and OpenStreetMap (OSM), to offer context-based answers concerning population distribution and access to different urban services. The proposed approach includes a processing pipeline that implements normalization, vector indexing of the information, and semantic representation to make queries more effective. To evaluate the proposed system, an experiment was conducted with urban planning experts, analyzing the relevance, clarity and usefulness of the answers generated by the system. The results show that the agent can detect areas with little coverage of necessary services, indicating an adequate allocation. However, other challenges, such as the need for clearer answers and broader spatial coverage, were recognized as opportunities for future work.
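The vector-index retrieval step of such a pipeline can be sketched with cosine similarity over embeddings. The three-dimensional "embeddings" and tract labels below are invented for illustration; real systems use learned embeddings of the consolidated records:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_vec, index, k=2):
    """Return the k records whose embedding is closest to the query;
    in a retrieval-augmented pipeline these become the LLM's context."""
    ranked = sorted(index, key=lambda rec: cosine(query_vec, rec["vec"]),
                    reverse=True)
    return [rec["id"] for rec in ranked[:k]]

# Toy 3-d "embeddings" for hypothetical census tracts
index = [
    {"id": "tract-01 (low school coverage)", "vec": [0.9, 0.1, 0.0]},
    {"id": "tract-02 (dense housing)",       "vec": [0.1, 0.9, 0.2]},
    {"id": "tract-03 (few health units)",    "vec": [0.8, 0.2, 0.1]},
]
print(retrieve([1.0, 0.0, 0.0], index))
```

The retrieved records, not the whole database, are what the agent passes to the language model to ground its answer.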
Arandu - Repositório Institucional da UFRPE

Universidade Federal Rural de Pernambuco - Biblioteca Central
Rua Dom Manuel de Medeiros, s/n, Dois Irmãos
CEP: 52171-900 - Recife/PE

+55 81 3320 6179  repositorio.sib@ufrpe.br

DSpace software copyright © 2002-2025 LYRASIS
