01.1 - Graduação (Sede)
Permanent URI for this community: https://arandu.ufrpe.br/handle/123456789/2
Search Results
21 results
Item: Predição do consumo energético de dispositivos LoRa usando aprendizagem de máquina (2024-12-10)
Pimentel, Henrique Pablo Pinheiro dos Santos; Araújo, Danilo Ricardo Barbosa de
http://lattes.cnpq.br/2708354422178489; http://lattes.cnpq.br/0078523045227122
The Internet of Things (IoT) is a constantly evolving concept that has gained prominence in both academia and industry. Within it, energy consumption is a key factor in determining how long devices can operate and how often they need maintenance. This article investigates the application of machine learning algorithms to predict the energy consumption of IoT-LoRa devices, making it possible to estimate device battery life and autonomy. The methodology involved building a dataset from experiments with ESP32 development boards, capturing metrics such as sleep time, connection type, and energy consumption. Artificial Intelligence (AI) techniques were then applied to predict energy consumption from these variables. According to the results obtained, the best technique for predicting energy consumption was the Decision Tree, with a coefficient of determination above 96%. The study contributes to decision-making processes aimed at selecting IoT devices based on their projected battery autonomy.

Item: O ChatGPT como ferramenta pedagógica: novas perspectivas para o ensino de espanhol (2024-03-05)
Araújo, Emmanuel Tiago Cardoso Corrêa de; Oliveira, Aline Fonseca de
http://lattes.cnpq.br/1895304971163472
This article presents the results of a theoretical study on the applicability of ChatGPT, a Natural Language Processing (NLP) tool developed by OpenAI, as a pedagogical resource in teaching Spanish as a Foreign Language (ELE).
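The coefficient of determination reported in the LoRa energy study above can be computed directly from predictions and measurements. A minimal pure-Python sketch; the energy readings below are hypothetical, not from the study's dataset:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical per-cycle energy readings (mAh) vs. a model's predictions
y_true = [12.0, 8.5, 15.2, 9.8, 11.1]
y_pred = [11.8, 8.9, 15.0, 10.1, 11.0]
print(r_squared(y_true, y_pred))  # ≈ 0.987, i.e. above the 96% threshold cited
```

A perfect predictor yields exactly 1.0; values above 0.96 match the level the study reports for its Decision Tree model.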
The research addresses the growing impact of Artificial Intelligence (AI) in education, highlighting how this technology has revolutionized teaching and learning practices. It examines how pedagogical tools have evolved over time, showing that integrating AI into education represents a qualitative leap toward personalized teaching, offering more immersive, interactive, and adaptive learning experiences. The article highlights the importance of writing clear, precise, and well-contextualized prompts to maximize the effectiveness of interactions with this AI tool. The research also explores strategies for improving prompt formulation, emphasizing ChatGPT's adaptability and flexibility in adjusting to different learning contexts and levels of proficiency in Spanish. In addition, specific prompt models for teaching Spanish are proposed and analyzed, illustrating how these commands can be structured to meet different educational needs.

Item: Relações de consumo com as tecnologias digitais de informação e comunicação: o caso da Nat Natura no Twitter (2024-10-04)
Santos, Melyssa Ingrid dos; Leão, Éder Lira de Souza
http://lattes.cnpq.br/4434499456331867; http://lattes.cnpq.br/2698814524148772
Due to the increasing use and personification of virtual assistants by companies seeking to serve their customers better, and thus to reshape the customer-company relationship, the need arose to understand how this growing phenomenon occurs and what effects it produces in today's consumer society.
Therefore, this work aimed to (i) relate the Culture of Convergence to the emergence of new consumer relationships; (ii) identify how Natura and the Nat Natura persona are perceived by consumers and what effects they generate in new consumer relationships; (iii) analyze the profile of Nat Natura on the digital social network Twitter; and (iv) identify and analyze the elements that characterize the brand's representation for new consumers, based on data from a questionnaire applied to the public. To obtain the results, consumers' levels of knowledge of and involvement with the brand persona were observed. For this purpose, qualitative research was carried out through a literature review and an analysis of the Nat Natura profile on Twitter. The six publications from the profile with the greatest engagement and reach between September 2022 and August 2024 were selected and analyzed. The results show that it is through informal language and through shared tastes, identity, and values with its audience that the Nat Natura persona seeks to represent its new consumers and create more personal connections with them.

Item: Inteligência artificial no ensino fundamental com robótica lego, aprendizagem baseada em projetos e gamificação (2024)
Souza, Diogo Albuquerque Dias de; Rodrigues, Rodrigo Lins
http://lattes.cnpq.br/5512849006877767; http://lattes.cnpq.br/3374743431217595

Item: Um estudo de caso para previsão de partidas de futebol utilizando o ChatGPT (2024-10-01)
Silva, Thiago Luiz Barbosa da; Nascimento, Leandro Marques do
http://lattes.cnpq.br/9163931285515006
The present study aims to develop and test a tool for predicting football match outcomes using the ChatGPT language model. The research explores the potential of this technology to process match data and generate predictions, comparing its performance with the probabilities offered by bookmakers.
The method includes data collection through web scraping from sources such as Placar de Futebol and FBref, which allowed the creation of a rich database with detailed information about teams, championships, and statistics. From this database, the tool was developed within the Arena Sport Club project, which includes features for visualizing results and football-related information. Different prompt-generation strategies were implemented in the tool to determine the best way to instruct the model to provide accurate predictions. The results showed that the model has the potential to make effective football match predictions, approaching the accuracy rates of bookmakers. However, the study identified challenges such as high financial costs and the need for continuous adjustments to handle the complexity of the matches and the variables involved. The conclusion suggests that while ChatGPT offers a promising tool for sports predictions, its use in real-world contexts still needs to be optimized. Future research can enhance the application of this technology, reducing costs and improving accuracy over time.

Item: Desenvolvimento de um sistema auxiliar para controle de acesso de veículos para a Universidade Federal Rural de Pernambuco (2024-03-08)
Izidio, Stefany Vitória da Conceição; Garrozi, Cícero
http://lattes.cnpq.br/0488054917286587; http://lattes.cnpq.br/0642557485551355
Currently, vehicle access control at the Federal Rural University of Pernambuco is done manually, on paper, by university employees. There is also direct release for vehicles that are registered with the university and receive a specific sticker for the windshield. This type of control is not very safe, as the sticker can easily be cloned and used by vehicles without real authorization. Furthermore, writing down the license plate on paper briefly diverts the employee's attention.
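The prompt-generation step described in the football-prediction study above could be sketched as a function that packs scraped statistics into a structured instruction for the model. Team names, the stats format, and the answer schema below are illustrative assumptions, not the study's actual prompts:

```python
def build_prediction_prompt(home, away, stats):
    """One possible prompt-generation strategy: embed recent-form
    statistics in a structured instruction for a language model."""
    lines = [
        "You are a football analyst. Predict the outcome of the match below.",
        f"Match: {home} vs {away}",
        "Recent form (last 5 games, W-D-L):",
    ]
    for team, (w, d, l) in stats.items():
        lines.append(f"- {team}: {w}-{d}-{l}")
    lines.append("Answer with one of: home win, draw, away win.")
    return "\n".join(lines)

# Hypothetical teams and form records
prompt = build_prediction_prompt(
    "Sport Recife", "Náutico",
    {"Sport Recife": (3, 1, 1), "Náutico": (2, 2, 1)},
)
print(prompt)
```

The resulting string would then be sent to the model via an API call; constraining the answer format, as in the last line, is one way to make responses machine-parseable.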
This work aims to make the vehicle control process more reliable and secure through the development of a prototype system that assists in access control. The proposed solution captures an image of the vehicle, identifies the license plate, and checks in a database whether the plate is registered. The system then produces a light signal to indicate to the employee whether the license plate is registered or not. To achieve this, a hardware product was assembled and embedded software was developed. The hardware consists of a set of electronic devices such as LEDs, a camera, and a processing device. The software is a set of libraries developed mostly in Python. For the embedded software, a set of photos of Brazilian car license plates was used to train an object detection model to detect the plates. Finally, an optical character recognition service was used to extract the content of each plate, making it possible to register plates and emit the light signal to the user.

Item: Geração automática de sistemas backend com o suporte de IA generativa seguindo a arquitetura limpa (2024-03-06)
Costa, Henrique Sabino da; Burégio, Vanilson André de Arruda
http://lattes.cnpq.br/3518416272921878; http://lattes.cnpq.br/5381537544189009
In this work, we investigated the potential contribution of automatic code synthesis technologies, particularly OpenAI's GPT-4, to the maintenance of and adherence to best practices in software architecture in startups. Given that these companies operate in environments of rapid change and innovation, but with limited resources, practices such as unit testing and documentation are often neglected. Conversely, we emphasize the importance of such practices for the maintainability and scalability of applications.
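The decision step of the access-control prototype above (OCR-extracted plate checked against a registry, then a green or red signal) can be sketched in a few lines. The plate pattern and the plate values are illustrative assumptions; the study's actual detection and OCR pipeline is not reproduced here:

```python
import re

# Matches both the legacy Brazilian plate format (ABC1234) and the
# Mercosul format (ABC1D23) -- the pattern is an assumption for illustration
PLATE_RE = re.compile(r"^[A-Z]{3}\d[A-Z0-9]\d{2}$")

def check_access(plate, registered):
    """Return 'green' if the OCR-extracted plate is registered, else 'red'."""
    plate = plate.strip().upper().replace("-", "")
    if not PLATE_RE.match(plate):
        return "red"  # unreadable or invalid plate text
    return "green" if plate in registered else "red"

# Hypothetical registry of authorized plates
registered = {"ABC1D23", "KJF2055"}
print(check_access("abc-1d23", registered))  # green
print(check_access("XYZ9Z99", registered))   # red
```

In the prototype, the returned value would drive the LED rather than being printed.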
As a means to reconcile a fast pace of development with the need for good practices, we proposed the use of generative language models (GLMs), specifically GPT-4, for code generation following the principles of clean architecture, a set of concepts defined by Robert C. Martin for developing scalable and maintainable projects. The methodological approach combined qualitative and quantitative analysis, focused on exploring and adapting prompts for code generation and on developing practical examples in several programming languages. Notably, three projects in C#, JavaScript, and Python were produced and evaluated according to metrics of abstraction, instability, and adherence to the Main Sequence, key concepts in maintaining clean architecture. The results indicated that, despite the potential of the proposed technology to accelerate development and promote adherence to good practices through automation, there are significant gaps in GPT-4's ability to generate code fully aligned with clean architecture and executable without manual intervention. Problems related to inconsistency in project structure and the integrity of the generated code were observed, suggesting that, while the tool offers a promising foundation for enhancing efficiency in less complex projects, its applicability in complex and diverse contexts still presents challenges. Therefore, it is concluded that the use of GLMs such as GPT-4 in automatic code generation represents a valuable auxiliary tool for startups in software development. However, the need for manual adjustments to the code and for assurance of full adherence to recommended software architecture practices reinforces the idea that such technologies should be seen as complementary to human work, not as complete substitutes.
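The abstraction, instability, and Main Sequence metrics used to evaluate the generated projects come from Robert C. Martin's component-design literature. A minimal sketch of the three formulas, with hypothetical dependency counts for one component:

```python
def instability(fan_out, fan_in):
    """I = Ce / (Ca + Ce): 0 = maximally stable, 1 = maximally unstable."""
    total = fan_in + fan_out
    return fan_out / total if total else 0.0

def abstractness(abstract_classes, total_classes):
    """A = abstract types / total types in the component."""
    return abstract_classes / total_classes if total_classes else 0.0

def main_sequence_distance(a, i):
    """D = |A + I - 1|: 0 means the component sits on the Main Sequence."""
    return abs(a + i - 1)

# Hypothetical component: 2 abstract classes out of 8,
# 3 outgoing (Ce) and 9 incoming (Ca) dependencies
a = abstractness(2, 8)   # 0.25
i = instability(3, 9)    # 0.25
print(main_sequence_distance(a, i))  # 0.5 -> far from the Main Sequence
```

A concrete, heavily depended-upon component (low A, low I) and an abstract, freely depending one (high A, high I) both score near zero; the example component is concrete yet unstable, hence its large distance.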
For future work, it is recommended to deepen the investigation of GLMs specialized in code generation and to expand the experiments to a wider range of programming languages and frameworks, aiming to maximize the applicability and effectiveness of this approach.

Item: Técnicas de comitês para a estimação de esforço na correção de software (2019-12-10)
Guimarães, Ariana Lima; Soares, Rodrigo Gabriel Ferreira
http://lattes.cnpq.br/2526739219416964; http://lattes.cnpq.br/2605671850587343
Well-defined planning of a software project from its early stages is indispensable to its success, whether development involves creating a product or maintaining one. In the software life cycle, maintenance is performed continuously after the product is built and delivered, in parallel with test execution by engineers and/or users. At this stage, User Stories and Issue Reports are the first documents to be produced. These documents describe, in natural language, business requirements, error scenarios found, and expected corrections and enhancements for the system. Their purpose is, among other things, to rank the activities that must be accomplished during the project. Thus, in line with the available resources (human, financial, and temporal), it is possible to estimate the effort that the activities will require and to generate essential information for effective and efficient planning. Because these documents are written as natural-language text, there is an opportunity to use Natural Language Processing and Machine Learning (ML) to predict software effort. In practice, in the daily life of software factories, it is common to rely on the opinions of experts and project staff to judge the effort required by an activity during Planning Poker sessions. Usually, in this technique, effort is measured in Story Points, which follow the Fibonacci sequence. However, this planning model requires additional resources to carry out.
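Since Planning Poker decks use a Fibonacci-like scale, one simple post-processing step for any continuous effort estimate is to snap it to the nearest card. The scale values and the snapping rule below are illustrative assumptions, not part of the study's method:

```python
# A typical Planning Poker card scale (Fibonacci-like)
STORY_POINTS = [1, 2, 3, 5, 8, 13, 21]

def snap_to_scale(estimate):
    """Round a continuous effort estimate to the closest story-point card."""
    return min(STORY_POINTS, key=lambda p: abs(p - estimate))

print(snap_to_scale(6.2))   # 5
print(snap_to_scale(10.9))  # 13
```

Casting effort estimation as classification over these discrete card values, as the study does, avoids the need for such snapping altogether.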
After the learning phase, applying ML gives a system the ability to capture the team's experience and replicate it quickly and automatically to estimate activity effort. Thus, this work falls within the ML field, proposing a PV-DM Ensemble approach to extract features from Issue Reports in order to estimate Story Points, the effort indicator. Compared with two other approaches, BoW and plain PV-DM, the proposed technique presented good results, an f-measure of about 80%, with a supervised SVM classifier. The experimental results proved to be a starting point for further study and improvement of the PV-DM Ensemble approach.

Item: Abordagem comparativa entre a aplicação da metodologia KATAM e inventário tradicional em plantios de Khaya senegalensis (Desr.) A. Juss (2023-09-15)
Silva, Kamilo Alaboodi da; Silva, Emanuel Araújo; Hakamada, Rodrigo Eiji
http://lattes.cnpq.br/4186459700983170; http://lattes.cnpq.br/2765651276275384; http://lattes.cnpq.br/5612600854790108
The forest inventory helps forest managers make decisions. Installing, measuring, and managing a network of inventory plots is a costly and time-consuming activity. Remote sensing techniques are increasingly gaining ground in the forestry sector because they have the potential to reduce costs without loss of precision, but they are still not widely used due to their high cost. In this context, the Swedish company Katam Technologies has developed a solution for acquiring and analyzing forest data: KATAM Forest, based on the KASLAM algorithm, which has not yet been widely used or tested in Brazilian forests. The goal of this study was to compare, in terms of accuracy and operational performance, the application of the KASLAM artificial intelligence, through the KATAM Forest application, in forest inventory activities in five-year-old Khaya senegalensis (Desr.) A. Juss plantations located in the state of Pernambuco, against the sampling techniques of a traditional forest inventory.
Diameter at breast height (DBH) data was collected in 9 plots, along with videos for the app, recorded within the coordinates of the sampling units. Descriptive statistics were computed on the DBH data by plot, followed by the Shapiro-Wilk normality test; when its null hypothesis was rejected, the non-parametric Mann-Whitney U test was used to compare the means. Operational performance was assessed using the time recorded during the inventory process within the plots under both approaches. The DBH variable in the two inventory methodologies does not have a clear distribution concentrated close to the mean. The non-parametric test showed no statistically significant difference between the mean DBH values of the two methodologies at the 5% significance level. The Katam methodology took half the time of the traditional inventory. Katam technologies are very promising for reducing time and costs in forest inventory operations; further studies are therefore recommended so that the subject can be disseminated in a practical way.

Item: Uma abordagem baseada em aprendizado de máquina para dimensionamento de requisitos de software (2016-12-13)
Fernandes Neto, Eça da Rocha; Soares, Rodrigo Gabriel Ferreira
http://lattes.cnpq.br/2526739219416964; http://lattes.cnpq.br/6325583065151828
This work proposes to size software requirements automatically using a machine learning approach. The database used is real and was obtained from a company whose development process is based on Scrum and Planning Poker estimation. During the study, data pre-processing, classification, and selection of the best attributes were applied, together with the term frequency-inverse document frequency algorithm (tf-idf) and principal component analysis (PCA). Machine learning and automatic classification were performed with Support Vector Machines (SVM) using the available data history.
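The Mann-Whitney U statistic used in the KATAM comparison above can be computed from rank sums. A minimal pure-Python sketch without tie correction (real analyses would use a statistics package); the DBH samples are hypothetical:

```python
def mann_whitney_u(xs, ys):
    """U statistic via rank sums (no tie correction -- a simplification).
    U near len(xs) * len(ys) / 2 is consistent with similar distributions."""
    combined = sorted((v, src) for src, vals in ((0, xs), (1, ys)) for v in vals)
    # ranks start at 1; sum the ranks belonging to the first sample
    rank_sum_x = sum(rank for rank, (v, src) in enumerate(combined, 1) if src == 0)
    n1, n2 = len(xs), len(ys)
    u1 = rank_sum_x - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)

# Hypothetical DBH samples (cm): traditional inventory vs. the app
dbh_field = [12.1, 11.8, 12.5, 13.0, 11.5]
dbh_app = [12.0, 12.2, 12.4, 11.9, 12.8]
print(mann_whitney_u(dbh_field, dbh_app))  # 11.0, close to n1*n2/2 = 12.5
```

A U close to half of n1*n2, as here, is the kind of result behind the study's finding of no significant difference between the two methodologies.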
The final tests were performed with and without attribute selection by PCA. The results demonstrate that accuracy is higher when the best attributes are selected. The final tool can estimate the size of user stories with a generalization of up to 91%. The results were considered suitable for use in a production environment without problems for the development team.
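The tf-idf weighting named in the requirements-sizing study above can be sketched in plain Python before any classifier is involved. The issue-report snippets are hypothetical, and this is the textbook formulation, not necessarily the exact variant the study used:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Plain tf-idf: tf = count / len(doc), idf = log(N / df).
    Returns one {term: weight} dict per document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [
        {t: (c / len(doc)) * math.log(n / df[t]) for t, c in Counter(doc).items()}
        for doc in docs
    ]

# Hypothetical issue-report snippets, already tokenized
docs = [
    "login page throws error on submit".split(),
    "fix error in report export".split(),
    "add dark mode to settings page".split(),
]
weights = tf_idf(docs)
# 'error' appears in two docs, so it weighs less than 'login' (one doc)
print(weights[0]["error"] < weights[0]["login"])  # True
```

The resulting sparse vectors are what would feed an SVM classifier, optionally after PCA reduces their dimensionality as the study describes.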