Bacharelado em Ciência da Computação (Sede)
Permanent URI for this community: https://arandu.ufrpe.br/handle/123456789/6
Collection acronyms:
APP - Artigo Publicado em Periódico
TAE - Trabalho Apresentado em Evento
TCC - Trabalho de Conclusão de Curso
Search Results
6 results
Item: Um estudo de caso para previsão de partidas de futebol utilizando o ChatGPT (2024-10-01)
Authors: Silva, Thiago Luiz Barbosa da; Nascimento, Leandro Marques do
Lattes: http://lattes.cnpq.br/9163931285515006
Abstract: The present study aims to develop and test a tool for predicting football match outcomes using the ChatGPT language model. The research explores the potential of this technology to process match data and generate predictions, comparing its performance with the probabilities offered by betting houses. The method includes data collection through web scraping from sources such as Placar de Futebol and FBref, which allowed the creation of a rich database with detailed information about teams, championships and statistics. From this database, the tool was developed within the Arena Sport Club project, which includes features for visualizing results and football-related information. Different prompt-generation strategies were implemented in the tool to determine the best way to instruct the model to provide accurate predictions. The results showed that the model has the potential to make effective football match predictions, approaching the accuracy rates of the betting houses. However, the study identified challenges such as high financial costs and the need for continuous adjustments to address the complexity of the matches and the variables involved. The conclusion suggests that, while ChatGPT offers a promising tool for sports predictions, its use in real-world contexts still needs to be optimized. Future research can enhance the application of this technology, reducing costs and improving accuracy over time.

Item: Geração automática de sistemas backend com o suporte de IA generativa seguindo a arquitetura limpa (2024-03-06)
Authors: Costa, Henrique Sabino da; Burégio, Vanilson André de Arruda
Lattes: http://lattes.cnpq.br/3518416272921878; http://lattes.cnpq.br/5381537544189009
Abstract: In this work, we investigated the potential contribution of automatic code synthesis technologies, particularly OpenAI's GPT-4, to the maintenance of and adherence to best practices in software architecture in startups. Because these companies operate in environments of rapid change and innovation, but with limited resources, practices such as unit testing and documentation are often neglected. Conversely, we emphasize the importance of such practices for their contribution to the maintainability and scalability of applications. As a means to reconcile the fast pace of development with the need for good practices, we proposed the use of generative language models (GLMs), specifically GPT-4, for code generation following the principles of clean architecture, a set of concepts defined by Robert C. Martin for developing scalable and maintainable projects. The methodological approach combined qualitative and quantitative analysis, focused on the exploration and adaptation of prompts for code generation and on the development of practical examples in several programming languages. Three projects, in C#, JavaScript, and Python, were produced and evaluated according to metrics of abstraction, instability, and adherence to the Main Sequence, key concepts in maintaining a clean architecture. The results indicated that, despite the potential of the proposed technology to accelerate development and promote adherence to good practices through automation, there are significant gaps in GPT-4's ability to generate code that is fully aligned with clean architecture and executable without manual intervention. Problems related to inconsistency in the project structure and to the integrity of the generated code were observed, suggesting that, while the tool offers a promising foundation for improving efficiency in less complex projects, its applicability in complex and diverse contexts still presents challenges. It is therefore concluded that the use of GLMs like GPT-4 for automatic code generation represents a valuable auxiliary tool for startups in software development. However, the need for manual adjustments to the code and for ensuring full adherence to recommended software architecture practices reinforces the idea that such technologies should be seen as complementary to human work and not as complete substitutes. For future work, it is recommended to investigate GLMs specialized in code generation more deeply and to expand the experiments to a wider range of programming languages and frameworks, aiming to maximize the applicability and effectiveness of this approach.
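The second abstract above evaluates the generated projects with Robert C. Martin's component metrics (abstractness, instability and distance from the Main Sequence). As a reference for those metrics only, a minimal Python sketch of the standard formulas follows; the function names and the example counts are illustrative and are not taken from the thesis itself.

def instability(fan_out: int, fan_in: int) -> float:
    # I = Ce / (Ca + Ce): outgoing dependencies over all dependencies of a component.
    total = fan_in + fan_out
    return fan_out / total if total else 0.0

def abstractness(abstract_types: int, total_types: int) -> float:
    # A = Na / Nc: abstract classes and interfaces over all types in the component.
    return abstract_types / total_types if total_types else 0.0

def main_sequence_distance(a: float, i: float) -> float:
    # D = |A + I - 1|: zero means the component sits on the Main Sequence.
    return abs(a + i - 1.0)

# Hypothetical component: 3 outgoing and 6 incoming dependencies, 4 abstract types out of 10.
i = instability(fan_out=3, fan_in=6)
a = abstractness(abstract_types=4, total_types=10)
print(f"I={i:.2f}  A={a:.2f}  D={main_sequence_distance(a, i):.2f}")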
Item: Coh-Metrix PT-BR: uma API web de análise textual para à educação (2021-03-02)
Authors: Salhab, Raissa Camelo; Mello, Rafael Ferreira Leite de
Lattes: http://lattes.cnpq.br/6190254569597745; http://lattes.cnpq.br/6761163457130594
Abstract: Coh-Metrix is a computational system that provides different measures of textual analysis, including readability, coherence and textual cohesion. These measures allow a more in-depth analysis of different types of educational texts, such as essays, answers to open questions and messages in educational forums. This paper describes the features of a prototype, comprising a website and an API, of a Brazilian Portuguese version of the Coh-Metrix measures.

Item: Avaliação de algoritmos baseados em Deep Learning para Localizar placas veiculares brasileiras em ambientes complexos (2019)
Authors: Marques, Bruno Henrique Pereira; Macário Filho, Valmir
Lattes: http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/3847789259699701
Abstract: With the increase in the number of private vehicles, there has also been an increase in the number of traffic law violations and vehicle thefts, so better traffic management and control are necessary. A vehicle and its owner are identified through the unique, mandatory vehicle license plate (LP), and in order to inspect plates and extract their data more efficiently, the use of automated systems for detecting and recognizing vehicle license plates is recommended. This work presents a study and evaluation of Deep Learning based algorithms for locating Brazilian LPs in complex environments. To carry out the experiments, a database of images of Brazilian LPs was created, covering problems such as images with different resolutions, quality, lighting and scene perspectives. The Deep Learning algorithms YOLOv2 and YOLOv3 were used, a combination that, to the best of our knowledge, had not yet been studied for this task. In addition, the Tree-structured Parzen Estimator (TPE) algorithm was used to optimize hyperparameters and maximize the performance of the selected convolutional neural networks. For the evaluation, the following performance metrics were used: prediction time, Intersection over Union (IoU) and confidence rate. The experimental results demonstrate that YOLOv3 presented the better performance, detecting 99.3% of the vehicle license plates.
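The license plate detection study above reports results in terms of Intersection over Union (IoU). As a reference for that metric only, a minimal Python sketch of the standard computation follows; the box coordinates are hypothetical and this is not the evaluation code used in the work.

def iou(box_a, box_b):
    # Boxes are (x_min, y_min, x_max, y_max); IoU is the overlap area over the union area.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Hypothetical predicted vs. ground-truth license plate boxes; values near 1.0 indicate a good detection.
print(iou((48, 120, 180, 160), (50, 118, 178, 158)))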
Item: Estudo comparativo de técnicas de seleção de contextos em sistemas de recomendação de domínio cruzado sensivéis ao contexto (2018)
Authors: Brito, Victor Sales de; Silva, Douglas Véras e
Lattes: http://lattes.cnpq.br/2969243668455081; http://lattes.cnpq.br/0340874538265508
Abstract: There are several approaches to implementing a recommendation system, such as Cross-Domain Context-Aware Recommendation Systems (CD-CARS), the approach used in this work because it can improve the quality of recommendations by using multiple domains (e.g. books, movies and music) while taking the use of contexts (e.g. season, time, company and location) into account. However, caution is needed when using contexts to suggest items, since contexts considered "irrelevant" may impair recommendation performance. Therefore, the selection of relevant contexts is a key factor in the development of CD-CARS, and there is a lack of work on selection techniques for cross-domain datasets with contextual information. Thus, this work applied the Information Gain (IG), Chi-square test, Minimum Redundancy Maximum Relevance (MRMR) and Monte Carlo Feature Selection (MCFS) techniques to twelve datasets with three different contextual dimensions (time, location and company) and distinct domains (books, television and music). From the results obtained, the MCFS technique was able to rank the relevance of the contexts more satisfactorily than the other techniques.

Item: Classificação de banhistas na faixa segura de praia (2018)
Authors: Silva, Ricardo Luna da; Macário Filho, Valmir
Lattes: http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/3088880066515750
Abstract: In order to avoid risks in aquatic environments, such as drownings and shark attacks, beach areas should be constantly monitored and, when necessary, rescue workers must respond quickly. This work proposes a classification algorithm for people as part of a system for automatic monitoring of beach areas. Certain environmental factors are quite challenging, such as varying brightness on cloudy days, the position of the sun at different times of the day, difficulty in segmenting the images, submerged people, and people far from the camera. For this type of problem, the literature commonly combines image descriptors with a classifier for people detection. This work studies beach images using the following image descriptors and their pairwise combinations: Hu Moments, Zernike Moments, Gabor filters, Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP) and Haar features. In addition, a dimensionality reduction technique (PCA) is applied for feature selection. The detection rate is evaluated with the following classifiers: Random Forest, cascade classifier and Support Vector Machine (SVM) with linear and radial kernels. The experiments demonstrated that the SVM classifier with a radial kernel, using the HOG and LBP descriptors and applying the PCA technique, showed promising results, obtaining 90.31% accuracy.
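The last abstract above combines hand-crafted descriptors (HOG and LBP), PCA and an RBF-kernel SVM. The Python sketch below, using scikit-image and scikit-learn, illustrates that kind of pipeline under stated assumptions: the random images, labels and parameter values are placeholders, not the dataset or configuration used in the thesis.

import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def describe(image):
    # Concatenate HOG features and a uniform-LBP histogram for one grayscale image.
    hog_feat = hog(image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_feat, lbp_hist])

# Placeholder data: 40 random 64x64 grayscale crops with binary labels (person / not person).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.array([describe(img) for img in images])
model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
model.fit(X, labels)
print(model.score(X, labels))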