TCC - Bacharelado em Ciência da Computação (Sede)

Permanent URI for this collection: https://arandu.ufrpe.br/handle/123456789/415

Search Results

Now showing 1 - 9 of 9
  • Item
    A reinforcement learning curriculum for the “Run to Score with Keeper” scenario of the Google Research Football Environment
    (2019-12-10) Silva, Jonatan Washington Pereira da; Sampaio, Pablo Azevedo; Macário Filho, Valmir; http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/8865836949700771; http://lattes.cnpq.br/6846637095187550
    Reinforcement learning is a group of techniques that allow an agent to interact with a particular environment. The agent observes the state of the environment and performs an action, which is evaluated through the reward obtained; the agent's objective is to maximize this reward. Problems as varied as three-dimensional locomotion and electronic games have been addressed with reinforcement learning (KURACH et al., 2019). Training agents for a soccer game usually involves sparse rewards, which slows learning (MATIISEN et al., 2019). One technique that can overcome this obstacle is curriculum learning, proposed in (BENGIO et al., 2009): the agent is first trained on simplified versions of the main task, and the difficulty level is increased over time. In this work we present two curricula, identified as 5-15-30-50 and 3-10-20-67, for the Run To Score With Keeper scenario of the Football Academy. We show that, on average, the curricula achieved better results than training only in the main scenario, without a curriculum. Curriculum 3-10-20-67 achieved a better result even when the standard deviation is taken into account.
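    A minimal sketch of the staged training loop implied by such a curriculum is given below. The interpretation of the 5-15-30-50 label as fractions of the total training budget, the intermediate Football Academy scenario names, and the placeholder train_on_scenario function (standing in for the actual RL training call on a gfootball environment) are all assumptions for illustration.

        # Hedged sketch of a staged curriculum driver for curriculum learning.
        from dataclasses import dataclass

        @dataclass
        class Stage:
            scenario: str    # Google Research Football scenario name (assumed)
            fraction: float  # share of the total training budget spent in this stage

        # Assumed reading of "5-15-30-50": 5%, 15%, 30% and 50% of the budget,
        # moving from easier scenarios toward the target one.
        CURRICULUM_5_15_30_50 = [
            Stage("academy_empty_goal_close", 0.05),
            Stage("academy_empty_goal", 0.15),
            Stage("academy_run_to_score", 0.30),
            Stage("academy_run_to_score_with_keeper", 0.50),  # target scenario
        ]

        def train_on_scenario(agent, scenario: str, steps: int) -> None:
            """Placeholder for the actual RL training call (e.g. PPO on a gfootball env)."""
            print(f"training {steps} steps on {scenario}")

        def run_curriculum(agent, curriculum, total_steps: int = 1_000_000) -> None:
            # Hand the agent progressively harder scenarios, reusing the same policy
            # so experience gathered in easier stages transfers to the target task.
            for stage in curriculum:
                train_on_scenario(agent, stage.scenario, int(total_steps * stage.fraction))

        run_curriculum(agent=None, curriculum=CURRICULUM_5_15_30_50)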
  • Item
    Graph embeddings for node classification in a graph-based representation of natural language sentences
    (2019) Silva, João Marcos Nascimento da; Lima, Rinaldo José de; http://lattes.cnpq.br/7645118086647340; http://lattes.cnpq.br/5276914899067852
    Due to the large amount of work developed in the biomedical field and the availability of huge databases on biomedical entities, including proteins, genes and viruses, comes the need to automatically index such human knowledge bases. This need has led to the development of computational tools that assist researchers in retrieving specific information involving certain proteins and their relations. In this context, two of the most investigated problems in the biomedical area involving Text Mining techniques are Named Entity Recognition (NER) and Relation Extraction. This work focuses on the first problem, which serves as a basis for the second: first we have to identify and classify the entities and then, with the identified and classified entities, identify the relations between them, if any. The approach adopted here is based on recent techniques of supervised/unsupervised learning with deep neural networks, or Deep Learning (DL). In particular, the NER problem is investigated using recent dense feature representation techniques based on DL. First, the sentences of a biomedical corpus are represented as graphs thanks to annotations (metadata) generated automatically by natural language processing tools, such as tokenization, syntactic parsing, etc. These graphs are then imported into a graph-based database so that queries submitted to this database can be optimized to extract both lexical and syntactic attributes (or features) of the entities (or nodes) present in the graphs. The information generated in the previous step is used as input to Deep-Learning-based algorithms called Graph Embeddings (GE), which map the representation of graph nodes (entities) into a dense vector representation (vectors of real numbers) with several properties of interest for this research. Finally, such dense feature representations are employed as input to supervised machine learning algorithms. This work presents an experimental study in which some of the existing GE algorithms are compared, along with several types of graph-based sentence representation, and their impact on the task of entity classification (NER), or node classification, is assessed. The experimental results are promising, reaching more than 90% accuracy in the best cases.
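    As a rough illustration of the pipeline described above, the sketch below builds a tiny sentence graph, derives dense node vectors from its structure, and trains a node classifier on top of them. The toy graph, the hypothetical entity labels, and the use of a spectral embedding (standing in for heavier graph-embedding methods such as node2vec) are assumptions, not the setup used in the work.

        # Hedged sketch: embed the nodes of a sentence graph, then classify them.
        import networkx as nx
        import numpy as np
        from sklearn.manifold import SpectralEmbedding
        from sklearn.linear_model import LogisticRegression

        # Toy "sentence graph": token nodes linked by adjacency/dependency-style edges.
        G = nx.Graph()
        G.add_edges_from([("BRCA1", "regulates"), ("regulates", "TP53"),
                          ("TP53", "in"), ("in", "cells"), ("BRCA1", "TP53")])
        nodes = list(G.nodes)

        # Hypothetical gold labels: 1 = biomedical entity, 0 = other token.
        labels = np.array([1, 0, 1, 0, 0])

        # Dense node vectors from the graph structure (spectral embedding of the adjacency).
        A = nx.to_numpy_array(G, nodelist=nodes)
        X = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(A)

        # Supervised node classification (NER as node classification) on the embeddings.
        clf = LogisticRegression().fit(X, labels)
        print(dict(zip(nodes, clf.predict(X))))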
  • Item
    Evaluation of neighborhood-based and knowledge-transfer collaborative filtering algorithms for CD-CARS
    (2019) Silva, Guilherme Melo da; Silva, Douglas Véras e; http://lattes.cnpq.br/2969243668455081; http://lattes.cnpq.br/7122596102314881
    Recommendation in scenarios lacking preferences expressed by users is an important limitation for Recommender Systems (RS). Because of this problem, research on cross-domain RS (CDRS) has gained relevance, and collaborative filtering (CF) is one of the most exploited techniques in this area. The CD-CARS system shows that the use of contextual information available in user preferences can improve neighborhood-based CF algorithms, a technique widely used in multi-domain CF. Although they provide accurate recommendations, some neighborhood-based algorithms, such as the one used in CD-CARS, can only exploit multiple domains when there is user overlap between domains, a non-trivial scenario in real databases. This work presents a comparative analysis of different recommendation algorithms involving collaborative filtering techniques. The CD-CARS NNUserNgbr-transClosure (neighborhood-based CF) and Tracer (transfer-learning-based CF) algorithms were used as the basis for the recommendation algorithms. In the experiments, the CF algorithms were integrated into the context-aware techniques addressed in CD-CARS, Contextual Pre-Filtering and Post-Filtering, and applied to two data sets, each formed by two auxiliary domains and one target domain, with and without overlap between domains. The MAE and RMSE performance metrics were used to evaluate the algorithms. The results showed that the Tracer algorithm outperformed the NNUserNgbr-transClosure algorithm in all experimental scenarios without user overlap, with and without the use of Contextual Pre-Filtering or Post-Filtering.
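    The sketch below illustrates the flavor of the neighborhood-based CF plus contextual pre-filtering techniques compared in this work. The toy rating matrix, the context tags, the cosine-similarity neighborhood, and the neighborhood size are illustrative assumptions; it is not the CD-CARS, NNUserNgbr-transClosure, or Tracer implementation.

        # Hedged sketch: user-based neighborhood CF after a contextual pre-filter.
        import numpy as np

        # rows = users, cols = items, 0 = unknown rating
        ratings = np.array([
            [5, 3, 0, 1],
            [4, 0, 0, 1],
            [1, 1, 0, 5],
            [0, 1, 5, 4],
        ], dtype=float)

        # Contextual pre-filtering: keep only ratings given in the target context.
        context = np.array([
            ["home", "home", "",     "trip"],
            ["home", "",     "",     "home"],
            ["trip", "home", "",     "home"],
            ["",     "home", "home", "home"],
        ])
        R = np.where(context == "home", ratings, 0.0)

        def predict(R, user, item, k=2):
            """Predict R[user, item] from the k most similar other users who rated it."""
            norms = np.linalg.norm(R, axis=1) + 1e-9
            sims = R @ R[user] / (norms * norms[user])        # cosine similarity
            raters = [u for u in range(len(R)) if u != user and R[u, item] > 0]
            top = sorted(raters, key=lambda u: sims[u])[-k:]  # k nearest neighbours
            if not top:
                return 0.0
            w = sims[top]
            return float(w @ R[top, item] / (np.abs(w).sum() + 1e-9))

        print(predict(R, user=1, item=1))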
  • Item
    An analysis of the impact of prior experience with computational thinking on students' programming performance in higher education
    (2019) Silva, Emanuel Leite Oliveira da; Falcão, Taciana Pontual da Rocha; http://lattes.cnpq.br/5706959249737319; http://lattes.cnpq.br/5886730483799524
    This work aims to study the effect of previous contact with Computational Thinking on students of higher education courses. Computational Thinking is a skill that aims to develop logical and algorithmic thinking on an ongoing, lifelong basis, helping people to solve personal and professional problems using techniques from computer science. According to research, more than 50% of students in computing courses drop out, and one of the main reasons is the difficulty in learning and assimilating basic and advanced programming concepts, which leads to demotivation. Thus, this work investigated the feasibility of using Computational Thinking to help students with programming learning difficulties. Two student profiles were identified, those who had contact with Computational Thinking before and after taking Programming, and questionnaires were applied to evaluate their perspectives on the discipline and its benefit, i.e., whether the use of Computational Thinking was productive or not. Two teachers from the UFRPE Computing degree program were also interviewed to examine their perspective on the effect of Computational Thinking on student performance, comparing students who had contact before and after taking Programming. From the students' perspective, the use of Computational Thinking assists their cognitive development, improving logical thinking, algorithmic thinking, and programming learning. The teachers believe that Computational Thinking cognitively prepares students for Programming, reducing the effort needed to assimilate the basics, and see this approach as an improvement for students.
  • Item
    Classification of texture images generated by recurrence plots for the problem of people suffering epileptic seizures
    (2019) Queiroz, Danielly de Moura Borba; Macário Filho, Valmir; http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/7461629772562910
    Epilepsy is a neurological condition characterized by the occurrence of recurrent epileptic seizures. These seizures are clinical manifestations of an abnormal discharge of neurons, the cells that make up the brain. Several characteristics make the early diagnosis of epilepsy a major challenge, even for the most experienced clinicians. As a medical aid, there are exams such as the electroencephalogram (EEG), represented by time series and widely used in the diagnosis of epilepsy. Time series are present in various areas of study, such as medicine, biology and economics, among others. Their plots expose hidden patterns in the data, patterns that can be turned into textures and exploited by texture extraction methods. In addition, there are several tools for extracting information from time series, one of which is the recurrence plot, currently used to reveal changes in otherwise hidden patterns. This work presents a study of texture descriptors and classifiers on images of healthy and epileptic subjects generated by recurrence plots. The texture descriptors used in this study were: Local Binary Patterns (LBP), Local Phase Quantization (LPQ) and a Gabor filter bank. To the best of our knowledge, no study has yet applied these descriptors to the recurrence-image dataset used in this work. The evaluation is performed through the average hit rate, precision, recall and f-measure obtained with the following classifiers: Random Forest and Support Vector Machine (SVM). The experiments showed that the SVM classifier with the LPQ descriptor produced promising results, obtaining an average of 92.1% for hit rate, recall and f-measure, and 92.26% for accuracy.
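    A compact sketch of the pipeline described above is shown below: a 1-D signal is turned into a recurrence-plot texture, described with LBP, and classified with an SVM. The synthetic signals, the recurrence threshold, and all descriptor and classifier parameters are illustrative assumptions rather than the settings used on the EEG data.

        # Hedged sketch: recurrence-plot texture -> LBP histogram -> SVM.
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def recurrence_plot(x, eps=0.1):
            """Binary recurrence matrix R[i, j] = 1 if |x[i] - x[j]| < eps."""
            d = np.abs(x[:, None] - x[None, :])
            return (d < eps).astype(np.uint8)

        def lbp_histogram(img, P=8, R=1):
            """Uniform-LBP histogram used as the texture feature vector."""
            codes = local_binary_pattern(img, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        # Synthetic stand-ins for "healthy" (label 0) and "seizure-like" (label 1) series.
        rng = np.random.default_rng(0)
        t = np.linspace(0, 4 * np.pi, 128)
        signals, labels = [], []
        for _ in range(20):
            signals.append(np.sin(t) + 0.1 * rng.normal(size=t.size)); labels.append(0)
            signals.append(np.sin(t) * rng.normal(size=t.size)); labels.append(1)

        X = np.array([lbp_histogram(recurrence_plot(s)) for s in signals])
        clf = SVC(kernel="rbf").fit(X, labels)
        print("training accuracy:", clf.score(X, labels))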
  • Item
    Integrated design of long-haul and metropolitan optical networks using computational intelligence algorithms: a case study for the state of Pernambuco
    (2017) Nascimento, Jorge Candeias do; Araújo, Danilo Ricardo Barbosa de; http://lattes.cnpq.br/2708354422178489; http://lattes.cnpq.br/8065833426856653
    Nowadays, several network technologies with different prices and characteristics are appearing on the market. A network topology project involves several metrics, which are used to evaluate the design: robustness metrics (which capture the network's ability to recover from a failure), blocking probability, and energy consumption, among others. The best way to optimize the infrastructure in a network design would be to use only the latest and most efficient technologies, even if they are more expensive. However, one of the metrics to be considered in this type of project is cost (the capital employed), so it is not always feasible to use the most expensive technologies on the market. Many technical decisions can help control the metrics of these projects, among them the network topology (the interconnection of links). Multiobjective evolutionary algorithms (algorithms inspired by the evolution of species) have been studied in the state of the art for the design of network topologies. At the same time, clustering algorithms (algorithms specialized in separating samples into groups) have been used in other kinds of network studies. This study aimed to apply computational intelligence algorithms to a network topology design, using the state of Pernambuco as a case study. In the first stage of the study, a clustering algorithm was used to divide the state into groups. The intention of this part of the work was to measure the coverage of the network relative to the whole state and thus ensure the completeness of the network. In addition, the clustering stage also proposed a cost control model by mixing different technologies (passive or active) depending on the function of each network segment. In the second stage, a multiobjective evolutionary algorithm was used to compose several network topologies serving the clusters created in the previous stage. This algorithm evolved the network topologies in order to improve four metrics: blocking probability, cost, energy consumption, and algebraic connectivity. The multiobjective algorithm was also implemented as a memetic algorithm, and, after a set of executions, its performance was compared with and without this alteration. The results of the first stage showed that the clustering techniques are efficient and well suited to the proposed goal, both in terms of network completeness and cost control. In the second stage, the multiobjective search, a quality indicator (hypervolume) showed that the algorithm improved in convergence and diversity with respect to the Pareto front when used in its new form as a memetic algorithm.
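    The sketch below illustrates only the multiobjective evaluation side of the topology design stage: candidate topologies are sampled and the non-dominated ones are kept with respect to two of the four metrics mentioned above (cost, here approximated by link count, and algebraic connectivity). The random candidate generator, the cost proxy, and the absence of the evolutionary and memetic operators are simplifying assumptions.

        # Hedged sketch: Pareto-filtering candidate topologies on cost vs. connectivity.
        import networkx as nx

        N_NODES, CANDIDATES = 10, 60

        def objectives(G):
            # Minimise cost (link count as a proxy) and maximise algebraic
            # connectivity (negated so both objectives are minimised).
            return (G.number_of_edges(), -nx.algebraic_connectivity(G))

        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        candidates = []
        for seed in range(CANDIDATES):
            G = nx.gnm_random_graph(N_NODES, 11 + seed % 8, seed=seed)
            if nx.is_connected(G):                      # keep only feasible topologies
                candidates.append((objectives(G), G))

        pareto = [(f, G) for f, G in candidates
                  if not any(dominates(g, f) for g, _ in candidates)]
        print(f"{len(pareto)} non-dominated topologies out of {len(candidates)}")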
  • Item
    Development of a fuzzy logic-based algorithm for lesion segmentation in digital mammography images
    (2018) Bezerra, Kallebe Felipe Pereira; Cordeiro, Filipe Rolim; http://lattes.cnpq.br/4807739914511076; http://lattes.cnpq.br/3067789764865525
    Breast cancer has been a growing problem for women around the world. According to the World Health Organization (WHO), it is the most common type of cancer among women, with increasing incidence, making it one of the most fatal types of cancer worldwide. In Brazil, it is the leading cause of cancer death among women, with 59,000 new cases in 2018 and an incidence of about 59.70 cases per 100,000 women. Several prevention methods have been developed, but one of the most effective methods for detecting lesions is diagnosis through digital mammography. However, interpreting a mammography can be a difficult task even for a specialist, since the analysis is affected by several factors, such as image quality, the radiologist's experience and the type of lesion: 12% to 30% of breast cancer cases are not detected because of poor mammography interpretation. The main objective of this work is the study and development of a tumor segmentation technique for mammography images using fuzzy logic. It aims to insert the fuzzy approach into the Random Walker algorithm, in order to propose a new solution for lesion segmentation. Finally, this work compares the results with state-of-the-art techniques. The database has 322 mammography images obtained from 161 patients, of which only 57 contain masses. Results showed that the proposed fuzzy Random Walker approach, used for mass segmentation, obtained better results than the classic Random Walker algorithm, besides decreasing the user effort in the algorithm's initialization step.
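    One plausible way to combine fuzzy memberships with the classic Random Walker is sketched below: intensity-based membership functions produce confident seed pixels, and the random walker propagates those labels to the remaining (fuzzy) pixels. This is only an assumed reading of the combination, not the algorithm developed in the work, and the synthetic image, membership ramps, and thresholds are all illustrative.

        # Hedged sketch: fuzzy-membership seeds + classic random walker segmentation.
        import numpy as np
        from skimage.segmentation import random_walker

        # Synthetic "mammography-like" image: a bright blob (mass) on a noisy background.
        rng = np.random.default_rng(1)
        yy, xx = np.mgrid[0:128, 0:128]
        image = 0.25 + 0.15 * rng.normal(size=(128, 128))
        image += 0.6 * np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 12.0 ** 2))

        # Fuzzy membership of each pixel to "mass" and "background" (simple intensity
        # ramps); only pixels with very confident membership become seeds.
        mass_mu = np.clip((image - 0.4) / 0.4, 0, 1)
        bg_mu = 1.0 - mass_mu
        seeds = np.zeros(image.shape, dtype=int)
        seeds[mass_mu > 0.9] = 1   # confident mass seeds
        seeds[bg_mu > 0.9] = 2     # confident background seeds

        # The random walker resolves the unlabeled (fuzzy) pixels from the seeds.
        segmentation = random_walker(image, seeds, beta=100)
        print("mass pixels:", int((segmentation == 1).sum()))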
  • Item
    Segmentation of bathers in coastal regions using clustering algorithms with automatic selection of the number of groups
    (2019) Moura, Allan Alves de; Macário Filho, Valmir; http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/3319938637009294
    The increasing number of shark attacks has been frightening people who live in coastal areas, making it impossible to bathe in certain places. In an attack situation, action to save the victim's life is usually taken only after the incident has already occurred, when a lifeguard tries to help the victim. An auxiliary tool for lifeguards was conceived to mitigate these events and allow lifeguards to act before the incident happens, alerting the professional if someone tries to cross a delimited zone. The first step toward this auxiliary tool is the segmentation of beach images in search of regions that share visual similarities, in order to find people inside the sea. Therefore, the objective of this work is to study and find a good image segmentation algorithm capable of automatically selecting the best number of groups, without the need for parameter tuning. The selected algorithm will be used to implement the first phase of the lifeguard auxiliary tool, searching for image regions that represent bathers. Image pre-processing techniques such as beach removal were evaluated, as well as the choice of feature vectors used to compare elements. The combinations of algorithms and feature vectors were evaluated with and without beach removal. The analyzed algorithms were: hierarchical agglomerative, hierarchical divisive, X-means, automatic group segmentation and automatic color image segmentation. All of them were applied to three different feature vectors composed of the RGB (red, green and blue) color space, LAB, and the combination RGB + LAB. The most promising result, taking into account the visual analysis and the analysis of the algorithms' behavior, was obtained by the automatic color image segmentation with the combined RGB + LAB feature vector, with a Dunn's index value of 1.5245, using beach removal as a post-processing technique.
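    The clustering step with automatic selection of the number of groups can be sketched as below: per-pixel RGB + LAB features are clustered for several candidate values of k and the best k is picked by an internal validity index. The synthetic beach-like image, the range of k, and the use of the silhouette score (standing in for the Dunn index reported in the work) are illustrative assumptions.

        # Hedged sketch: pixel clustering with automatic choice of the number of groups.
        import numpy as np
        from skimage.color import rgb2lab
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        # Synthetic beach-like image: blue "sea", yellow "sand", and a dark "bather".
        img = np.zeros((40, 60, 3))
        img[:25] = (0.2, 0.4, 0.8)           # sea
        img[25:] = (0.9, 0.8, 0.5)           # sand
        img[10:14, 28:32] = (0.3, 0.2, 0.2)  # bather
        img = np.clip(img + 0.02 * np.random.default_rng(0).normal(size=img.shape), 0, 1)

        # Per-pixel feature vector: RGB concatenated with LAB.
        features = np.concatenate([img.reshape(-1, 3), rgb2lab(img).reshape(-1, 3)], axis=1)

        best_k, best_score, best_labels = None, -1.0, None
        for k in range(2, 6):                # candidate numbers of groups
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
            score = silhouette_score(features, labels, sample_size=1000, random_state=0)
            if score > best_score:
                best_k, best_score, best_labels = k, score, labels

        print(f"selected k = {best_k} (silhouette = {best_score:.3f})")
        segmentation = best_labels.reshape(img.shape[:2])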
  • Item
    Classification of bathers in the safe beach zone
    (2018) Silva, Ricardo Luna da; Macário Filho, Valmir; http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/3088880066515750
    In order to avoid risks in aquatic environments, such as drownings and shark attacks, beach areas should be constantly monitored, and, when necessary, rescue workers must respond quickly. This work proposes a classification algorithm for people as part of a system for automatic monitoring of beach areas. Certain environmental factors are quite challenging, such as varying brightness on cloudy days, the position of the sun at different times of the day, difficulty in segmenting the images, submerged people, and positions far from the camera. For this type of problem, the literature commonly combines image descriptors with a classifier for people detection. This work carries out a study on beach images using the following image descriptors and their pairwise combinations: Hu moments, Zernike moments, Gabor filters, Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP) and Haar. In addition, a dimensionality reduction technique (PCA) is applied for feature selection. The detection rate is evaluated with the following classifiers: Random Forest, cascade classifier, and Support Vector Machine (SVM) with linear and radial kernels. The experiments demonstrated that the SVM classifier with a radial kernel, using the HOG and LBP descriptors and the PCA technique, showed promising results, obtaining 90.31% accuracy.
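    A minimal sketch of the best-performing combination reported above (HOG + LBP descriptors, PCA for dimensionality reduction, and an RBF-kernel SVM) follows. The synthetic image patches and every parameter value are illustrative assumptions, not the dataset or settings used in the work.

        # Hedged sketch: HOG + LBP features -> PCA -> RBF SVM for bather classification.
        import numpy as np
        from skimage.feature import hog, local_binary_pattern
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def describe(patch):
            """Concatenate HOG and a uniform-LBP histogram for one grayscale patch."""
            h = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            codes = local_binary_pattern(patch, P=8, R=1, method="uniform")
            lbp_hist, _ = np.histogram(codes, bins=10, range=(0, 10), density=True)
            return np.concatenate([h, lbp_hist])

        # Synthetic 32x32 patches: "bather" patches contain a brighter vertical blob.
        rng = np.random.default_rng(0)
        X, y = [], []
        for _ in range(40):
            bg = rng.normal(0.4, 0.05, (32, 32))
            X.append(describe(bg)); y.append(0)           # water only
            person = bg.copy(); person[8:26, 13:19] += 0.5
            X.append(describe(person)); y.append(1)       # bather present

        model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
        model.fit(np.array(X), y)
        print("training accuracy:", model.score(np.array(X), y))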