Bacharelado em Sistemas de Informação (Sede)

Permanent URI for this community: https://arandu.ufrpe.br/handle/123456789/12


Collection acronyms:

APP - Article Published in a Journal (Artigo Publicado em Periódico)
TAE - Work Presented at an Event (Trabalho Apresentado em Evento)
TCC - Undergraduate Thesis (Trabalho de Conclusão de Curso)

Browse

Search Results

Now showing 1 - 2 of 2
  • Item
    Avaliação de plataformas para o reconhecimento de placas veiculares brasileiras
    (2021-12-14) Amaral, Carlos Ivan Santos do; Garrozi, Cícero; http://lattes.cnpq.br/0488054917286587; http://lattes.cnpq.br/8099840025648951
    With the growing number of private vehicles in Brazil, better methods for managing and inspecting the vehicle fleet are becoming increasingly necessary. License plates (LPs) are unique, mandatory objects that identify a vehicle and its owner. Efficient collection of license plate information is best performed by automated systems for LP detection and recognition; such systems are fundamental to the supervision and management of the many activities related to vehicle traffic. In this regard, this paper presents a study of methods for LP detection and recognition using algorithms based on machine learning and deep learning. For the experiment, we collected an image bank of vehicles at toll plazas located in the municipality of Cabo de Santo Agostinho - PE, which provide access to the Governador Eraldo Gueiros Port Industrial Complex - SUAPE. The objective of this work was to compare Microsoft Azure's computer vision service for LP detection, combined with Google Vision's Optical Character Recognition (OCR) service, against the YOLO v4 deep learning algorithm. The experiment showed that, under similar configuration conditions, YOLO v4 performed better, achieving 92% precision in license plate detection and recognition.
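    The 92% precision figure above implies matching each predicted plate box against the ground truth. A minimal sketch of how such a detection-precision evaluation is typically done; the IoU >= 0.5 matching criterion and the toy boxes are assumptions for illustration, not details from the abstract:

    ```python
    # Sketch: precision of license-plate detections via IoU matching.
    # Boxes are (x1, y1, x2, y2); the 0.5 IoU threshold is an assumption.

    def iou(a, b):
        """Intersection-over-union of two axis-aligned boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    def detection_precision(predictions, ground_truth, thresh=0.5):
        """Fraction of predicted boxes matching a not-yet-used true box."""
        unused = list(ground_truth)
        tp = 0
        for p in predictions:
            match = next((g for g in unused if iou(p, g) >= thresh), None)
            if match is not None:
                unused.remove(match)
                tp += 1
        return tp / len(predictions) if predictions else 0.0

    gt = [(10, 10, 110, 40)]                           # one real plate
    preds = [(12, 12, 108, 38), (200, 200, 260, 230)]  # one hit, one false alarm
    print(detection_precision(preds, gt))  # → 0.5
    ```

    Recall would be computed symmetrically (matched ground-truth boxes over all ground-truth boxes); the abstract reports precision only.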
  • Item
    Comparação de algoritmos de reconhecimento de gestos aplicados à sinais estáticos de Libras
    (2019-07-12) Cruz, Lisandra Sousa da; Cordeiro, Filipe Rolim; Macário Filho, Valmir; http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/4807739914511076; http://lattes.cnpq.br/2111589326272463
    Brazilian Sign Language (BSL, or Libras) was created to meet the need for non-verbal communication among deaf people, who for a long time were compelled to learn Brazilian Portuguese as their first language. Today, BSL is Brazil's second official language and the first language of the deaf community, just as Portuguese is for hearing people. Nevertheless, even with this official recognition, it is not known by the majority of the Brazilian population. The inclusion process aims to ensure equality for people with disabilities, so that a disability does not become an impediment to living in society. With the arrival of new technology and advances in Artificial Intelligence (AI), technological tools have been created to enable inclusion. Within AI, pattern recognition is one of the most studied subtopics today, and in the literature it is widely applied to gesture classification for many sign languages. The key task of this research is to identify the hands that form a given BSL gesture and thereby recognize the class to which it belongs. In American Sign Language (ASL) classification, the Feature Fusion-based Convolutional Neural Network (FFCNN), an extension of the Convolutional Neural Network (CNN), achieved the best accuracy compared with other networks such as the Visual Geometry Group (VGG) network. Based on this scenario, this work applies the FFCNN to static BSL gestures to verify whether it also obtains the best accuracy, as it did for ASL. To that end, this work compares three classifiers: the VGG network (a CNN in 13- and 16-layer variants), the FFCNN, and a Multilayer Perceptron used in the literature for recognition of static BSL gestures. The algorithms were applied to a BSL dataset with 9,600 images of 40 signals. The results show that the 16-layer VGG obtained the best accuracy among the models described in this work, 99.45%.
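    The 99.45% figure above is the top-1 classification accuracy over the held-out gesture images. A minimal sketch of that metric; only the 40-class, 9,600-image setup comes from the abstract, and the tiny score matrix below (abbreviated to 3 classes) is an illustrative assumption:

    ```python
    # Sketch: top-1 accuracy, the metric used to compare the classifiers.
    # Each row of `logits` holds one sample's per-class scores.

    def top1_accuracy(logits, labels):
        """Share of samples whose highest-scoring class equals the label."""
        correct = sum(
            max(range(len(row)), key=row.__getitem__) == y
            for row, y in zip(logits, labels)
        )
        return correct / len(labels)

    # Three samples over a hypothetical 3-class output (a real run would
    # use 40 columns, one per BSL signal).
    scores = [[0.1, 0.7, 0.2], [0.6, 0.3, 0.1], [0.2, 0.2, 0.6]]
    truth = [1, 0, 1]
    print(top1_accuracy(scores, truth))  # two of three predictions correct
    ```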