01.1 - Graduação (Sede)
Permanent URI for this community: https://arandu.ufrpe.br/handle/123456789/2
Search Results
5 results
Item: Semantic segmentation for people detection on beach images (2021-03-01)
Monte, Leonardo de Araujo; Macário Filho, Valmir
http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/0547792731866043
Camera monitoring is increasingly aided by computer vision systems that identify risk situations. This work is part of an automatic tracking system for monitoring beaches in the metropolitan area of Recife, intended to prevent bathers from crossing the boundaries of the region that is safe for swimming. Semantic segmentation has gained strength in several computer vision tasks. Usually, the meta-architecture of a semantic segmentation network consists of two modules: an encoder (backbone) and a decoder. This work studies combinations of a set of semantic segmentation networks (Unet, Xnet, LinkNet, and Unet++) with the pretrained backbones VGG16 and VGG19 to detect swimmers in beach images. We used our own dataset, built from several images taken at Boa Viagem beach, Recife, Brazil. The algorithms are evaluated with the MIoU metric, both over the entire image scene and over the water area only. The best MIoU over the whole image was 80.87%, and the best MIoU for detecting swimmers at the beach was 85.56%, obtained by the LinkNet algorithm with both the VGG16 and VGG19 backbones.

Item: Representação virtual para segurança de espaços através de detecção de objetos, pessoas e suas relações (2022-10-07)
Torres, Lucas Amorim Vasconcelos; Simões, Francisco Paulo Magalhães
http://lattes.cnpq.br/4321649532287831; http://lattes.cnpq.br/8237338186784482
The detection of risk situations is something that improves year after year. This work presents the prototype of a system for monitoring the risk of accidents in industrial environments, based on tracking objects and people using computer vision. Visualization tools in virtual environments are used to detect collisions, checking when a large object is about to collide with people.
The central idea is to perform a spatial analysis of a tractor, or some similar vehicle, and of the people moving through the area. From this, visualization methods can be created so that the end user, whether a work-safety inspector or an Industry 4.0 system, can understand what is happening in the surroundings and the relationships between objects. Besides viewing distances, the prototype allows changing the distances considered safe between objects and people, making it possible to test different tool configurations as well.

Item: Avaliação de plataformas para o reconhecimento de placas veiculares brasileiras (2021-12-14)
Amaral, Carlos Ivan Santos do; Garrozi, Cícero
http://lattes.cnpq.br/0488054917286587; http://lattes.cnpq.br/8099840025648951
With the growing number of private vehicles in Brazil, better methods for managing and inspecting the vehicle fleet are becoming increasingly necessary. License plates (LP) are unique and mandatory objects whose purpose is to identify the vehicle as well as its owner. The efficient collection of license plate information is best performed by automated systems for LP detection and recognition. These systems are fundamental for the supervision and management of different activities related to vehicle traffic. In this regard, this paper presents a study of methods for LP detection and recognition based on machine learning and deep learning algorithms. For this experiment, we collected an image bank of vehicles at toll plazas located in the municipality of Cabo de Santo Agostinho - PE, which give access to the Governador Eraldo Gueiros Port Industrial Complex - SUAPE.
The objective of this work was to compare Microsoft Azure's computer vision service for LP detection, combined with Google Vision's Optical Character Recognition (OCR) service, against the YOLO v4 deep learning algorithm. The results showed that, under similar configuration conditions, YOLO v4 performed better, achieving a 92% precision rate in license plate detection and recognition.

Item: Comparação de algoritmos de reconhecimento de gestos aplicados à sinais estáticos de Libras (2019-07-12)
Cruz, Lisandra Sousa da; Cordeiro, Filipe Rolim; Macário Filho, Valmir
http://lattes.cnpq.br/4346898674852080; http://lattes.cnpq.br/4807739914511076; http://lattes.cnpq.br/2111589326272463
Brazilian Sign Language (BSL) was created to meet the need for non-verbal communication among the deaf, who for a long time were taught Brazilian Portuguese as their first language. Nowadays, BSL is Brazil's second official language and the first language of the deaf, just as Portuguese is for hearing people. Nevertheless, even with this broad recognition, Brazil's second official language is not known by most of the Brazilian population. The inclusion process aims to ensure equality for people with disabilities, so that a disability does not become an impediment to living in society. With the arrival of technology and the advances in Artificial Intelligence (AI), technological means have been created to support inclusion. Within AI, pattern recognition is one of the most widely studied subtopics today, and in the literature it is extensively applied to gesture classification for many sign languages. The key task of this research is to identify the hands that form a given BSL gesture and, from that, to recognize the class it belongs to.
In American Sign Language (ASL) classification, the Feature Fusion-based Convolutional Neural Network (FFCNN), an extension of the Convolutional Neural Network (CNN), obtained the best accuracy compared with other networks, such as the Visual Geometry Group (VGG) network. Based on this scenario, this work applies the FFCNN to static BSL gestures to verify whether it also achieves the best accuracy, as it did for ASL. To that end, this work compares three classifiers: the Visual Geometry Group (VGG) network, a CNN with 13- and 16-layer variants; the FFCNN; and a Multi-Layer Perceptron network used in the literature for recognizing static BSL gestures. The algorithms were applied to a BSL dataset with 9,600 images of 40 signs. The results show that the 16-layer VGG obtained the best accuracy among the models described in this work, corresponding to 99.45%.

Item: Estudo de viabilidade de sistemas de detecção de armamentos em tempo real em linhas de ônibus urbanos (2021-12-09)
Lima Junior, Cícero Pereira de; Silva, Douglas Véras e
http://lattes.cnpq.br/2969243668455081; http://lattes.cnpq.br/9901763283774954
Surveillance systems are fundamental for preventing armed robberies on public buses. However, operating these systems in real time demands an unrealistic number of people. The use of computer vision and deep learning techniques arises as a way to automate parts of, or even the whole of, the surveillance process, from weapon detection to alarm triggering. For this process to be carried out efficiently, allowing authorities to take more effective actions, the system must be able to handle a growing demand for security cameras. Thus, this work analyzes the viability of a weapon detection system for bus lines. Through simulation, it evaluated the performance of the YOLO algorithm, in its fourth version, on a client-server model under a growing security-camera demand.
The server consists of a Tesla V80 GPU with 12 GB of memory, a dual-core Intel Xeon processor, 61 GB of RAM, and 200 GB of disk space. Finally, the gathered results show that the application's detection time increases after 16 virtual users (cameras) are introduced, and that the average response time cannot be considered real-time in the context of bus security.
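Several of the items above report MIoU (mean Intersection over Union) as their segmentation quality measure. As a reference for how that figure is typically computed, here is a minimal sketch; the function name and the toy two-class masks are illustrative only and are not taken from any of the works listed:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes.

    pred, target: integer arrays of per-pixel class labels.
    Classes absent from both masks are skipped rather than counted as 0.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class appears in neither mask
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: class 0 = background, class 1 = swimmer.
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, 2))
```

Per class this gives IoU 1/2 (background) and 2/3 (swimmer), so the mean is 7/12; the papers above report the same quantity averaged over whole test images.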
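The industrial-safety prototype (second item) raises an alert when people come closer to a large tracked object than a configurable safe distance. A minimal sketch of that kind of proximity rule follows; the function name, coordinates, and threshold are hypothetical, not taken from the work itself:

```python
import math

def at_risk(vehicle_xy, people_xy, safe_distance):
    """Return the indices of people closer to the vehicle than safe_distance.

    vehicle_xy: (x, y) position of the tracked vehicle.
    people_xy: list of (x, y) positions of tracked people.
    """
    vx, vy = vehicle_xy
    return [i for i, (px, py) in enumerate(people_xy)
            if math.hypot(px - vx, py - vy) < safe_distance]

# Two of three people fall inside a 5-unit safety radius around the vehicle.
people = [(0.0, 2.0), (10.0, 10.0), (3.0, 0.0)]
print(at_risk((0.0, 0.0), people, 5.0))
```

Keeping the threshold as a parameter mirrors the prototype's ability to change the distances considered safe and test different configurations.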