Forgetting noisy labels: post-training mitigation via machine unlearning

Date

2025-08-06

Abstract

Noisy labels remain a critical challenge for training deep neural networks, often degrading generalization performance due to the memorization of incorrect labels. Approaches to mitigating label noise typically require complete retraining after the noisy samples have been identified, which can be computationally expensive, especially on large-scale datasets. This work investigates the use of Machine Unlearning (MU) to deal with noisy labels efficiently in post-training scenarios. We perform extensive experiments on synthetic and real-world noisy datasets, including CIFAR-10, CIFAR-100, and Food101-N, evaluating various noise types such as symmetric, asymmetric, instance-dependent, and open-set noise. Our results demonstrate that MU, particularly through the SalUn method, achieves accuracy comparable to full retraining while significantly reducing computation time. Furthermore, we analyze the impact of unlearning different fractions of the noisy samples, showing that partial unlearning can already lead to substantial improvements. These findings highlight the potential of Machine Unlearning as a practical and scalable solution for mitigating the effects of training with noisy labels.
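
As a rough illustration of the post-training setup described in the abstract (not the thesis's actual implementation), the sketch below shows saliency-masked unlearning in the spirit of SalUn applied to a set of samples flagged as noisy: parameters are ranked by the magnitude of the loss gradient on the forget set, and only the most salient fraction is updated while fine-tuning on randomly relabeled forget samples. All names and settings (model, forget_loader, num_classes, keep_ratio, the random-relabeling variant) are assumptions made for illustration.

    # Illustrative sketch only: saliency-masked unlearning of noisy-labeled
    # samples, assuming a trained PyTorch classifier and a DataLoader over
    # the samples identified as noisy (the "forget set").
    import torch
    import torch.nn.functional as F

    def weight_saliency_mask(model, forget_loader, device, keep_ratio=0.5):
        """Rank parameters by the magnitude of the loss gradient on the
        forget set and keep only the most salient fraction for updating."""
        model.to(device)
        model.zero_grad()
        for x, y in forget_loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(model(x), y)
            loss.backward()  # accumulate gradients over the forget set
        grads = torch.cat([p.grad.abs().flatten() for p in model.parameters()
                           if p.grad is not None])
        k = max(1, int(keep_ratio * grads.numel()))
        threshold = torch.topk(grads, k).values.min()
        masks = [(p.grad.abs() >= threshold).float() if p.grad is not None
                 else torch.zeros_like(p) for p in model.parameters()]
        model.zero_grad()
        return masks

    def unlearn_noisy_samples(model, forget_loader, num_classes, device,
                              epochs=5, lr=1e-3, keep_ratio=0.5):
        """Fine-tune only the salient weights on the forget set with random
        relabeling, so the model 'forgets' the memorized noisy labels."""
        masks = weight_saliency_mask(model, forget_loader, device, keep_ratio)
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        model.train()
        for _ in range(epochs):
            for x, y in forget_loader:
                x = x.to(device)
                # Replace the (noisy) labels with random ones.
                rand_y = torch.randint(0, num_classes, y.shape, device=device)
                opt.zero_grad()
                loss = F.cross_entropy(model(x), rand_y)
                loss.backward()
                # Restrict the update to the salient parameters only.
                for p, m in zip(model.parameters(), masks):
                    if p.grad is not None:
                        p.grad.mul_(m)
                opt.step()
        return model

In this hypothetical setup, unlearning touches only the samples flagged as noisy and a subset of the weights, which is why it can be far cheaper than retraining the model from scratch on the cleaned dataset.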

Reference

SANTANA, João Lucas Pinto de. Forgetting noisy labels: post-training mitigation via machine unlearning. 2025. 30 f. Trabalho de Conclusão de Curso (Bacharelado em Ciência da Computação) – Departamento de Computação, Universidade Federal Rural de Pernambuco, Recife, 2025.

Creative Commons License

Except where otherwise noted, the license of this item is described as openAccess