
Distribution Awareness for AI System Testing

Makestube
Published 07/13/23 / In People & Blogs

As Deep Learning (DL) is increasingly adopted in safety-critical applications, its quality and reliability have begun to raise concerns. As in the traditional software development process, testing DL software to uncover defects at an early stage is an effective way to reduce risks after deployment. Although recent progress has been made in designing novel testing techniques for DL software, the distribution of the generated test data is not taken into consideration. It is therefore hard to judge whether the identified errors are meaningful errors for the DL application. We propose a new out-of-distribution-guided (OOD-guided) testing technique that aims to generate unseen test cases relevant to the underlying DL system's task. Our results show that this technique filters out up to 55.44% of error test cases on CIFAR-10 and is 10.05% more effective in enhancing robustness.
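The core idea of distribution-aware filtering can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes the maximum softmax probability of the model's logits as a stand-in OOD score (the paper may use a different detector), and all names and thresholds here are hypothetical.

```python
import numpy as np

def max_softmax_score(logits):
    """Maximum softmax probability: a simple OOD proxy (higher = more in-distribution)."""
    z = logits - logits.max(axis=-1, keepdims=True)  # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def filter_in_distribution(test_cases, logits, threshold=0.5):
    """Keep only generated test cases whose score meets the threshold,
    discarding likely-OOD inputs whose errors may not be meaningful."""
    scores = max_softmax_score(logits)
    return [t for t, s in zip(test_cases, scores) if s >= threshold]

# Toy example: three generated test cases with hypothetical model logits.
cases = ["t1", "t2", "t3"]
logits = np.array([
    [4.0, 0.1, 0.1],   # confident prediction -> likely in-distribution
    [0.3, 0.3, 0.4],   # near-uniform -> likely OOD, filtered out
    [3.0, 0.2, 0.1],   # confident prediction
])
print(filter_in_distribution(cases, logits, threshold=0.6))
```

Running the toy example keeps only the two confidently predicted cases, mirroring how an OOD-guided tester would discard error-revealing inputs that fall outside the task's data distribution.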

David Berend (Nanyang Technological University, Singapore),

* IEEE Digital Library: https://www.computer.org/csdl/....proceedings-article/



Created with Clowdr: https://clowdr.org/

