
Calibration of Deep Neural Networks

Project ID: 7428-1-24
Year: 2024
Student/s: Yonatan Leibovich, Avichay Ashur
Supervisor/s: Yair Moshe

Deep Neural Networks (DNNs) are learned functions consisting of multiple layers between the input and output layers. These layers are made up of neurons connected to one another, each transmitting information from its input to its output. A widespread use of DNNs is to classify complex data by learning from a set of labeled examples. It has been shown that DNNs suffer from miscalibration, i.e., a misalignment between predicted probabilities and actual outcomes. For example, if we have 100 samples that the DNN is 90% confident about, we expect the network to correctly classify 90 of these samples and misclassify the remaining 10. The objective of calibrating a neural network is to create this alignment.

This project investigates the phenomenon of miscalibration in DNNs when applied to audio datasets. We examined several different architectures for audio classification on several audio datasets. The results showed, to the best of our knowledge for the first time, that miscalibration also occurs in audio classification, and that methods for mitigating the problem in image classification also work in the case of audio. In addition, we performed a collection of tests that present the problem in detail and allow for a deeper understanding of it.
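The 90%-confidence example above is the intuition behind a common calibration metric, the Expected Calibration Error (ECE): predictions are grouped into confidence bins, and the gap between each bin's average confidence and its actual accuracy is averaged, weighted by bin size. The project itself does not specify which metric it used; the following is a minimal illustrative sketch of ECE, with hypothetical example data.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average gap between mean confidence and accuracy
    within equal-width confidence bins over (0, 1]."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # half-open bins (lo, hi]; a confidence of exactly 0 is ignored
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        accuracy = correct[in_bin].mean()       # fraction actually correct
        avg_conf = confidences[in_bin].mean()   # average predicted confidence
        ece += (in_bin.sum() / n) * abs(accuracy - avg_conf)
    return ece

# Hypothetical data matching the example in the text: 100 samples, all
# predicted with 90% confidence, of which 90 are classified correctly.
# This is perfectly calibrated, so the ECE is (numerically) zero.
conf = np.full(100, 0.9)
hits = np.array([1] * 90 + [0] * 10)
print(expected_calibration_error(conf, hits))
```

A miscalibrated (e.g., overconfident) model would show a large gap in the high-confidence bins, which is exactly the misalignment that calibration methods aim to remove.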

Poster for Calibration of Deep Neural Networks