Project Details

Hyperspectral imaging (HSI) enables material identification using detailed spectral information, but its performance is strongly affected by illumination conditions, background variability, and nonlinear spectral mixing. A recent human-inspired framework combines physical modeling with machine learning and achieves robust chemical identification by learning across multiple hyperspectral scenes.
In this work, we build on these physics-guided ideas but study a different operating setting. Instead of training a model across many hyperspectral cubes, we propose a scene-adaptive framework that is trained on a single hyperspectral cube and operates at the pixel level. This setting is motivated by practical deployment constraints, where capturing and storing multiple hyperspectral cubes is time-consuming and requires significant memory and computational resources.
Physics-guided data augmentation is used to generate synthetic training samples directly from the scene using laboratory reference spectra and a nonlinear intimate mixing model. The proposed method uses a two-stage neural architecture, where a one-dimensional U-Net reconstruction network reduces background and illumination effects, followed by a classification network that identifies materials using both the original and reconstructed spectra.
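To make the augmentation step concrete, the sketch below illustrates one plausible form of physics-guided sample generation from laboratory reference spectra. The abstract does not specify the exact nonlinear intimate mixing model; this example assumes a bilinear formulation (linear abundance-weighted mixture plus pairwise interaction terms), with random illumination scaling and additive noise. The function name `augment_pixels` and all parameter ranges are illustrative, not the authors' implementation.

```python
import numpy as np

def augment_pixels(ref_spectra, n_samples, rng=None):
    """Generate synthetic mixed-pixel spectra from reference spectra.

    Sketch of a bilinear intimate-mixing model: abundance-weighted
    linear combination plus pairwise element-wise product terms,
    followed by random illumination scaling and sensor noise.
    """
    rng = np.random.default_rng(rng)
    ref = np.asarray(ref_spectra, dtype=float)  # (n_materials, n_bands)
    n_mat, n_bands = ref.shape
    out = np.empty((n_samples, n_bands))
    for k in range(n_samples):
        # Dirichlet draw enforces the sum-to-one abundance constraint
        a = rng.dirichlet(np.ones(n_mat))
        y = a @ ref  # linear mixing part
        # nonlinear pairwise interaction terms (assumed strengths)
        for i in range(n_mat):
            for j in range(i + 1, n_mat):
                b = rng.uniform(0.0, 0.5)
                y += b * a[i] * a[j] * ref[i] * ref[j]
        y *= rng.uniform(0.8, 1.2)           # illumination variation
        y += rng.normal(0.0, 0.01, n_bands)  # additive sensor noise
        out[k] = y
    return out
```

Each synthetic pixel carries a known material label (the dominant abundance), so the generated set can directly supervise both the reconstruction and classification networks.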
The method is evaluated on hyperspectral scenes containing thin contamination layers of polystyrene, silicon, and sugar on different background surfaces. Despite the very limited training data, the proposed approach achieves 90% pixel-level classification accuracy, with high precision and low false-alarm rates, and shows strong spatial consistency across the scene. These results demonstrate that accurate hyperspectral material identification is possible under strong data limitations, offering a practical and computationally efficient solution for cost-sensitive and time-constrained hyperspectral deployments.
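The pixel-level evaluation quantities mentioned above can be computed directly from the predicted and ground-truth label maps. The sketch below is a minimal illustration, assuming integer label maps and defining false-alarm rate as the fraction of non-target pixels mislabeled as the target material; the helper name `pixel_metrics` is hypothetical.

```python
import numpy as np

def pixel_metrics(pred, truth, target):
    """Pixel-level accuracy, precision, and false-alarm rate.

    pred, truth: integer label maps of the same shape.
    target: class index of the material of interest.
    """
    pred = np.asarray(pred).ravel()
    truth = np.asarray(truth).ravel()
    tp = np.sum((pred == target) & (truth == target))  # true positives
    fp = np.sum((pred == target) & (truth != target))  # false alarms
    accuracy = np.mean(pred == truth)
    precision = tp / max(tp + fp, 1)
    # fraction of non-target pixels wrongly flagged as the target
    false_alarm_rate = fp / max(np.sum(truth != target), 1)
    return accuracy, precision, false_alarm_rate
```

Computing these per material class (polystyrene, silicon, sugar) and inspecting the resulting classification map also reveals the spatial consistency the abstract refers to.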
