Deep Spectral-Spatial Transformer for Robust Hyperspectral Image Segmentation in Varying Field Conditions
Abstract
Hyperspectral imaging serves as a powerful tool for environmental studies, enabling the capture of significant properties of the objects being analyzed. These properties are often not discernible through traditional RGB analysis, and capturing them enhances the distinguishability of target classes. Machine learning (ML) algorithms are widely utilized for processing hyperspectral data in tasks such as image classification and segmentation. Despite the rich hyperspectral feature space, ML-based approaches often struggle with the challenges posed by varying outdoor lighting conditions, leading to instability in the developed solutions. Furthermore, traditional segmentation methods designed for RGB images need to be adapted to ensure robust performance when applied to hyperspectral images (HSI). This work proposes a robust method for the classification of HSI in outdoor environments under varying lighting conditions. A calibration procedure for hyperspectral sensor sensitivity is proposed, improving the quality of the input data and, consequently, the performance of models on hyperspectral image classification tasks. The paper also proposes a novel architecture, the Deep Spectral Spatial Transformer (DSST), which is employed as the classifier. This architecture leverages a deeper feature extractor to ensure robustness under varying lighting conditions. Several data normalization strategies were also compared; among these, standardization within individual channels yielded the best results. The applicability of the proposed approach is validated on a weed detection task in an agricultural field. The data were collected using a pushbroom hyperspectral sensor. A series of comparative experiments was conducted on a range of classification algorithms, encompassing both traditional ML algorithms and contemporary neural network architectures. The proposed DSST architecture demonstrated superior performance, achieving an F1-score of 0.911 and an accuracy of 0.907.
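The per-channel standardization highlighted in the abstract can be illustrated with a minimal sketch. The function name, array shapes, and NumPy implementation below are illustrative assumptions rather than the authors' code; the idea is simply to bring each spectral band to zero mean and unit variance independently.

```python
import numpy as np

def standardize_per_channel(cube: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standardize a hyperspectral cube band-wise (zero mean, unit variance per channel).

    cube: array of shape (H, W, C), where C is the number of spectral bands.
    """
    mean = cube.mean(axis=(0, 1), keepdims=True)  # per-band mean over the spatial dimensions
    std = cube.std(axis=(0, 1), keepdims=True)    # per-band standard deviation
    return (cube - mean) / (std + eps)

# Example: a synthetic 64x64 scene with 200 spectral bands (hypothetical data)
hsi = np.random.rand(64, 64, 200).astype(np.float32)
hsi_norm = standardize_per_channel(hsi)
print(hsi_norm.mean(axis=(0, 1))[:3])  # approximately 0 for each band
print(hsi_norm.std(axis=(0, 1))[:3])   # approximately 1 for each band
```

Normalizing each band separately, rather than over the whole cube, prevents bands with large dynamic range from dominating the feature space, which is consistent with the abstract's claim that channel-wise standardization was the most effective preprocessing choice.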