Airfield surveillance radars (ASR) face the growing challenge of both detecting and classifying non-cooperative airborne targets. Drones are smaller, more diverse, and often operate amongst clutter near the horizon. Traditional radar signal processing is aimed at detecting larger cooperative aircraft that announce their identity and fit into distinctive categories; it relies on banks of linear time-invariant processing and thus neglects any nonlinear relationships that may exist between the reflected signal and the detected object's properties. Here, we leverage recurrent neural networks (RNNs), such as the long short-term memory (LSTM) network, to learn from sequences of radar data and to generate nonlinear output features from which target classes can be learned. To date, deep learning has not been fully investigated for object classification with ASR. We show that a novel RNN architecture, combined with a normalised representation of the analytic radar signal, can perform this classification task. We found that an LSTM layer can discover features from the short time span of each scan of a target. By concatenating these features into a new sequence describing the track over multiple scans, an additional LSTM layer can learn to classify objects. Training of the network was improved by fitting it to multiple output labels that describe the object. This shows that neural networks can approximate linear radar processing while also performing the nonlinear processing needed to derive the overall classification. Responses from this architecture demonstrate the ability of deep learning to perform object classification using airfield surveillance radar data, and the proposed method can serve as a starting point for exploring how explainable the responses and the trained model are.
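The abstract outlines a two-level recurrent architecture: one LSTM layer extracts features from the samples within a single scan of a target, and a second LSTM layer classifies the track formed by concatenating those per-scan features, with training against multiple descriptive output labels. The sketch below is only an illustrative reconstruction of that idea, not the authors' implementation; the framework (PyTorch), the layer sizes, the auxiliary label head, and the input layout (per-scan samples of a normalised analytic signal split into real/imaginary channels) are all assumptions.

```python
# Illustrative sketch (assumed PyTorch implementation, hypothetical sizes/names):
# scan-level LSTM -> per-scan features -> track-level LSTM -> class + auxiliary labels.
import torch
import torch.nn as nn


class TwoLevelLSTMClassifier(nn.Module):
    def __init__(self, n_channels=2, scan_hidden=32, track_hidden=64,
                 n_classes=4, n_aux_labels=3):
        super().__init__()
        # First LSTM runs over the samples of a single scan of the target.
        self.scan_lstm = nn.LSTM(n_channels, scan_hidden, batch_first=True)
        # Second LSTM runs over the sequence of per-scan features (the track).
        self.track_lstm = nn.LSTM(scan_hidden, track_hidden, batch_first=True)
        # Main class head plus an auxiliary head, reflecting training against
        # multiple output labels that describe the object.
        self.class_head = nn.Linear(track_hidden, n_classes)
        self.aux_head = nn.Linear(track_hidden, n_aux_labels)

    def forward(self, x):
        # x: (batch, n_scans, samples_per_scan, n_channels), e.g. normalised
        # analytic-signal samples as real/imaginary channels (assumed layout).
        b, n_scans, n_samples, n_ch = x.shape
        scans = x.reshape(b * n_scans, n_samples, n_ch)
        _, (h_scan, _) = self.scan_lstm(scans)              # (1, b*n_scans, scan_hidden)
        scan_features = h_scan[-1].reshape(b, n_scans, -1)  # per-scan feature sequence
        _, (h_track, _) = self.track_lstm(scan_features)
        track_feature = h_track[-1]                         # (batch, track_hidden)
        return self.class_head(track_feature), self.aux_head(track_feature)


# Example use with random data: 8 tracks, 10 scans each, 64 samples per scan.
model = TwoLevelLSTMClassifier()
class_logits, aux_logits = model(torch.randn(8, 10, 64, 2))
```

In this reading, the scan-level LSTM plays the role of the conventional per-scan filtering stage, while the track-level LSTM and the auxiliary label head supply the nonlinear, multi-label classification step described in the abstract.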
Deep Learning for Radar Classification
04.06.2024