The national airspace has evolved over many years to accommodate increased traffic demand [1] while simultaneously maintaining one of the safest forms of transportation [2], [3]. One reason for this success is the ability of the system and its operators to adapt to situations that routinely disrupt optimal operations, including adverse weather, delays, early arrivals, equipment outages, and other factors outside the operators' control. These factors can lead to states that automation cannot handle properly, so air traffic controllers and pilots must intervene, increasing communication between operators and ultimately raising workload. The additional controller workload needed to handle sub-optimal operating conditions can be viewed as an increase in complexity: humans must now make tactical decisions in response to external factors, departing from the strategic plan under which operations would be managed more efficiently.

Human operators manage airspace complexity under rigid regulations that are constantly changing. The airspace is divided into sectors, and the number of aircraft assigned to each controller is limited for safe handling. Past work has devised airspace complexity metrics for commercial aviation and related these metrics to controller workload (e.g., [4], [5]). The upper bounds on system load are pre-determined. Such bounds on complexity make for a safe system, but the system cannot scale and adapt to autonomous, dense, and heterogeneous traffic, including the many types of Unmanned Aerial Vehicles (UAVs) envisioned to be added to operations. We hypothesize that, as traffic density and heterogeneity grow and other key metrics change, there will be phase transitions at which the way traffic should be managed changes significantly [6]. We offer a method for in-time detection of the contributing factors that lead to these phase transitions, which are characterized by increased complexity. To the best of our knowledge, no existing tool identifies such contributing factors or precursor patterns.

To scope this work, we measure complexity from the perspective of a Terminal Radar Approach Control (TRACON) facility controller; in particular, we analyze arrivals into KSFO. With safety as the top concern for airspace operators, it is important to recognize that the focus of the system will shift as density and heterogeneity grow. At times of day when the airspace has low density and heterogeneity, flights follow efficient paths, moving along established routes that lead more or less directly to the destination. When density and heterogeneity increase, the system shifts its focus to avoiding conflicts and collisions and routes flights more flexibly. Higher flexibility requires more communication and coordination between controllers and pilots than the current automation can handle.

This paper proposes a novel approach that monitors airspace complexity at multiple scales, uses a machine-learning-based tool to predict when operations will transition to a regime of greater complexity, and identifies actions that can reduce complexity while still maintaining efficient and safe operations. We demonstrate our proposed approach using data from multiple complementary sources.
These sources include, but are not limited to: historical aircraft surveillance data from NASA's Sherlock Data Warehouse [7], METAR weather data, and airport configuration data from Aviation System Performance Metrics (ASPM). The surveillance flight tracks are sampled at a variable rate that increases as the aircraft approaches the airport, a consequence of how Sherlock stitches flight tracks across radar facilities with different sampling rates. The weather and performance data are logged at defined intervals throughout the day at a coarser refresh rate.

In addition to the logged data and metrics, we leverage pre-defined Standard Terminal Arrival Route (STAR) procedures to characterize the path of each flight. Each flight files one of these routes in its flight plan well before entering the terminal airspace and approximately follows the route until it leaves the STAR, typically at the final fix of a runway transition. Most flights do not fly the full STAR procedure to completion [8], but the majority adhere to the fixes within the common route of the procedure. Our approach uses the fixes in the common route of each STAR to build a reference path to the airport. This allows us to characterize flight paths in what we define as the “maneuvering area” (the airspace between the STAR and the point where the flight lines up on the runway's final approach), determine how far off nominal each flight is, and calculate its complexity score.

Airspace complexity is a concept without a single concrete definition. In designing this metric, we consider what increases workload for air traffic controllers; consequently, more specialized vectoring maneuvers result in higher workload. Accordingly, we start with a premise: each flight has a direct path from the STAR's common route to the outer marker fix on the final approach of the flight's landing runway. It is important to note that this direct path is used only as a reference. If the majority of flights on a route have a large, consistent offset compared to other routes, it does not necessarily mean those flights have higher complexity; we are merely building a distribution of offsets from the direct path for that particular STAR and runway pair to characterize the normal mode of operations on that route. Flights in the upper tail of these distributions receive higher complexity scores, while flights near the median represent the normal mode of operations and therefore receive lower complexity scores. Since flights following each STAR take different paths to the airport, we maintain a separate distribution for each STAR route and compute each flight's complexity score from its respective normalized distribution.
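A minimal sketch of this per-route scoring follows, assuming each reference path is available as a polyline of projected coordinates built from the common-route fixes and each surveillance track has been projected into the same plane; the function names, the nearest-vertex distance approximation, and the empirical-percentile scoring are simplifying assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def offset_from_reference(track_xy, reference_xy):
    """Mean lateral offset of one flight track from its STAR/runway reference
    path, both given as (N, 2) arrays of projected coordinates. Nearest-vertex
    distance stands in for true cross-track distance to keep the sketch short."""
    d = np.linalg.norm(track_xy[:, None, :] - reference_xy[None, :, :], axis=2)
    return d.min(axis=1).mean()

def complexity_scores(offsets_by_route):
    """Normalize each flight's offset within its own STAR/runway distribution.

    offsets_by_route: dict mapping (star, runway) -> 1-D array of per-flight
    offsets. Returns per-flight empirical-percentile scores in [0, 1], so a
    flight near the median of its route scores about 0.5 and a flight in the
    upper tail of its route's distribution scores close to 1."""
    scores = {}
    for route, offsets in offsets_by_route.items():
        ranks = offsets.argsort().argsort()               # rank within the route
        scores[route] = ranks / max(len(offsets) - 1, 1)  # empirical percentile
    return scores
```

Under this sketch, the per-route normalization mirrors the distributions described above: a route that is consistently offset is not penalized as a whole; only individual flights are scored relative to their own route.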
To evaluate the effectiveness of our proposed airspace complexity metric, we compare it against an established approach based on trajectory clustering [9]. This unsupervised learning technique consists of the following steps: (1) identify the general maneuvering areas (waypoints) by performing k-means or DBSCAN clustering on locations where aircraft frequently turn, based on the surveillance radar track data; (2) map flight trajectories onto sequences of waypoints; and (3) cluster the sequences based on their common subsequences. From a high-level perspective, this baseline model learns nominal operations in the airspace through sequences of waypoints that represent where aircraft change direction and defines deviations from the nominal operations as “complex”; more deviation from nominal operations corresponds to a higher complexity value. For our validation, we re-implemented this technique and tuned its hyper-parameters to correctly detect waypoints for the arrival traffic into the San Francisco Bay Area. We compute the complexity measure over a one-year period using both our proposed technique and the baseline, and base the validation on each technique's ability to detect a set of undesirable outcomes (e.g., go-arounds, holding patterns, and average time in the airspace).

Since our current complexity metric is derived from the offset from the direct reference path, it is important to understand what causes these offsets. Many of the flights with high offset distances can be observed performing holding patterns and S-turns, maneuvering tactics used to add distance between the aircraft and the destination runway so that multiple flights do not have conflicting arrival times. To predict a rise in complexity (or the precursor to complexity), it is therefore necessary to identify these potential conflicts, which in turn result in higher offsets.

To do this, we define a “representative flight” for each STAR route and runway pair. This flight approximates the path, including the time remaining to the airport, that a flight would take if the airspace were clear of other traffic. We first identify, for a given STAR and runway pair, the flights whose offsets to the reference path fall between the 44th and 55th percentiles of the distribution; these are the flights that conform most closely to the normal mode of operation. Each of these flights is partitioned by percent complete, from 0% to 100%, measured from its entry point into the maneuvering area. For each percent-complete “bin”, we take the median of the flights' latitude/longitude coordinates, airspeed, and (non-causal) time remaining to the airport to construct a lookup table over percent-complete bins for that route. As a flight enters the maneuvering area, we estimate its arrival time by finding the representative path's percent-complete bin closest to the flight's current position at any snapshot in time and retrieving the corresponding remaining time on the “representative path”; we assume the flight will follow the representative path to completion when deriving these estimates. We then compare these estimated arrival times against those of other flights at the same snapshot to identify potential conflicts: if more flights are estimated to arrive within a tolerance window than there are runways available, we have a potential conflict (a minimal sketch of this procedure is given below).

We can use this derived measure, along with other factors expected to add disruption to the operation such as weather and runway configuration changes, as inputs to machine learning tools that detect precursors to increases in our complexity measure. This novel method will assist in uncovering insights into the contributing factors that lead to increased complexity, which may allow for in-time responses to avoid reaching a high-complexity state in the airspace.
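The sketch below illustrates the representative-path lookup and the conflict count, assuming each conforming flight is available as 1-D arrays of percent complete, projected position, airspeed, and time remaining; the 100-bin scheme, the 60-second tolerance window, and all function names are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def build_representative_path(conforming_flights, n_bins=100):
    """Median position, airspeed, and time remaining per percent-complete bin,
    computed from the flights that fall near the middle (44th-55th percentiles)
    of the offset distribution for one STAR/runway pair."""
    table = np.full((n_bins, 4), np.nan)    # columns: x, y, airspeed, time remaining
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        rows = []
        for f in conforming_flights:        # one dict of 1-D arrays per flight
            m = (f["pct_complete"] >= lo) & (f["pct_complete"] < hi)
            if m.any():
                rows.append([np.median(f["x"][m]), np.median(f["y"][m]),
                             np.median(f["airspeed"][m]),
                             np.median(f["time_remaining"][m])])
        if rows:
            table[b] = np.median(np.array(rows), axis=0)
    return table

def estimated_time_remaining(position_xy, table):
    """Estimate time to the airport by snapping the flight's current position to
    the closest representative-path bin and reading that bin's time remaining."""
    valid = ~np.isnan(table[:, 0])
    d = np.linalg.norm(table[valid, :2] - np.asarray(position_xy), axis=1)
    return table[valid][d.argmin(), 3]

def potential_conflicts(est_arrival_times, n_runways, tolerance_s=60.0):
    """Count, for one snapshot in time, the arrival windows in which more flights
    are estimated to arrive than there are runways available."""
    times = np.sort(np.asarray(est_arrival_times, dtype=float))
    return int(sum(np.sum((times >= t) & (times <= t + tolerance_s)) > n_runways
                   for t in times))
```

In this sketch, the per-snapshot conflict count from potential_conflicts, together with weather and runway-configuration features, would form the input to the precursor-detection models described above.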


    Title: Monitoring Airspace Complexity and Determining Contributing Factors

    Contributors: D. Weckler (author) / B. Matthews (author) / S. Monadjemi (author) / S. Wolfe (author) / N. Oza (author)

    Publication date: 2022

    Size: 5 pages

    Type of media: Report

    Type of material: No indication

    Language: English




    Related records:

    Monitoring Airspace Complexity and Determining Contributing Factors
    Daniel Weckler / Bryan Matthews / Shayan Monadjemi et al. | NTRS

    Monitoring Airspace Complexity and Determining Contributing Factors
    Weckler, Daniel I. / Matthews, Bryan L. / Monadjemi, Shayan et al. | AIAA | 2023

    Monitoring Airspace Complexity and Determining Contributing Factors
    Daniel Weckler / Bryan Matthews / Shayan Monadjemi et al. | NTRS | 2023

    Defining Airspace Complexity
    Daniel Weckler / Bryan Matthews / Nikunj Oza et al. | NTRS