A collaboration between researchers from EPFL LUTS and the TUM School of Engineering and Design resulted in the publication of a paper entitled “Treating Noise and Anomalies in Vehicle Trajectories From an Experiment With a Swarm of Drones” in IEEE Transactions on Intelligent Transportation Systems. Vishal Mahajan, Manos Barmpounakis, Md. Rakib Alam, Nikolas Geroliminis and Constantinos Antoniou utilise the pNEUMA dataset to identify and treat noise and anomalies in vehicle trajectory data.
- We analyse data errors in the vehicle acceleration and speed time-series.
- We identify and separate errors into noise and anomalies (unrealistic transient peaks).
- We propose a scalable method to treat these errors using smoothing filters and adaptive regularized ML models.
- Our approach can be applied to similar errors in other datasets with minimal effort and fine-tuning.
Unmanned aerial systems, known as “drones”, are a relatively new means of collecting traffic data. Data from drone videography have potential applications in traffic research: drones record vehicles from an aerial point of view and capture their naturalistic driving behavior. Processing raw data from drones to remove noise and anomalies is crucial to ensure that the data are fit for subsequent applications, e.g., the development of traffic flow or crash risk models. This study uses a part of the pNEUMA dataset, a large dataset with almost half a million trajectories captured by a swarm of drones over Athens, Greece. This novel dataset offers an opportunity to analyze the data attributes and treat the noise and outliers in the data. We use a combination of smoothing filters and Extreme Gradient Boosting with adaptive regularization to process the speed and acceleration profiles of the vehicle trajectories in the dataset. Our approach can help prospective data users treat this or similar trajectory datasets as an alternative to applying manual thresholds, and can assist in accelerating research in microscopic traffic analysis.
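To give a flavour of the distinction the paper draws between noise and anomalies, here is a minimal, illustrative sketch (not the authors' released code): a synthetic speed time-series is smoothed with a Savitzky–Golay filter to suppress noise, and an unrealistic transient peak is flagged as an anomaly from the smoothing residuals. The sampling rate, signal shape, and the simple z-score rule are all assumptions for illustration; the paper instead uses adaptive, regularized ML models (Extreme Gradient Boosting) rather than a fixed threshold.

```python
# Illustrative sketch: separating noise from a transient anomaly
# in a (synthetic) vehicle speed profile.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
dt = 0.04                                  # assumed 25 Hz sampling
t = np.arange(0, 20, dt)
speed_true = 8 + 4 * np.sin(0.3 * t)       # hypothetical smooth speed (m/s)
speed_noisy = speed_true + rng.normal(0, 0.4, t.size)   # measurement noise
speed_noisy[200] += 6.0                    # inject an unrealistic transient peak

# Smoothing filter suppresses high-frequency noise
speed_smooth = savgol_filter(speed_noisy, window_length=51, polyorder=3)

# Flag anomalies as residuals far outside the noise band
# (simple z-score rule here, purely for illustration)
residual = speed_noisy - speed_smooth
z = (residual - residual.mean()) / residual.std()
anomalies = np.flatnonzero(np.abs(z) > 4)
print(anomalies)
```

With these synthetic settings, the injected peak at index 200 stands out sharply from the residuals, while the ordinary Gaussian noise stays within the threshold, illustrating why noise and transient anomalies call for different treatment.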
You can read the paper here.
**The code for the paper will soon be openly shared and this post will be updated.**