Traffic Systems

Detecting Traffic Anomalies Using Only Controller (High Resolution) Data

M. R. Finochio (Denver DOTI),

L. F. Gunderson (GunderFish),

J. P. Gunderson1 (GunderFish)

Abstract: This paper presents the analysis and testing of a new metric for traffic flow analysis. Using only traffic signal event codes, the authors developed a new process for assessing traffic flow anomalies in near real time. The initial analysis determines the fundamental characteristics of the underlying data and demonstrates statistically valid transforms that produce indicators in the traffic flow data suitable for classifying current traffic flow as anomalous or normal. These indicators can be generated automatically in near real time and can be used by operators in a traffic management center to respond to accidents and weather conditions that disrupt the smooth flow of traffic. This work was done under an Advanced Transportation and Congestion Management Technologies Deployment (ATCMTD) grant.

1 Introduction

This project focused on extracting the ongoing state of intersections using Event Code data provided by the traffic controller devices, without relying on any external data sources such as traffic volume. The Event Codes used are those described in (Sturdevant, 2012). The goal was to build a sufficiently nuanced model of intersection status to enable an automated system to detect traffic flow anomalies and notify the operators in a Traffic Management Center (TMC) of potential problems. Since vehicle counts were not available for the study area, an approach was developed based on empirical changes in traffic flow. Two types of traffic flow changes were identified. The first is persistent changes, which extend for weeks or months and are caused by large shifts in traffic behavior or by changes to intersection configuration. The second is transient or anomalous changes, after which traffic behavior reverts to its “normal” condition; these are caused by accidents or other temporary disruptions to the intersection.
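The distinction between persistent and transient changes can be illustrated with a small sketch. The function below is not the paper's algorithm; it is a minimal, hypothetical classifier that assumes we already have a series of daily deviations of some traffic proxy from its historical norm, and that the `threshold` and `persist_frac` parameters are illustrative choices. A change that keeps most days deviant is called persistent; one after which the values revert is called transient.

```python
def classify_change(deviations, threshold=2.0, persist_frac=0.8):
    """Illustrative classifier (not the paper's method).

    deviations: daily deviations of a traffic proxy from its historical
    norm, starting at the day a change was first detected.
    Returns 'normal', 'transient', or 'persistent'.
    """
    deviant = [abs(d) > threshold for d in deviations]
    if not any(deviant):
        return "normal"
    # If most days remain deviant, the change has not reverted.
    frac = sum(deviant) / len(deviant)
    return "persistent" if frac >= persist_frac else "transient"

# A spike that reverts the next day reads as transient;
# a sustained shift reads as persistent.
print(classify_change([3.1, 0.2, 0.1, 0.0]))       # transient
print(classify_change([3.0, 2.9, 3.2, 2.8, 3.1]))  # persistent
```

The point of the sketch is only that the two change types differ in what happens *after* detection, which is why the detection system needs a historical norm to compare against.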

Current performance measurement methodologies require sensors to measure volume, occupancy, and speed (Day et al., 2014). In this study area, there were often no sensors detecting vehicles on phases 2 and 6, the “main” streets. Without that data, proxy values for specific intersections and phases were established, providing near-instantaneous values for the intersection state. The combination of proxy values is called a Snapshot. A historical norm for the intersection was created by collecting Snapshots over a preceding period of time; this collection is called a Fingerprint. A non-parametric comparison of Fingerprints was used to detect both systemic changes and anomalies.
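The Snapshot/Fingerprint pipeline can be sketched as follows. This is a simplified illustration, not the deployed system: the proxy values, window sizes, and threshold are hypothetical, and the paper does not name the specific non-parametric test used, so a two-sample Kolmogorov-Smirnov statistic (implemented directly from its definition as the maximum distance between empirical CDFs) stands in here as one example of such a comparison.

```python
import statistics
from bisect import bisect_right

def snapshot(event_intervals):
    """Hypothetical Snapshot: proxy values summarizing one short window
    of controller Event Code data (field names are illustrative)."""
    return {
        "median_gap": statistics.median(event_intervals),
        "n_events": len(event_intervals),
    }

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:
        cdf_a = bisect_right(a, x) / len(a)
        cdf_b = bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

# Fingerprint: one proxy value collected over a historical period.
fingerprint = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]
current = [6.0, 5.8, 6.2, 5.9]  # recent snapshots of the same proxy

# Flag an anomaly when the current distribution diverges from the
# historical norm (the 0.5 threshold is an arbitrary example value).
is_anomalous = ks_statistic(fingerprint, current) > 0.5
```

Because the KS statistic compares whole distributions rather than means, it requires no assumption about the shape of the underlying proxy-value distribution, which is the usual motivation for a non-parametric comparison in this setting.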

The general flow of this paper follows the flow of the project:

  1. Identify candidate proxy values from Event Code data;
  2. Establish the criteria for the Snapshots;
  3. Define the ‘normal’ values for comparison;
  4. Confirm the stability, separability, and proportionality of the candidates; and,
  5. Confirm the suitability of the anomaly detection.

This work was done under contract to MOST Programming, Inc. as part of a project with the City and County of Denver, Department of Transportation and Infrastructure. Our team had primary responsibility for the design of the mathematical models needed to create the anomaly detection system. The software development and deployment were handled by the MOST team.

1Corresponding Author jpgunderson@gunderfish.com

For a copy of our 25 page white paper on traffic controller analysis: