How to Evaluate Proving Grounds for Self-Driving? A Quantitative Approach

Rui Chen, Mansur Arief, Weiyang Zhang, Ding Zhao

In IEEE Transactions on Intelligent Transportation Systems (T-ITS) (In Review), 2019

Abstract

Testing grounds have been a critical component in the testing and validation of Connected and Automated Vehicles (CAVs). Although quite a few world-class testing facilities have been constructed over the years, the evaluation of testing grounds themselves as testing approaches has rarely been studied. In this paper, we investigate the effectiveness of CAV testing grounds through their capability to recreate real-world traffic scenarios. We extract typical use cases from naturalistic driving events using non-parametric Bayesian learning techniques. We then contribute a generative, sample-based optimization approach to assess the compatibility between traffic scenarios and testing-ground road structure. We evaluate our approach on three CAV testing facilities: Mcity, Almono (Uber ATG), and Kcity. Experiments show that our approach is effective in evaluating the capability of a given CAV testing ground to accommodate real-world driving scenarios.
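The paper's own implementation is not reproduced here. As a rough, hedged illustration of the non-parametric Bayesian idea the abstract mentions (letting the number of scenario types grow with the data rather than fixing it in advance), the sketch below draws cluster assignments from a Chinese Restaurant Process prior. The function name, parameters, and concentration value are illustrative choices, not the authors' method.

```python
import random

def crp_assignments(n_events, alpha=1.0, seed=0):
    """Draw cluster assignments for n_events driving events from a
    Chinese Restaurant Process prior with concentration alpha.

    The number of clusters is not fixed in advance; new clusters can
    open as more events arrive, which is the key property that
    non-parametric Bayesian scenario extraction relies on.
    """
    rng = random.Random(seed)
    assignments = []
    counts = []  # counts[k] = number of events already in cluster k
    for i in range(n_events):
        # An event joins existing cluster k with probability
        # counts[k] / (i + alpha), or opens a new cluster with
        # probability alpha / (i + alpha).
        weights = counts + [alpha]
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):  # a new scenario cluster is opened
            counts.append(1)
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

labels = crp_assignments(200, alpha=2.0)
print(len(set(labels)))  # number of discovered scenario types
```

In a full pipeline, such a prior would be combined with a likelihood over driving-event features and fitted by inference; here the draw only demonstrates how the cluster count adapts to the data size.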