
Fujitsu Laboratories of America Introduces Reinforcement Learning based Traffic Signal Control for Smart Cities with Real-Time Real-World Simulation Technologies

Machine learning-based technology helps cities efficiently reduce traffic congestion and simulate various traffic control strategies

Fujitsu Laboratories of America, Inc.

Santa Clara, CA, October 09, 2018 – At the Fujitsu Laboratories Advanced Technology Symposium, Fujitsu Laboratories of America today announced the availability of new technologies to intelligently control, manage, simulate, and evaluate traffic for smart cities. The newly developed platform utilizes machine learning, crowdsourced information, and third-party services to enable two novel features. The first is a reinforcement learning-based traffic signal control solution that, compared with existing, widely used sensor-based approaches, can greatly reduce vehicle and pedestrian waiting times at intersections, significantly decreasing traffic congestion. The second is real-time, real-world traffic simulation. With this technology, users can cost-effectively, and in a matter of minutes, evaluate various traffic control strategies, including reinforcement learning, with real-world mapping and realistic traffic generation.
“Effective traffic management is key to the smooth functioning of rapidly urbanizing countries and smart cities of the future,” said Kiyoshi Sakai, CEO, Fujitsu Laboratories of America. “Our advanced AI technologies will improve overall quality of life by speeding up commute times, reducing air pollution and enhancing productivity of citizens and society.”
The reinforcement learning-based traffic signal control solution [1] considers not only motorized traffic but also non-motorized traffic, by dynamically monitoring and collecting vehicle and pedestrian queue lengths at each intersection. The reinforcement learning algorithm is implemented at each intersection, where the machine learning agent interacts with the environment to learn optimal control actions that minimize the length of waiting queues for both vehicle and pedestrian traffic. The optimal actions determined by the agents obey real-world constraints such as traffic rules. Agents at neighboring intersections exchange observations to achieve an optimal schedule for the entire system.
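The core idea can be illustrated with a minimal tabular Q-learning sketch, where the reward is the negative total queue length so that maximizing reward minimizes waiting queues. This is not Fujitsu's implementation ([1] describes a distributed multi-agent Q-learning design); all class names, state encodings, and parameters below are hypothetical simplifications for illustration.

```python
import random
from collections import defaultdict

class IntersectionAgent:
    """Illustrative tabular Q-learning agent for one intersection.

    State: a discretized (vehicle_queue, pedestrian_queue) pair.
    Actions: which signal phase to serve next, e.g. 0 = north-south
    green, 1 = east-west green. Names and defaults are hypothetical.
    """

    def __init__(self, n_actions=2, alpha=0.1, gamma=0.9, epsilon=0.1):
        # Q-table maps a state to one value per allowed phase.
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy choice among the allowed phases; real-world
        # constraints (traffic rules) would restrict this action set.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return values.index(max(values))

    def learn(self, state, action, reward, next_state):
        # Standard Q-learning update. Using reward = -(total queue
        # length) means higher Q-values correspond to shorter queues.
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])
```

In the multi-agent setting described above, each intersection would run one such agent, with neighboring agents sharing observations to coordinate schedules.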
The real-time, real-world traffic simulation technology allows users to download any real-world map from the OpenStreetMap website [2], and then automatically convert the map into a simulator (built on top of SUMO [3]), generate all the configuration files, and create traffic for the simulation. Users can run multiple traffic signal control algorithms, including the reinforcement learning-based solution, and evaluate the performance of each. For traffic generation, the simulator can automatically obtain real-time traffic congestion levels from the HERE Traffic Service [4] and apply this information to anticipate future traffic patterns.
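Using SUMO's standard command-line tools, the map-to-simulation pipeline can be sketched as follows. This is an assumed workflow for illustration, not Fujitsu's automated converter; the file names are hypothetical, and actually running the commands requires a SUMO installation.

```python
def build_pipeline(osm_file, prefix="city"):
    """Return the commands that turn an OpenStreetMap export into a
    runnable SUMO scenario: network conversion, traffic demand
    generation, and the simulation run itself."""
    net = f"{prefix}.net.xml"
    routes = f"{prefix}.rou.xml"
    return [
        # Convert the raw OSM map into a SUMO road network; junction
        # and traffic-light information is imported from the map data.
        ["netconvert", "--osm-files", osm_file, "-o", net],
        # Generate vehicle trips over the converted network. A real
        # deployment would instead calibrate demand against live
        # congestion levels (e.g. from the HERE Traffic Service).
        ["python", "randomTrips.py", "-n", net, "-r", routes],
        # Run the simulation; alternative signal-control algorithms
        # can be plugged in at runtime via SUMO's TraCI API.
        ["sumo", "-n", net, "-r", routes],
    ]
```

Each stage consumes the previous stage's output, so the whole evaluation loop, from map download to performance numbers, can be scripted end to end.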

Figure 1: Download the real-world map from OpenStreetMap for the target area.
Figure 2: Automatically convert the map to configure the traffic signals, retrieve real-time traffic data from the HERE service to generate realistic traffic, and run simulations to evaluate the performance of different traffic signal control algorithms.

We evaluated the performance of the reinforcement learning-based solution in nine selected cities. The simulation results show that it greatly outperforms existing algorithms (fixed-time and sensor-based) on many key performance metrics, such as vehicle and pedestrian waiting times, queue length, vehicle emissions, fuel consumption, and noise pollution.

Figure 3
References:

[1] Ying Liu, Lei Liu, and Wei-Peng Chen, "Intelligent traffic light control using distributed multi-agent Q learning," 20th IEEE International Conference on Intelligent Transportation Systems (ITSC 2017), pp. 1786-1793, Yokohama, Japan, Oct. 16-19, 2017.
[2] OpenStreetMap: https://www.openstreetmap.org/
[3] SUMO: Simulation of Urban Mobility: http://sumo.dlr.de/index.html
[4] HERE: https://www.here.com/en

About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions, and services. Approximately 140,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE: 6702) reported consolidated revenues of 4.1 trillion yen (US $39 billion) for the fiscal year ended March 31, 2018. For more information, please see www.fujitsu.com.

About Fujitsu Laboratories of America

Fujitsu Laboratories of America, Inc. (FLA) is a wholly owned subsidiary of Fujitsu Laboratories Ltd. (Japan), focusing on research in AI, networking technologies, API management, and software development and solutions for several industries. Conducting research in an open environment, FLA contributes to the global research community and the IT industry. FLA is headquartered in Sunnyvale, CA. For more information, please see: www.fujitsu.com/us/about/businesspolicy/tech/rd/

Press Contact

Fujitsu Laboratories of America, Inc.
flats@us.fujitsu.com


Fujitsu, the Fujitsu logo and “shaping tomorrow with you” are trademarks or registered trademarks of Fujitsu Limited in the United States and other countries. Other company or product names mentioned herein are trademarks or registered trademarks of their respective owners. Information provided in this press release is accurate at time of publication and is subject to change without advance notice.

Date: October 09, 2018
City: Santa Clara, CA
Company: Fujitsu Laboratories of America, Inc.