Hi all,

I'm part of a research group at UC Berkeley focusing on the application of
reinforcement learning to autonomous vehicles. We're trying to interface
SUMO with OpenAI's rllab reinforcement learning package.

To facilitate repeatedly running simulations with autonomous vehicles,
updating their controllers according to a new policy each time, we are
designing a wrapper for SUMO using the TraCI library.
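
For concreteness, the core of the wrapper currently looks roughly like the
sketch below. The config path and the policy hook are placeholders for our
RL code, not anything provided by SUMO itself:

    import traci

    def run_episode(policy, num_steps, cfg="scenario.sumocfg"):
        """One rollout: step SUMO, observe each vehicle, apply the policy."""
        traci.start(["sumo", "-c", cfg])  # headless sumo; cfg is a placeholder
        for _ in range(num_steps):
            traci.simulationStep()
            for veh_id in traci.vehicle.getIDList():
                speed = traci.vehicle.getSpeed(veh_id)
                # policy maps (vehicle, observation) to a target speed;
                # this is our hypothetical RL hook, not a TraCI call
                traci.vehicle.setSpeed(veh_id, policy(veh_id, speed))
        traci.close()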

Having just started the design and implementation of our wrapper, we have a
couple of questions:

   1. The simulation seems to run rather slowly when TraCI is attached,
   even though we are not actually issuing any commands, just calling
   simulationStep() (a timing sketch follows this list). What is the
   bottleneck? Our guess would be the TraCI-SUMO connection, rather than
   SUMO or TraCI individually. Is there some way to alleviate this issue?
   2. Each simulation should be started from an identical starting point.
   Is there a way to restart the simulation without closing and reopening
   TraCI?
   3. One possible solution to 2) that we have considered is using TraCI
   to reset each car's position and speed individually to the original
   state (a sketch of this also follows the list). This, however, would
   not reset SUMO's time/step counters. Would increasing the simulation's
   end time to the maximum value (somewhere on the order of 10^15) have
   any unforeseen consequences as the numbers get very large?
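
To make question 1 concrete, the slowdown we're describing shows up in a
comparison along these lines: a standalone SUMO run timed against the same
scenario stepped through TraCI with no other commands issued (the config
path is again a placeholder):

    import subprocess
    import time

    import traci

    CFG = "scenario.sumocfg"  # placeholder

    # Baseline: run sumo on its own, with no TraCI attached.
    t0 = time.time()
    subprocess.call(["sumo", "-c", CFG])
    print("standalone sumo: %.1f s" % (time.time() - t0))

    # Same scenario, stepped through TraCI, issuing no other commands.
    t0 = time.time()
    traci.start(["sumo", "-c", CFG])
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
    traci.close()
    print("via TraCI:       %.1f s" % (time.time() - t0))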
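
And here is a rough sketch of the per-vehicle reset we have in mind for
question 3, assuming every vehicle from the initial snapshot is still in
the network when we reset:

    import traci

    # Snapshot each vehicle's lane, position, and speed once, at the
    # initial state we want every episode to start from.
    initial_state = {
        veh_id: (traci.vehicle.getLaneID(veh_id),
                 traci.vehicle.getLanePosition(veh_id),
                 traci.vehicle.getSpeed(veh_id))
        for veh_id in traci.vehicle.getIDList()
    }

    def reset_vehicles():
        """Teleport every vehicle back to its recorded lane/position/speed."""
        for veh_id, (lane_id, pos, speed) in initial_state.items():
            traci.vehicle.moveTo(veh_id, lane_id, pos)
            traci.vehicle.setSpeed(veh_id, speed)
            # setSpeed(veh_id, -1) would later hand control back to
            # SUMO's car-following model, if desired.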

We're really excited to start working with SUMO and would love some
guidance. Thank you!

Cheers,

Kanaad Parvate

UC Berkeley EECS '19