Hi all,

I'm observing fairly large variations (up to +/- 15%) in performance depending on the random seed fed to the simulator. I work on traffic light optimization, and my performance measure is the average waiting time over a 10-minute period following a 5-minute warm-up period. This worries me for two reasons:

1) it may mean that the difference between good and bad TL programs is indistinguishable from natural simulation noise;
2) it will force me to average my objective function over several simulation seeds, say 10, which will make the optimization 10 times slower.

Hence a few questions for you expert SUMOers:
1) Is this amplitude of variation normal, or indicative of a problem?
2) Is my timeline (5 + 10 minutes) sufficient?
3) I use edge data aggregation into one big interval (begin=300, end=900) to sum the waiting time over all edges (see the sketch at the end of this mail). Is this a proper way of assessing TL performance?

Thanks,
Yann
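
In case it helps, below is a minimal sketch of the evaluation loop I have in mind. File names, the scenario config, and the exact layout of my setup are placeholders; what I actually rely on is SUMO's --seed option and the waitingTime attribute of the edgeData output:

    #!/usr/bin/env python
    """Sketch: average the summed edge waiting time over several seeds."""
    import subprocess
    import statistics
    import xml.etree.ElementTree as ET

    SUMO_CFG = "scenario.sumocfg"       # placeholder: my scenario configuration
    EDGEDATA_OUT = "edgedata.out.xml"   # placeholder: file named in the <edgeData> element
    SEEDS = range(1, 11)                # e.g. 10 replications

    # Assumed additional file (referenced from the .sumocfg), matching the
    # begin=300 / end=900 aggregation described above:
    #   <additional>
    #       <edgeData id="waiting" file="edgedata.out.xml" begin="300" end="900"/>
    #   </additional>

    def total_waiting_time(edgedata_file):
        """Sum the waitingTime attribute over all edges of the single interval."""
        root = ET.parse(edgedata_file).getroot()
        return sum(float(edge.get("waitingTime", 0.0))
                   for interval in root.findall("interval")
                   for edge in interval.findall("edge"))

    results = []
    for seed in SEEDS:
        # --seed makes each run reproducible for a given value
        subprocess.run(["sumo", "-c", SUMO_CFG, "--seed", str(seed)], check=True)
        results.append(total_waiting_time(EDGEDATA_OUT))

    print("mean total waiting time: %.1f s" % statistics.mean(results))
    print("std dev across seeds:    %.1f s" % statistics.stdev(results))

The standard deviation across seeds is what I would compare against the difference between two TL programs to decide how many replications I really need.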
