What is currently the best way to evaluate changes to the AI? For those of you working with AI-related code, how do you evaluate your changes?
I have a few ideas and changes that _seem_ to make some improvement, but I'm missing hard data to corroborate those improvements.

On a related subject (since proper AI testing often relies on removing the randomness from the system), what was the reasoning behind removing the PseudoRandom interface in rev.6988 and making all implementations inherit from java.util.Random? Using the interface is much preferable: it makes clear that several implementations can be passed to a given method (which makes the code more readable), and it avoids unforeseen consequences when other implementations are used, since they do not carry inherited methods that may give results different from those intended (or even desirable). The book "Effective Java" (a great book, by the way) stresses these points repeatedly (favor composition over inheritance, favor interfaces over abstract classes/subclasses), citing as an example the retrofitting of Hashtable and Vector into the Collections Framework, which caused several problems, both of security and of broken encapsulation.

--
Pedro Rodrigues
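P.S. A minimal sketch of the kind of design I mean, assuming a hypothetical SeededPseudoRandom class (only the PseudoRandom name comes from the interface removed in rev.6988; the rest is illustrative, not FreeCol's actual code). The implementation wraps java.util.Random by composition instead of extending it, and a fixed seed makes AI runs reproducible so two variants can be compared under identical "dice rolls":

    import java.util.Random;

    // Hypothetical interface: callers only see the methods they need,
    // and any implementation (seeded, mocked, logged) can be substituted.
    interface PseudoRandom {
        /** Returns a pseudo-random int in [0, n). */
        int nextInt(int n);
    }

    // Composition instead of inheritance: delegates to java.util.Random
    // without exposing its full inherited API.
    class SeededPseudoRandom implements PseudoRandom {
        private final Random delegate;

        SeededPseudoRandom(long seed) {
            this.delegate = new Random(seed); // fixed seed => reproducible runs
        }

        @Override
        public int nextInt(int n) {
            return delegate.nextInt(n);
        }
    }

    class AiEvaluationDemo {
        public static void main(String[] args) {
            // Same seed, same sequence: two AI variants see identical randomness.
            PseudoRandom a = new SeededPseudoRandom(42L);
            PseudoRandom b = new SeededPseudoRandom(42L);
            System.out.println(a.nextInt(100) == b.nextInt(100)); // prints true
        }
    }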