On Thu, 17 Jan 2013 01:45:44 PM Pedro Rodrigues wrote:
> On Thu, Jan 17, 2013 at 12:34 PM, Michael T. Pope wrote:
> > On Thu, 17 Jan 2013 01:45:19 AM Pedro Rodrigues wrote:
> > > Would it be possible to share those scripts? It would be a good starting
> > > point for my own changes.
> >
> > I was afraid you were going to ask that. They are pretty embarrassing ATM.
Present.
I sent a patch to make this a bit better, but never committed it.
It is time to finalize and commit it.
Regards.
PaoloB
On Wed, Jan 23, 2013 at 12:28 PM, Michael T. Pope wrote:
> On Thu, 17 Jan 2013 11:04:03 PM Michael T. Pope wrote:
>> On Thu, 17 Jan 2013 01:45:19 AM Pedro Rodrigues wrote:
On Thu, 17 Jan 2013 01:45:19 AM Pedro Rodrigues wrote:
> Would it be possible to share those scripts? It would be a good starting
> point for my own changes.
I was afraid you were going to ask that. They are pretty embarrassing ATM.
I will clean up and document them shortly.
> Regarding the star
Darn, Sourceforge still doesn't use the "Reply-To:" field for the mailing
lists.
Sorry for the double post, Pope.
On Wed, Jan 16, 2013 at 10:18 PM, Michael T. Pope wrote:
> On Wed, 16 Jan 2013 07:28:13 PM Pedro Rodrigues wrote:
> > What is the current best way to evaluate changes in the AI?
> >
>
On Wed, 16 Jan 2013 07:28:13 PM Pedro Rodrigues wrote:
> What is the current best way to evaluate changes in the AI?
>
> For whoever is working with AI-related code, how do you evaluate your
> changes?
I have a hairy collection of scripts that automatically runs games with 7 AIs
+ 1 observer fro
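For anyone who wants to build something similar before those scripts get cleaned up, a batch harness along these lines is one way to start. This is a minimal sketch, not the actual scripts: `run_game` here is a stub (a seeded random draw) standing in for launching a real headless game and parsing the winner from its log, and every name in it is hypothetical rather than FreeCol's real interface.

```python
import random
from collections import Counter

# Hypothetical stand-in for playing one headless game with 7 AIs and
# 1 observer.  A real harness would shell out to the game (e.g. with
# subprocess.run) and parse the winner from its log; a seeded random
# draw keeps this sketch self-contained and runnable.
def run_game(game_id, rng):
    ai_players = [f"ai-{i}" for i in range(7)]  # the observer never wins
    return rng.choice(ai_players)

def run_series(n_games, seed=0):
    """Play n_games and tally how often each AI wins."""
    rng = random.Random(seed)
    return Counter(run_game(i, rng) for i in range(n_games))

if __name__ == "__main__":
    for ai, wins in run_series(50).most_common():
        print(f"{ai}: {wins} wins")
```

Swapping the stub for a real launcher is the only change needed to turn the tally into data from actual games.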
What is the current best way to evaluate changes in the AI?
For whoever is working with AI-related code, how do you evaluate your
changes?
I have a few ideas and changes that _seem_ to make some improvement, but
I'm missing some hard data to corroborate those improvements.
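On the "hard data" point: once a batch of games has been run with the old AI and again with the patched one, a plain two-proportion z-test is enough to say whether a win-rate difference is more than noise. A stdlib-only sketch (the win counts below are made up purely for illustration):

```python
import math

def win_rate_z(wins_a, games_a, wins_b, games_b):
    """Two-proportion z-statistic: is B's win rate really higher than
    A's, or could the observed difference just be noise?"""
    p_a = wins_a / games_a
    p_b = wins_b / games_b
    pooled = (wins_a + wins_b) / (games_a + games_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / games_a + 1 / games_b))
    return (p_b - p_a) / se

if __name__ == "__main__":
    # Hypothetical numbers: old AI wins 52/200 games, patched AI 71/200.
    z = win_rate_z(52, 200, 71, 200)
    print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 5% level
```

A few hundred games per side is usually enough for this to detect the size of improvement worth caring about; with only a handful of games almost nothing reaches significance.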
On a related subject (