That is difficult. Usually the "central" win rate is optimistic, and the win
rate over all games is pessimistic. So those two numbers should give you an
idea of the range of win rates you can expect with the current estimated
optimal parameters.
But it might be possible to have narrower estimates.
Hi!
On Sat, Jul 07, 2012 at 11:52:21PM -0700, Michael Williams wrote:
> Are the optimized values the "Mean" column on the "Max" tab? How does
> one get them out? Copy to clipboard only works for a single cell at a
> time. I'm on Windows.
I'm wondering how to determine whether the parameter
On 21 March 2013, at 09:19, Chin-Chang Yang wrote:
> I am not sure which script can reproduce the experiments.
To answer this question more precisely: the scripts that produced all the plots
in the paper are in the
LaTeX/2008-06-02-CLOP
directory.
Rémi
By the way, I think that these test cases are much more relevant for noisy
optimization than the BBOB noisy optimization testbed, in which noise is not
the main issue. But maybe there is a lot of room for debate around that, and it
is getting far from computer go :-)
Olivier
Yes. f(x) is not the output. The output is either 0 or 1, and f(x) is the
probability of 1.
Rémi
On 6 March 2013, at 09:04, Chin-Chang Yang wrote:
> Thank you, Olivier.
>
> Let the observable function value be o(x). It can be defined as:
>
> o(x) = 1, with probability f(x);
> o(x) = 0, with probability (1 - f(x)).
Thank you, Olivier.
Let the observable function value be o(x). It can be defined as:
o(x) = 1, with probability f(x);
o(x) = 0, with probability (1 - f(x)),
where f(x) = 1 / (1 + e^(-r(x))) has been defined in the paper. Also, we can
see that the expected value is f(x).
Did I get this correct?
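For illustration, here is a minimal C++ sketch of this observation model. The
function names and the example r(x) below are made up for the sketch; they are
not taken from the CLOP source:

#include <cmath>
#include <random>

// Latent strength of parameter x; any smooth function could stand in here.
double r(double x) {
    return 1.0 - (x - 0.3) * (x - 0.3);
}

// Win probability f(x) = 1 / (1 + e^(-r(x))).
double f(double x) {
    return 1.0 / (1.0 + std::exp(-r(x)));
}

// One noisy observation o(x): 1 with probability f(x), 0 otherwise.
int simulate_outcome(double x, std::mt19937 &rng) {
    std::bernoulli_distribution outcome(f(x));
    return outcome(rng) ? 1 : 0;
}

Averaging many such observations at a fixed x converges to f(x), which is why
the expected value of o(x) is f(x) even though each individual observation is
only 0 or 1.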
It's a Bernoulli noise.
define f(x) = 1 / (1 + e^(-r(x)))
and the objective function at x is 1 with probability f(x).
So the expected value at x is f(x), but the values you get are noisy.
Best regards,
Olivier
> Since the functions are not noise-free, they should be defined in terms
> of some
> The CLOP is for noisy black-box parameter tuning. However, your test
> functions (LOG, FLAT, POWER, ANGLE, and STEP) are noise-free functions as
> shown in Table 1. It is very difficult to prove that CLOP can work very
> well on noisy functions.
>
Waow :-) that would be a very strange noisy
Hi,
I'm considering CLOP as one of the compared optimizers in RobustOptimizer
https://github.com/ChinChangYang/RobustOptimizer/issues/68. However, I
have some questions about your experiment.
The CLOP is for noisy black-box parameter tuning. However, your test
functions (LOG, FLAT, POWER, ANGLE, and STEP) are noise-free functions as
shown in Table 1. It is very difficult to prove that CLOP can work very
well on noisy functions.
In the end, I found a little time to try it. Unfortunately, it does not work. I
would have to implement my own copy function, as explained on that web page:
http://www.qtcentre.org/threads/11090-Copy-row%28s%29-from-QTableWidget
I added it to the TODO list.
Rémi
On 8 July 2012, at 12:
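For what it's worth, a minimal sketch of what such a copy function could look
like (the function name below is made up; this is not the actual CLOP code):

#include <QApplication>
#include <QClipboard>
#include <QStringList>
#include <QTableWidget>

// Copy the selected cells of a QTableWidget to the clipboard as
// tab-separated text, one table row per line.
void copySelectionToClipboard(QTableWidget *table)
{
    QString text;
    for (int row = 0; row < table->rowCount(); ++row) {
        QStringList cells;
        bool rowHasSelection = false;
        for (int col = 0; col < table->columnCount(); ++col) {
            QTableWidgetItem *item = table->item(row, col);
            if (item && item->isSelected()) {
                rowHasSelection = true;
                cells << item->text();
            }
        }
        if (rowHasSelection)
            text += cells.join("\t") + "\n";
    }
    QApplication::clipboard()->setText(text);
}

This only works once the items are selectable (see the Qt::ItemIsSelectable
change below).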
If you can edit the source code and re-compile, you can try replacing:
Qt::ItemIsEnabled
by
Qt::ItemIsEnabled | Qt::ItemIsSelectable
in MainWindow.cpp
I don't have time to test or prepare a new version, sorry.
Rémi
On 8 July 2012, at 08:52, Michael Williams wrote:
> Are the optimized values the "Mean" column on the "Max" tab?
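In context, Rémi's suggested flag change would look something like this (the
surrounding line is a guess for illustration, not the actual MainWindow.cpp):

// Before: cells are enabled but cannot be selected, so they cannot be copied.
item->setFlags(Qt::ItemIsEnabled);

// After: cells can be selected, and a selection can then be copied.
item->setFlags(Qt::ItemIsEnabled | Qt::ItemIsSelectable);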
Are the optimized values the "Mean" column on the "Max" tab? How does
one get them out? Copy to clipboard only works for a single cell at a
time. I'm on Windows.
On Thu, Sep 1, 2011 at 3:01 AM, Rémi Coulom wrote:
> Hi,
>
> This is a draft of the paper I will submit to ACG13.
>
> Title: CLOP: Confident Local Optimization for Noisy Black-Box Parameter Tuning
I've been trying the CLOP software out, and I've written some glue code
to support using it with Gomill.
This uses Gomill as the 'twogtp' back-end, and combines the CLOP
settings and the engine configuration into a single configuration file
(rather than putting the latter in the connection script).
Hi,
This is a draft of the paper I will submit to ACG13.
Title: CLOP: Confident Local Optimization for Noisy Black-Box Parameter Tuning
Abstract: Artificial intelligence in games often leads to the problem of
parameter tuning. Some heuristics may have coefficients, and they should be
tuned to