Re: [Computer-go] AlphaGo Zero SGF - Free Use or Copyright?

2017-11-02 Thread Pierce T. Wetter III
Pardon my cynicism, but I think Germany just guaranteed that other countries 
will develop self-driving cars first and Germany will end up adapting someone 
else’s solution after they’ve test-driven it on _their_ citizens. Which may be 
their intent...

All of the self-driving car “knowledge” will be fuzzy. At best, this rule makes 
lawyers rich.

On Oct 30, 2017, 11:36 PM -0700, Robert Jasiek wrote:
> On 30.10.2017 19:22, Pierce T. Wetter III wrote:
> > this car and this child
>
> In Germany, an ethics commission has written ethical guidelines for
> self-driving cars, including the rule to always prefer avoiding
> casualties among human beings.
>
> --
> robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AlphaGo Zero SGF - Free Use or Copyright?

2017-10-30 Thread Pierce T. Wetter III
I would argue that if I were an engineer for a hypothetical autonomous car 
manufacturer, it would be critically important to keep a running circular 
buffer of all the car’s inputs over time. Sort of like how existing cars have 
dash cams that continuously record to flash but only keep the video if you 
tell them to or they detect major g-forces.
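
A minimal sketch of that idea in Python — the capacity, trigger, and frame
format are made-up placeholders, not any manufacturer’s real system:

    from collections import deque
    import pickle
    import time

    class IncidentRecorder:
        """Keep the last N sensor frames in RAM; persist them to flash only
        when a trigger fires (hard braking, impact, manual flag)."""

        def __init__(self, capacity=3000):        # e.g. 100 Hz * 30 seconds
            self.buffer = deque(maxlen=capacity)  # old frames drop off the far end

        def record(self, frame):
            """Call once per sensor tick with the fused input vector."""
            self.buffer.append((time.time(), frame))

        def dump(self, path):
            """Freeze the buffered window to disk for later analysis."""
            with open(path, "wb") as f:
                pickle.dump(list(self.buffer), f)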

To your point, I’m not sure the car would necessarily be able to tell a tree 
from a child; the tree might be “certain large obstacle” and the child 
“smaller large obstacle”. That would give them the same utility value, -1000. 
But utility functions are rarely as straightforward in a neural network as you 
suppose.
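
To make the expected-utility arithmetic concrete, here’s a toy chooser in
Python, using the illustrative numbers from Álvaro’s example quoted below
(tree = -1000, child = -5000, nothing = 0):

    def expected_utility(outcomes):
        # Expected utility = sum of P(outcome) * U(outcome).
        return sum(p * u for p, u in outcomes)

    options = {
        "A: hit the tree":        [(1.00, -1000)],
        "B: swerve at the child": [(0.95, -5000), (0.05, 0)],
    }

    for name, outcomes in options.items():
        print(name, expected_utility(outcomes))   # A: -1000.0, B: -4750.0

    best = max(options, key=lambda name: expected_utility(options[name]))
    print("rational agent picks:", best)          # option A

A real network won’t expose anything this clean, which is exactly the problem.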

I think it would take differential analysis (a term I just made up) to 
determine the utility function, which is why having a continuous log of all the 
input streams is necessary.
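
What I mean by differential analysis is close to a sensitivity or occlusion
study: replay the logged inputs, perturb one element, and watch whether the
decision flips. A sketch, where model and the perturbation scheme are
hypothetical stand-ins:

    import numpy as np

    def decision_sensitivity(model, logged_input, index, deltas):
        """Replay a logged input with one feature nudged by each delta and
        report when the model's argmax action changes. model(x) is assumed
        to return a vector of action scores."""
        baseline = int(np.argmax(model(logged_input)))
        flips = []
        for d in deltas:
            x = logged_input.copy()
            x[index] += d
            action = int(np.argmax(model(x)))
            if action != baseline:
                flips.append((d, baseline, action))
        return flips  # (delta, old action, new action)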

On Oct 30, 2017, 3:45 PM -0700, Álvaro Begué wrote:
> In your hypothetical scenario, if the car can give you as much debugging 
> information as you suggest (100% tree is there, 95% child is there), you can 
> actually figure out what's happening. The only other piece of information you 
> need is the configured utility values for the possible outcomes.
>
> Say the utility of hitting a tree is -1000, the utility of hitting a child is 
> -5000 and the utility of not hitting anything is 0. A rational agent 
> maximizes the expected value of the utility function. So:
>  - Option A: Hit the tree. Expected utility = -1000.
>  - Option B: Avoid the tree, possibly hitting the child, if there is a child 
> there after all. Expected utility: 0.95 * (-5000) + 0.05 * 0 = -4750.
>
> So the car should pick option A. If the configured utility function is such 
> that hitting a tree and hitting a child have the same value, the lawyers 
> would be correct that the programmers are endangering the public with their 
> bad programming.
>
> Álvaro.
>
>
>
> > On Mon, Oct 30, 2017 at 2:22 PM, Pierce T. Wetter III wrote:
> > > Unlike humans, who have these pesky things called rights, we can abuse 
> > > our computer programs to deduce why they made decisions. I can see a 
> > > future where that has to happen. From my experience in trying to best the 
> > > stock market with an algorithm, I can tell you that you have to be able to 
> > > explain why something happened, or the CEO will wrest control away from 
> > > the engineers.
> > >
> > > Picture a court case where the engineers for an electric car are called 
> > > upon to testify about why a child was killed by their self-driving car. 
> > > The fact that the introduction of the self-driving car has reduced the 
> > > accident rate by 99% doesn’t matter, because the court case is about this 
> > > car and this child. The 99% argument is for closing arguments, or for the 
> > > legislature, but it’s early yet.
> > >
> > > The manufacturer throws up its hands and says “we dunno, sorry”.
> > >
> > > Meanwhile, the plaintiff has hired someone who has manipulated the inputs 
> > > to the neural net, and they’ve figured out that the car struck the child 
> > > because the car was 100% sure the tree was there but only 95% sure the 
> > > child was there. So it ruthlessly aimed for the lesser probability.
> > >
> > > The plaintiff’s lawyer argues that a human would rather have hit a tree 
> > > than a child.
> > >
> > > Jury awards $100M in damages to the plaintiffs.
> > >
> > > I would think it would be possible to do “differential” analysis on AGZ 
> > > positions to see why AGZ made certain moves. Add an eye to a weak group, 
> > > etc. Essentially that’s what we’re doing with MCTS, right?
> > >
> > > It seems like a fun research project to try to build a system that can 
> > > reverse-engineer AGZ, and not only would it be fun, but it’s a moral 
> > > imperative.
> > >
> > > Pierce
> > >
> > >
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AlphaGo Zero SGF - Free Use or Copyright?

2017-10-30 Thread Pierce T. Wetter III
Unlike humans, who have these pesky things called rights, we can abuse our 
computer programs to deduce why they made decisions. I can see a future where 
that has to happen. From my experience in trying to best the stock market with 
an algorithm, I can tell you that you have to be able to explain why something 
happened, or the CEO will wrest control away from the engineers.

Picture a court case where the engineers for an electric car are called upon to 
testify about why a child was killed by their self-driving car. The fact that 
the introduction of the self-driving car has reduced the accident rate by 99% 
doesn’t matter, because the court case is about this car and this child. The 
99% argument is for closing arguments, or for the legislature, but it’s early 
yet.

The manufacturer throws up its hands and says “we dunno, sorry”.

Meanwhile, the plaintiff has hired someone who has manipulated the inputs to 
the neural net, and they’ve figured out that the car struck the child because 
the car was 100% sure the tree was there but only 95% sure the child was 
there. So it ruthlessly aimed for the lesser probability.

The plaintiff’s lawyer argues that a human would rather have hit a tree than a 
child.

Jury awards $100M in damages to the plaintiffs.

I would think it would be possible to do “differential” analysis on AGZ 
positions to see why AGZ made certain moves. Add an eye to a weak group, etc. 
Essentially that’s what we’re doing with MCTS, right?
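
One concrete way to run that experiment: hand-edit a position (say, add an eye
to a weak group), re-query the network, and diff the outputs. The policy_value
function below is a hypothetical stand-in for an AGZ-style net, not DeepMind’s
actual interface:

    def move_delta(policy_value, position, edit, move):
        """Compare the network's prior for one move, and its position
        evaluation, before and after a hand-crafted edit to the position.
        policy_value(pos) is assumed to return (policy_dict, value)."""
        policy_before, value_before = policy_value(position)
        policy_after, value_after = policy_value(edit(position))
        return {
            "prior_shift": policy_after[move] - policy_before[move],
            "value_shift": value_after - value_before,
        }

If adding the eye collapses the prior on a forcing move, you’ve learned
something about why AGZ played it.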

It seems like a fun research project to try to build a system that can 
reverse-engineer AGZ, and not only would it be fun, but it’s a moral 
imperative.

Pierce

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go