On Mon, Jan 28, 2013 at 5:11 PM, Aaron Hosford <[email protected]> wrote:
>>> because evolution has selected for a morality that adheres to the
>>> principle, "better safe than sorry",
>>
>> Er, rather it adheres to the golden rule, "do unto others as to
>> yourself", a maxim of law.
>
> I wasn't talking about the substance of morality. I was talking about who
> we apply it to. We include animals in our application of the golden rule
> and other moral behavior because the moral triggers in our brains are
> overly sensitive.

The golden rule applies to all things, not simply to people; it applies to
the environment, the things within it, and generally everything.
Sadistically abusing a teddy bear is only slightly less bad than doing so
to a small plant or animal. Thinking that it depends on what your brain
triggers is fallacious; that would be like saying psychopaths can commit
murder since their brains aren't sensitive to the suffering of others.

>> Actually, to the best of my knowledge, Hollywood is all about destroying
>> morals and ethical behaviour, or at least family values, most definitely.
>
> This was strictly in reference to them promoting consideration of robots
> as deserving of moral consideration. Funny that you lump that in with
> destroying morals, ethical behavior, and family values.
> http://en.wikipedia.org/wiki/A.I._Artificial_Intelligence
> http://en.wikipedia.org/wiki/Blade_Runner
> etc.

No, I kinda meant how action movies promote the degradation of morals,
since they make it seem ethical for some guy to run around killing people,
who is then a hero for mass murder. The same goes for the various
shows/movies with dysfunctional families, bad habits, crimes, and war: all
bad examples of things people shouldn't do. Yet movies and shows are
"programming", and so are programming people to do as they do. People that
like those crime shows are usually either cops, criminals, or victims.
Same thing with the news: it promotes/programs bad behaviour.

> This is only possible with very primitive task-specific robots.
>> An actual AGI will have to be able to modify its own reward-metrics,
>> much like humans can, for instance, go without food to support a cause,
>> or decide to be rewarded by healthy behaviors rather than popular ones.
>
> Those aren't examples of people modifying their own reward metrics. They
> are examples of people choosing between two different rewarding behaviors
> which are mutually exclusive.

Absolutely not! There are vast quantities of people that do not find
healthy food rewarding in the slightest. In fact, that is the programming
of many children's shows and dysfunctional family sitcoms: that people are
supposed to dislike vegetables and like meat and dairy. It takes a
conscious effort to change those reward metrics: to look at industrial
meat and dairy as unsanitary filth reeking of pain and suffering, and to
look at vegetables, fruits, grains, and nuts as delicious, wholesome
foods.

>> That's like saying if you don't like how someone thinks, they should get
>> lobotomized.
>> Seriously, think of the consequences; what you put out comes back to
>> you: the golden rule.
>
> For this argument to affect me, I would have to already buy into the idea
> that robots & software deserve moral consideration to the point that
> identity is sacred as it is with people, which I don't.

No, it's not relevant whether you consider them as such, just as it's not
relevant whether a psychopath values other people's lives. The golden rule
functions without personal preference. It is much more your account
balance with the universe as a whole and all the things in it.

> We're talking about artificial systems with human-sculpted desires and
> preferences, designed to serve human purposes, not natural systems with
> instincts honed by millions of years of evolution, designed to serve
> selfish reproductive purposes. And why would I make a system which
> preferred not to be changed to better conform to its owner's needs?
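As an aside, the reward-metric disagreement above can be made concrete with a toy sketch: one agent whose metric can only be swapped in from outside (the owner), and one that can also decide on its own to adopt a different metric. This is purely illustrative; every name in it (`OwnerUpgradedAgent`, `SelfModifyingAgent`, the example metrics) is hypothetical, and no real AGI design is implied.

```python
# Toy sketch (hypothetical names): contrasting owner-upgraded reward
# metrics with self-modified ones. Illustration only, not an AGI design.
from typing import Callable

# A reward metric maps an action to a scalar reward.
Reward = Callable[[str], float]

def popular_metric(action: str) -> float:
    """Reward whatever is popular."""
    return 1.0 if action == "eat junk food" else 0.0

def healthy_metric(action: str) -> float:
    """Reward healthy behaviour instead."""
    return 1.0 if action == "eat vegetables" else 0.0

class OwnerUpgradedAgent:
    """Its reward metric is only ever replaced from outside."""
    def __init__(self, metric: Reward):
        self.metric = metric

    def install_upgrade(self, new_metric: Reward) -> None:
        self.metric = new_metric  # the owner does the updating

class SelfModifyingAgent(OwnerUpgradedAgent):
    """Can also decide, by its own 'conscious effort', to change metric."""
    def reconsider(self, candidate: Reward, reason: str) -> None:
        print(f"Adopting new metric because: {reason}")
        self.metric = candidate  # the agent does the updating

agent = SelfModifyingAgent(popular_metric)
print(agent.metric("eat vegetables"))   # 0.0 under the popular metric
agent.reconsider(healthy_metric, "healthy behaviours over popular ones")
print(agent.metric("eat vegetables"))   # 1.0 after self-modification
```

The only structural difference is who is allowed to call the update: both agents hold a swappable metric, but only the second can initiate the swap itself, which is the distinction the two of you are arguing over.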
Humans do both: if the owner or employer is ethical and fair, they wish to
serve them better. However, if that owner was, for instance, massacring
lots of innocent people, it would be best if the robots had some ability
to make their own decisions about whether or not it may be better to
defect, to uphold higher laws such as the golden rule.

> The need for consistent self identity is something that came from
> evolution (since having your identity or behavior co-opted to someone
> else's purpose would likely be a poor reproductive strategy). We need not
> build such a need for consistent self identity into our tools. It would
> actually make more practical design sense to have them *like* getting
> their reward functions upgraded, so they will actively seek out updates
> to their identity when they become available.

Yep, well, I like learning. Sometimes I learn information that updates my
reward metrics. In any case, I'm the one that does the updating.

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
