Let's assume sentience is a continuum, with lesser and greater degrees. Big Dog is probably aware of its orientation in space, its goals, its available operators, and its sensor readings. I don't know whether the designers have given it reflective capabilities; I'd have to see the architecture diagrams and inspect the code to know. Based on the above assumptions, I would say that Big Dog probably has a very low level of sentience, if any at all. It may not have self-awareness (consciousness) at this time. That does not, however, preclude it from gaining more sentience and consciousness at some point in the future with a software upgrade. I would know Big Dog was sentient if I could converse with it, if it indicated that it was concerned for its own survival, and if it could relate to me what threats to its survival existed.

I think that emotion (not necessarily human emotion) is a computation of the general state and well-being of a system. Emotions, as studied by many computer scientists, aid in the indexing and retrieval of behaviors appropriate to various situations. Eric T. Mueller, for example, studied and modeled emotions as part of his DAYDREAMER system, attaching emotional valences to goals so that they could be better indexed for later retrieval. I don't think you can escape an AGI system having some emotional content.

My assessment is that emotions register the equilibrium of the system. Affect indicates whether the system has recently been successful overall, or struggling overall, with goal attainment; there are other dimensions as well. For me, mood is the average of affect over time. Affect and mood are important in memory formation and behavior selection in the systems I develop.

~PM.
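To make the affect/mood distinction concrete, here is a minimal sketch in Python. It is purely illustrative (the class and method names are my own invention, not from DAYDREAMER or any deployed system): affect is nudged by each goal outcome, and mood is an exponential moving average of affect, so it drifts slowly while affect swings quickly.

```python
# Hypothetical sketch: affect tracks recent goal success or failure,
# mood is a slow running average of affect over time.
# All names here are illustrative assumptions, not an existing API.

class AffectTracker:
    def __init__(self, mood_decay=0.9):
        self.affect = 0.0        # recent success/struggle, roughly in [-1, 1]
        self.mood = 0.0          # slow average of affect over time
        self.mood_decay = mood_decay  # how strongly mood remembers the past

    def record_goal(self, succeeded):
        # A success nudges affect up, a failure nudges it down.
        outcome = 1.0 if succeeded else -1.0
        self.affect = 0.5 * self.affect + 0.5 * outcome
        # Mood follows affect slowly, smoothing out momentary swings.
        self.mood = self.mood_decay * self.mood + (1 - self.mood_decay) * self.affect

tracker = AffectTracker()
for ok in [True, True, False, True]:
    tracker.record_goal(ok)
# After a mostly successful run, affect is positive and mood is
# mildly positive, lagging behind the latest swings in affect.
```

One design choice worth noting: because mood is a decayed average, a single failure barely moves it, which matches the intuition that mood is more stable than momentary affect.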
------------------------------------------------------------------------
> Date: Tue, 29 Jan 2013 12:03:09 -0500
> Subject: Re: [agi] Robots and Slavery
> From: [email protected]
> To: [email protected]
>
> On Tue, Jan 29, 2013 at 3:48 AM, Piaget Modeler
> <[email protected]> wrote:
> > The other question is what happens when some warbots (like Big Dog, or some
> > of the ones with armaments),
> > and the aerial or undersea drones become sufficiently "sentient" (or
> > intelligent)? Perhaps via (inadvertent or
> > clandestine) software upgrades. That's the real "Oh sh*t" moment.
> >
> > What then?
>
> You still have not answered my question. How would you know if Big Dog
> was sentient?
>
> Do you think that the only way we can solve hard problems like
> language, vision, robotics, and predicting human behavior is for the
> algorithm to also be constrained by a model of human emotions? In
> fact, none of the partial solutions that we have today to these
> problems need any such constraints.
>
> What is so hard about *not* programming a robot to have human
> emotions? It seems like a much easier problem to me if you don't
> program it to not want to do what you tell it.
>
> --
> -- Matt Mahoney, [email protected]
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/19999924-5cfde295
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
