Does not look like there is a nice formatting option. They do have an enable 
conversations setting, but I do not think that provides formatting and 
indentation. If I have some free time -- which I have very little of, 
unfortunately -- I will look.




________________________________
 From: Terren Suydam <terren.suy...@gmail.com>
To: everything-list@googlegroups.com 
Sent: Friday, September 5, 2014 4:02 PM
Subject: Re: Fwd: The Machine Intelligence Research Institute Blog
 


I left Yahoo mail five years ago because they do such a terrible job of 
engineering. I have embraced the Google. Thanks for whatever you can do. 
Usually email clients offer a couple of modes for how to include the original 
email... Is there a different mode you can try?
Terren
On Sep 5, 2014 5:50 PM, "'Chris de Morsella' via Everything List" 
<everything-list@googlegroups.com> wrote:

Terren - You should forward your concerns to the folks who code the Yahoo 
webmail client... when I am at work I use that webmail client, which does a poor 
job of threading a conversation. I will try to remember to put in manual 
'>>' marks to show what I am replying to.
>
>
>
>________________________________
> From: Terren Suydam <terren.suy...@gmail.com>
>To: everything-list@googlegroups.com 
>Sent: Friday, September 5, 2014 12:47 PM
>Subject: Re: Fwd: The Machine Intelligence Research Institute Blog
> 
>
>
>Chris, is there a way you can improve your email client?  Sometimes your 
>responses are very hard to detect because they're at the same indentation and 
>font as the text you are replying to, as below. Someone new to the conversation 
>would have no way of knowing that Brent did not write that entire thing, as 
>you didn't sign your name.
>
>
>Thanks, Terren
>
>
>
>
>
>
>On Fri, Sep 5, 2014 at 2:15 PM, 'Chris de Morsella' via Everything List 
><everything-list@googlegroups.com> wrote:
>
>
>>
>>
>>
>>
>>________________________________
>> From: meekerdb <meeke...@verizon.net>
>>To: EveryThing <everything-list@googlegroups.com> 
>>Sent: Friday, September 5, 2014 9:47 AM
>>Subject: Fwd: The Machine Intelligence Research Institute Blog
>> 
>>
>>
>>For you who are worried about the threat of artificial intelligence, MIRI 
>>seems to make it their main concern.  Look up their website and subscribe.  
>>On my list of existential threats it comes well below natural stupidity.
>>
>>
>>On mine as well... judging by how far the Google car still has to go before 
>>it stops driving straight into that pothole or requiring that its every route 
>>be very carefully mapped down to the level of each single driveway. Real-world 
>>AI is still mired in the stubbornly dumb-as-sand nature of our silicon-based 
>>deterministic logic-gate architecture.
>>There is a much higher chance that we will blow ourselves up in some existentially 
>>desperate final energy war, or so poison our earth's biosphere that systemic 
>>collapse is triggered and the deep oceans flip into an anoxic state favoring 
>>the hydrogen-sulfide-producing microorganisms that are poisoned by oxygen, 
>>resulting in another great belch of poisonous (to animals and plants) 
>>hydrogen sulfide into the planet's atmosphere -- as occurred during the great 
>>Permian extinction.
>>Speaking of which, has anyone read the recent study concluding that the current 
>>Anthropocene extinction rate is more than one thousand times the average 
>>extinction rate that has prevailed from the last great extinction until now? 
>>See: Extinctions during human era one thousand times 
>>more than before
>>
>>Brent
>>
>> 
>>
>>
>>-------- Original Message -------- 
>>Subject: The Machine Intelligence Research Institute Blog 
>>Date: Fri, 05 Sep 2014 12:07:00 +0000 
>>From: Machine Intelligence Research Institute » Blog <b...@intelligence.org> 
>>To: meeke...@verizon.net 
>>
>>
>>The Machine Intelligence Research Institute Blog  
>> 
>>________________________________
>> 
>>John Fox on AI safety 
>>Posted: 04 Sep 2014 12:00 PM PDT
>> John Fox is an interdisciplinary scientist with theoretical interests in AI 
>> and computer science, and an applied focus in medicine and medical software 
>> engineering. After training in experimental psychology at Durham and 
>> Cambridge Universities, and post-doctoral fellowships at CMU and Cornell in 
>> the USA and in the UK (MRC), he joined the Imperial Cancer Research Fund (now Cancer 
>> Research UK) in 1981 as a researcher in medical AI. The group’s research was 
>> explicitly multidisciplinary and it subsequently made significant 
>> contributions in basic computer science, AI and medical informatics, and 
>> developed a number of successful technologies which have been commercialised.
>>In 1996 he and his team were awarded the 20th Anniversary Gold Medal of the 
>>European Federation of Medical Informatics for the development of PROforma, 
>>arguably the first formal computer language for modeling clinical decisions 
>>and processes. Fox has published widely in computer science, cognitive 
>>science and biomedical engineering, and was the founding editor of the 
>>Knowledge Engineering Review  (Cambridge University Press). Recent 
>>publications include a research monograph Safe and Sound: Artificial 
>>Intelligence in Hazardous Applications (MIT Press, 2000) which deals with the 
>>use of AI in safety-critical fields such as medicine.
>>Luke Muehlhauser: You’ve spent many years studying AI safety issues, in 
>>particular in medical contexts, e.g. in your 2000 book with Subrata Das, Safe 
>>and Sound: Artificial Intelligence in Hazardous Applications. What kinds of 
>>AI safety challenges have you focused on in the past decade or so?
>>________________________________
>> 
>>John Fox: From my first research job, as a post-doc with AI founders Allen 
>>Newell and Herb Simon at CMU, I have been interested in computational 
>>theories of high level cognition. As a cognitive scientist I have been 
>>interested in theories that subsume a range of cognitive functions, from 
>>perception and reasoning to the uses of knowledge in autonomous 
>>decision-making. After I came back to the UK in 1975 I began to combine my 
>>theoretical interests with the practical goals of designing and deploying AI 
>>systems in medicine.
>>Since our book was published in 2000 I have been committed to testing the 
>>ideas in it by designing and deploying many kinds of clinical systems, and 
>>demonstrating that AI techniques can significantly improve quality and safety 
>>of clinical decision-making and process management. Patient safety is 
>>fundamental to clinical practice so, alongside the goals of building systems 
>>that can improve on human performance, safety and ethics have always been 
>>near the top of my research agenda.
>>________________________________
>> 
>>Luke Muehlhauser: Was it straightforward to address issues like safety and 
>>ethics in practice?
>>________________________________
>> 
>>John Fox: While our concepts and technologies have proved to be clinically 
>>successful we have not achieved everything we hoped for. Our attempts to 
>>ensure, for example, that practical and commercial deployments of AI 
>>technologies should explicitly honor ethical principles and carry out active 
>>safety management have not yet achieved the traction that we need. 
>>I regard this as a serious cause for concern, and unfinished business in both 
>>scientific and engineering terms.
>>The next generation of large-scale knowledge based systems and software 
>>agents that we are now working on will be more intelligent and will have far 
>>more autonomous capabilities than current systems. The challenges for human 
>>safety and ethical use of AI that this implies are beginning to mirror those 
>>raised by the singularity hypothesis. We have much to learn from singularity 
>>researchers, and perhaps our experience in deploying autonomous agents in 
>>human healthcare will offer opportunities to ground some of the singularity 
>>debates as well.
>>________________________________
>> 
>>Luke: You write that your “attempts to ensure… [that] commercial deployments 
>>of AI technologies should… carry out active safety management” have not yet 
>>received as much traction as you would like. Could you go into more detail on 
>>that? What did you try to accomplish on this front that didn’t get adopted by 
>>others, or wasn’t implemented?
>>________________________________
>> 
>>John: Having worked in medical AI since the early seventies, I have always 
>>been keenly aware that while AI can help to mitigate the effects of human 
>>error, there is a potential downside too. AI systems could be programmed 
>>incorrectly, or their knowledge could prescribe inappropriate practices, or 
>>they could have the effect of deskilling the human professionals who have the 
>>final responsibility for their patients. Despite the well-known limitations of 
>>human cognition, people remain far and away the most versatile and creative 
>>problem solvers on the planet.
>>In the early ‘nineties I had the opportunity to set up a project whose goal 
>>was to establish a rigorous framework for the design and implementation of AI 
>>systems for safety critical applications. Medicine was our practical focus 
>>but the RED project1 was aimed at the development of a general architecture 
>>for the design of autonomous agents that could be trusted to make decisions 
>>and carry out plans as reliably and safely as possible, certainly to be as 
>>competent and hence as trustworthy as human agents in comparable tasks. This 
>>is obviously a hard problem but we made sufficient progress on theoretical 
>>issues and design principles that I thought there was a good chance the 
>>techniques might be applicable in medical AI and maybe even more widely.
>>I thought AI was like medicine, where we all take it for granted that medical 
>>equipment and drug companies have a duty of care to show that their products 
>>are effective and safe before they can be certificated for commercial use. I 
>>also assumed that AI researchers would similarly recognize that we have a 
>>“duty of care” to all those potentially affected by poor engineering or 
>>misuse in safety critical settings but this was naïve. The commercial tools 
>>that have been based on the technologies derived from AI research have to 
>>date focused on just getting and keeping customers, and safety always takes a 
>>back seat.
>>In retrospect I should have predicted that making sure that AI products are 
>>safe is not going to capture the enthusiasm of commercial suppliers. If you 
>>compare AI apps with drugs, we all know that pharmaceutical companies have to 
>>be firmly regulated to make sure they fulfill their duty of care to their 
>>customers and patients. However, proving drugs are safe is expensive and also 
>>runs the risk of revealing that your new wonder-drug isn’t even as effective 
>>as you claim! It’s the same with AI.
>>I continue to be surprised how optimistic software developers are – they 
>>always seem to have supreme confidence that worst-case scenarios won't happen, 
>>or that if they do happen then their management is someone else’s 
>>responsibility. That kind of technical over-confidence has led to countless 
>>catastrophes in the past, and it amazes me that it persists.
>>There is another piece to this, which concerns the roles and responsibilities 
>>of AI researchers. How many of us take the risks of AI seriously so that it 
>>forms a part of our day-to-day theoretical musings and influences our 
>>projects? MIRI has put one worst case scenario in front of us – the 
>>possibility that our creations might one day decide to obliterate us – but so 
>>far as I can tell the majority of working AI professionals either see safety 
>>issues as irrelevant to the pursuit of interesting scientific questions or, 
>>like the wider public, regard them as just science fiction.
>>I think the experience of medical AI in trying to articulate and cope with human 
>>risk and safety may hold a couple of important lessons for the wider AI 
>>community. First, we have a duty of care that professional scientists cannot 
>>responsibly ignore. Second, the AI business will probably need to be 
>>regulated, in much the same way as the pharmaceutical business is. If these 
>>propositions are correct then the AI research community would be wise to 
>>engage with and lead on discussions around safety issues if it wants to 
>>ensure that the regulatory framework that we get is to our liking!
>>________________________________
>> 
>>Luke: Now you write, “That kind of technical over-confidence has led to 
>>countless catastrophes in the past…” What are some example “catastrophes” 
>>you’re thinking of?
>>________________________________
>> 
>>John:
>>Psychologists have known for years that human decision-making is flawed, even 
>>if amazingly creative sometimes, and overconfidence is an important source of 
>>error in routine settings. A large part of the motivation for applying AI in 
>>medicine comes from the knowledge that, in the words of the Institute of 
>>Medicine, “To err is human” and overconfidence is an established cause of 
>>clinical mistakes.2
>>Over-confidence and its many relatives (complacency, optimism, arrogance and 
>>the like) have a huge influence on our personal successes and failures, and 
>>our collective futures. The outcomes of the US and UK’s recent adventures 
>>around the world can be easily identified as consequences of overconfidence, 
>>and it seems to me that the polarized positions about global warming and 
>>planetary catastrophe are both expressions of overconfidence, just in 
>>opposite directions.
>>________________________________
>> 
>>Luke: Looking much further out… if one day we can engineer AGIs, do you think 
>>we are likely to figure out how to make them safe?
>>________________________________
>> 
>>John: History says that making any technology safe is not an easy business. 
>>It took quite a few boiler explosions before high-pressure steam engines got 
>>their iconic centrifugal governors. Ensuring that new medical treatments are 
>>safe as well as effective is famously difficult and expensive. I think we 
>>should assume that getting to the point where an AGI manufacturer could 
>>guarantee its products are safe will be a hard road, and it is possible that 
>>guarantees are not possible in principle. We are not even clear yet what it 
>>means to be “safe”, at least not in computational terms.
>>It seems pretty obvious that entry-level robotic products, like the robots 
>>that carry out simple domestic chores or the “nursebots” that are being 
>>trialed for hospital use, have such a simple repertoire of behaviors that it 
>>should not be difficult to design their software controllers to operate 
>>safely in most conceivable circumstances. Standard safety engineering 
>>techniques like HAZOP3 are probably up to the job, I think, and where software 
>>failures simply cannot be tolerated, software engineering techniques like 
>>formal specification and model checking are available.
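
For illustration only (not from the interview): the core idea behind model 
checking is exhaustive exploration of a system's reachable states against a 
stated safety property. A minimal Python sketch, using a hypothetical toy 
"nursebot" controller and a hypothetical safety property:

# Toy illustration of exhaustive state exploration in the spirit of model
# checking. The controller and the safety property are made-up examples.
EVENTS = ["load", "secure", "go", "arrive", "unload"]

def step(state, secured, event):
    """Transition function for a toy nursebot: returns the next (state, secured)."""
    if state == "idle" and event == "load":
        return ("loading", False)
    if state == "loading" and event == "secure":
        return ("loading", True)
    if state == "loading" and event == "go" and secured:
        return ("moving", True)
    if state == "moving" and event == "arrive":
        return ("unloading", secured)
    if state == "unloading" and event == "unload":
        return ("idle", False)
    return (state, secured)  # every other event is ignored in that state

def safe(state, secured):
    # Safety property: the robot never moves with an unsecured payload.
    return not (state == "moving" and not secured)

# Explore every state reachable from the initial state and check the property.
frontier, seen = [("idle", False)], set()
while frontier:
    current = frontier.pop()
    if current in seen:
        continue
    seen.add(current)
    assert safe(*current), f"safety violated in state {current}"
    frontier.extend(step(*current, event) for event in EVENTS)

print(f"checked {len(seen)} reachable states; the safety property holds")

Real model checkers do essentially this over far larger, symbolically 
represented state spaces and richer temporal properties.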
>>There is also quite a lot of optimism around more challenging robotic 
>>applications like autonomous vehicles and medical robotics. Moustris et al.4 
>>say that autonomous surgical robots are emerging that can be used in various 
>>roles, automating important steps in complex operations like open-heart 
>>surgery for example, and they expect them to become standard in – and to 
>>revolutionize the practice of – surgery. However, at this point it doesn’t 
>>seem to me that surgical robots with a significant cognitive repertoire are 
>>feasible, and a human surgeon will be in the loop for the foreseeable future.
>>________________________________
>> 
>>Luke: So what might artificial intelligence learn from natural intelligence?
>>________________________________
>> 
>>John: As a cognitive scientist working in medicine, my interests are co-extensive 
>>with those of scientists working on AGIs. Medicine is such a vast domain that 
>>practicing it safely requires the ability to deal with countless clinical 
>>scenarios and interactions; even working in a single specialist 
>>subfield requires substantial knowledge from other subfields. So much so that 
>>it is now well known that even very experienced humans with a large clinical 
>>repertoire are subject to significant levels of error.5 An artificial 
>>intelligence that could be helpful across medicine will require great 
>>versatility, and this will require a general understanding of medical 
>>expertise and a range of cognitive capabilities like reasoning, 
>>decision-making, planning, communication, reflection, learning and so forth.
>>If human experts are not safe, is it even possible to ensure that an AGI, 
>>however sophisticated, will be? I think that it is pretty clear that the 
>>range of techniques currently available for assuring system safety will be 
>>useful in making specialist AI systems reliable and minimizing the likelihood 
>>of errors in situations that their human designers can anticipate. However, 
>>AI systems with general intelligence will be expected to address scenarios 
>>and hazards that we currently cannot solve and that are often beyond designers 
>>even to anticipate. I am optimistic, but at the moment I don’t see any 
>>convincing reason to believe that we have the techniques that would be 
>>sufficient to guarantee that a clinical super-intelligence is safe, let alone 
>>an AGI that might be deployed in many domains.
>> 
>>________________________________
>> 
>>Luke: Thanks, John!
>>________________________________
>> 
>>      1. Rigorously Engineered Decisions
>>      2. Overconfidence in major disasters: 
>>• D. Lucas. Understanding the Human Factor in Disasters. Interdisciplinary 
>>Science Reviews. Volume 17 Issue 2 (01 June 1992), pp. 185-190.
>>• “Nuclear safety and security.”
>>Psychology of overconfidence:
>>• Overconfidence effect.
>>• C. Riordan. Three Ways Overconfidence Can Make a Fool of You. Forbes 
>>Leadership Forum.
>>Overconfidence in medicine:
>>• R. Hanson. Overconfidence Erases Doc Advantage. Overcoming Bias, 2007.
>>• E. Berner, M. Graber. Overconfidence as a Cause of Diagnostic Error in 
>>Medicine. The American Journal of Medicine. Volume 121, Issue 5, Supplement, 
>>Pages S2–S23, May 2008.
>>• T. Ackerman. Doctors overconfident, study finds, even in hardest cases. 
>>Houston Chronicle, 2013.
>>General technology example:
>>• J. Vetter, A. Benlian, T. Hess. Overconfidence in IT Investment Decisions: 
>>Why Knowledge can be a Boon and Bane at the same Time. ICIS 2011 Proceedings. 
>>Paper 4. December 6, 2011.
>>      3. Hazard and operability study
>>      4. Int J Med Robotics Comput Assist Surg 2011; 7: 375–39
>>      5. A. Ford. Domestic Robotics – Leave it to Roll-Oh, our Fun loving 
>> Retrobot. Institute for Ethics and Emerging Technologies, 2014.
>>The post John Fox on AI safety appeared first on Machine Intelligence 
>>Research Institute. 
>>Daniel Roy on probabilistic programming and AI 
>>Posted: 04 Sep 2014 08:03 AM PDT
>> Daniel Roy is an Assistant Professor of Statistics at the University of 
>> Toronto. Roy earned an S.B. and M.Eng. in Electrical Engineering and 
>> Computer Science, and a Ph.D. in Computer Science, from MIT.  His 
>> dissertation on probabilistic programming received the department’s George M 
>> Sprowls Thesis Award.  Subsequently, he held a Newton International 
>> Fellowship of the Royal Society, hosted by the Machine Learning Group at the 
>> University of Cambridge, and then held a Research Fellowship at Emmanuel 
>> College. Roy’s research focuses on theoretical questions that mix computer 
>> science, statistics, and probability.
>>Luke Muehlhauser: The abstract of Ackerman, Freer, and Roy (2010) begins:
>>As inductive inference and machine learning methods in computer science see 
>>continued success, researchers are aiming to describe even more complex 
>>probabilistic models and inference algorithms. What are the limits of 
>>mechanizing probabilistic inference? We investigate the computability of 
>>conditional probability… and show that there are computable joint 
>>distributions with noncomputable conditional distributions, ruling out the 
>>prospect of general inference algorithms.
>>In what sense does your result (with Ackerman & Freer) rule out the prospect 
>>of general inference algorithms?
>>________________________________
>> 
>>Daniel Roy: First, it’s important to highlight that when we say 
>>“probabilistic inference” we are referring to the problem of computing 
>>conditional probabilities, with particular emphasis on the role of conditioning in 
>>Bayesian statistical analysis.
>>Bayesian inference centers around so-called posterior distributions. From a 
>>subjectivist standpoint, the posterior represents one’s updated beliefs after 
>>seeing (i.e., conditioning on) the data. Mathematically, a posterior 
>>distribution is simply a conditional distribution (and every conditional 
>>distribution can be interpreted as a posterior distribution in some 
>>statistical model), and so our study of the computability of conditioning 
>>also bears on the problem of computing posterior distributions, which is 
>>arguably one of the core computational problems in Bayesian analyses.
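
For reference (standard notation, not from the interview), the posterior over 
a parameter \theta given data x is exactly this conditional distribution, as 
given by Bayes' rule:

    p(\theta \mid x) \;=\; \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, \mathrm{d}\theta'}

The discussion below is about when, and whether, this conditional can be 
computed at all.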
>>Second, it’s important to clarify what we mean by “general inference”. In 
>>machine learning and artificial intelligence (AI), there is a long tradition 
>>of defining formal languages in which one can specify probabilistic models 
>>over a collection of variables. Defining distributions can be difficult, but 
>>these languages can make it much more straightforward.
>>The goal is then to design algorithms that can use these representations to 
>>support important operations, like computing conditional distributions. 
>>Bayesian networks can be thought of as such a language: You specify a 
>>distribution over a collection of variables by specifying a graph over these 
>>variables, which breaks down the entire distribution into “local” conditional 
>>distributions corresponding to each node, which are themselves often 
>>represented as tables of probabilities (at least in the case where all 
>>variables take on only a finite set of values). Together, the graph and the 
>>local conditional distributions determine a unique distribution over all the 
>>variables.
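
As a concrete toy example (mine, not from the interview): a two-node discrete 
Bayesian network in Python, with the graph Rain -> WetGrass, a local 
conditional probability table per node, and inference by brute-force 
enumeration.

# Hypothetical two-node Bayesian network: Rain -> WetGrass.
# Local conditional probability tables (CPTs) for each node.
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {
    True:  {True: 0.9, False: 0.1},   # P(WetGrass | Rain=True)
    False: {True: 0.2, False: 0.8},   # P(WetGrass | Rain=False)
}

def joint(rain, wet):
    """The joint distribution factorizes as the product of the local CPTs."""
    return P_rain[rain] * P_wet_given_rain[rain][wet]

# General-purpose (but exponential-in-the-number-of-variables) inference by
# enumeration: P(Rain=True | WetGrass=True).
numerator = joint(True, True)
evidence = sum(joint(r, True) for r in (True, False))
print(numerator / evidence)   # about 0.53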
>>An inference algorithm that supports the entire class of all finite, 
>>discrete Bayesian networks might be called general, but as a class of 
>>distributions, those having finite, discrete Bayesian networks form a rather 
>>small one.
>>In this work, we are interested in the prospect of algorithms that work on 
>>very large classes of distributions. Namely, we are considering the class of 
>>samplable distributions, i.e., the class of distributions for which there 
>>exists a probabilistic program that can generate a sample using, e.g., 
>>uniformly distributed random numbers or independent coin flips as a source of 
>>randomness. The class of samplable distributions is a natural one: indeed it 
>>is equivalent to the class of computable distributions, i.e., those for which 
>>we can devise algorithms to compute lower bounds on probabilities from 
>>descriptions of open sets. The class of samplable distributions is also 
>>equivalent to the class of distributions for which we can compute 
>>expectations from descriptions of bounded continuous functions.
>>The class of samplable distributions is, in a sense, the richest class you 
>>might hope to deal with. The question we asked was: is there an algorithm 
>>that, given a samplable distribution on two variables X and Y, represented by 
>>a program that samples values for both variables, can compute the conditional 
>>distribution of, say, Y given X=x, for almost all values of X? When X takes 
>>values in a finite, discrete set, e.g., when X is binary valued, there is a 
>>general algorithm, although it is inefficient. But when X is continuous, 
>>e.g., when it can take on every value in the unit interval [0,1], then 
>>problems can arise. In particular, there exists a distribution on a pair of 
>>numbers in [0,1] from which one can generate perfect samples, but for which 
>>it is impossible to compute conditional probabilities for one of the 
>>variables given the other. As one might expect, the proof reduces the halting 
>>problem to that of conditioning a specially crafted distribution.
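
A small sketch of both ideas (a hypothetical example of mine, not from the 
paper): a samplable distribution is just a program that turns coin flips and 
uniform random numbers into a sample, and when X is discrete the "general but 
inefficient" algorithm amounts to sampling (X, Y) repeatedly and keeping the 
runs where X matches the observation.

import random

def sample_xy():
    """A samplable joint distribution on (X, Y), given as a sampling program."""
    x = random.random() < 0.5                    # X: a fair coin flip
    y = random.gauss(1.0 if x else -1.0, 1.0)    # Y: Gaussian whose mean depends on X
    return x, y

def condition_y_given_x(sampler, x_observed, n_draws=20_000):
    """Rejection sampling: keep Y only when the sampled X equals the observation."""
    draws = []
    while len(draws) < n_draws:
        x, y = sampler()
        if x == x_observed:
            draws.append(y)
    return draws

ys = condition_y_given_x(sample_xy, True)
print(sum(ys) / len(ys))   # about 1.0, the conditional mean of Y given X=True

When X is continuous, the event X=x has probability zero, every proposal is 
rejected, and this recipe breaks down; the theorem says no algorithm can patch 
it in full generality.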
>>This pathological distribution rules out the possibility of a general 
>>algorithm for conditioning (equivalently, for probabilistic inference). The 
>>paper ends by giving some further conditions that, when present, allow one to 
>>devise general inference algorithms. Those familiar with computing 
>>conditional distributions for finite-dimensional statistical models will not 
>>be surprised that conditions necessary for Bayes’ theorem are one example.
>>
>>________________________________
>> 
>>Luke: In your dissertation (and perhaps elsewhere) you express a particular 
>>interest in the relevance of probabilistic programming to AI, including the 
>>original aim of AI to build machines which rival the general intelligence of 
>>a human. How would you describe the relevance of probabilistic programming to 
>>the long-term dream of AI?
>>________________________________
>> 
>>Daniel: If you look at early probabilistic programming systems, they were 
>>built by AI researchers: De Raedt, Koller, McAllester, Muggleton, Pfeffer, 
>>Poole, Sato, to name a few. The Church language, which was introduced in 
>>joint work with Bonawitz, Mansinghka, Goodman, and Tenenbaum while I was a 
>>graduate student at MIT, was conceived inside a cognitive science laboratory, 
>>foremost to give us a language rich enough to express the range of models 
>>that people were inventing all around us. So, for me, there’s always been a 
>>deep connection. On the other hand, the machine learning community as a whole 
>>is somewhat allergic to AI and so the pitch to that community has more often 
>>been pragmatic: these systems may someday allow experts to conceive, 
>>prototype, and deploy much larger probabilistic systems, and at the same 
>>time, empower a much larger community of nonexperts to use probabilistic 
>>modeling techniques to understand their data. This is the basis for 
>>the DARPA PPAML program, which is funding 8 or so teams to engineer scalable 
>>systems over the next 4 years.
>>From an AI perspective, probabilistic programs are an extremely general 
>>representation of knowledge, and one that identifies uncertainty with 
>>stochastic computation. Freer, Tenenbaum, and I recently wrote a book chapter 
>>for the Turing centennial that uses a classical medical diagnosis example to 
>>showcase the flexibility of probabilistic programs and a general QUERY 
>>operator for performing probabilistic conditioning. Admittedly, the book 
>>chapter ignores the computational complexity of the QUERY operator, and any 
>>serious proposal towards AI cannot do this indefinitely. Understanding when 
>>we can hope to efficiently update our knowledge in light of new observations 
>>is a rich source of research questions, both applied and theoretical, 
>>spanning not only AI and machine learning, but also statistics, probability, 
>>physics, theoretical computer science, etc.
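
The flavor of QUERY can be conveyed with a toy diagnosis model (a hypothetical 
sketch of mine, not the book chapter's code): a probabilistic program samples 
a possible world, and QUERY conditions by keeping only the worlds consistent 
with the observations.

import random

def model():
    """Hypothetical generative model: one disease and two symptoms."""
    flu = random.random() < 0.10
    fever = random.random() < (0.80 if flu else 0.05)
    cough = random.random() < (0.60 if flu else 0.10)
    return {"flu": flu, "fever": fever, "cough": cough}

def query(generate, observed, n_accepted=5_000):
    """Guess-and-check conditioning: keep sampled worlds that match the data."""
    accepted = []
    while len(accepted) < n_accepted:
        world = generate()
        if all(world[k] == v for k, v in observed.items()):
            accepted.append(world)
    return accepted

worlds = query(model, {"fever": True, "cough": True})
print(sum(w["flu"] for w in worlds) / len(worlds))   # P(flu | fever, cough), about 0.9

The expected number of guesses per accepted world is one divided by the 
probability of the observation, which is exactly the computational issue the 
next answer returns to.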
>>________________________________
>> 
>>Luke: Is it fair to think of QUERY as a “toy model” that we can work with in 
>>concrete ways to gain more general insights into certain parts of the 
>>long-term AI research agenda, even though QUERY is unlikely to be directly 
>>implemented in advanced AI systems? (E.g. that’s how I think of AIXI.)
>>________________________________
>> 
>>Daniel: I would hesitate to call QUERY a toy model. Conditional probability 
>>is a difficult concept to master, but, for those adept at reasoning about the 
>>execution of programs, QUERY demystifies the concept considerably. QUERY is 
>>an important conceptual model of probabilistic conditioning.
>>That said, the simple guess-and-check algorithm we present in our Turing 
>>article runs in time inversely proportional to the probability of the 
>>event/data on which one is conditioning. In most statistical settings, the 
>>probability of a data set decays exponentially towards 0 as a function of the 
>>number of data points, and so guess-and-check is only useful for reasoning 
>>with toy data sets in these settings. It should come as no surprise to hear 
>>that state-of-the-art probabilistic programming systems work nothing like 
>>this.
>>On the other hand, QUERY, whether implemented in a rudimentary fashion or 
>>not, can be used to represent and reason probabilistically about arbitrary 
>>computational processes, whether they are models of the arrival time of spam, 
>>the spread of disease through networks, or the light hitting our retinas. 
>>Computer scientists, especially those who might have had a narrow view of the 
>>purview of probability and statistics, will see a much greater overlap 
>>between these fields and their own once they understand QUERY.
>>To those familiar with AIXI, the difference is hopefully clear: QUERY 
>>performs probabilistic reasoning in a model given as input. AIXI, on the 
>>other hand, is itself a “universal” model that, although not computable, 
>>would likely predict (hyper)intelligent behavior, were we (counterfactually) 
>>able to perform the requisite probabilistic inferences (and feed it enough 
>>data). Hutter gives an algorithm implementing an approximation to AIXI, but 
>>its computational complexity still scales exponentially in space. AIXI is 
>>fascinating in many ways: If we ignore computational realities, we get a 
>>complete proposal for AI. On the other hand, AIXI and its approximations take 
>>maximal advantage of this computational leeway and are, therefore, ultimately 
>>unsatisfying. For me, AIXI and related ideas highlight that AI must be as 
>>much a study of the particular as it is of the universal. Which potentially 
>>unverifiable, but useful, assumptions will enable us to efficiently 
>>represent, update, and act upon knowledge under uncertainty?
>>________________________________
>> 
>>Luke: You write that “AI must be as much a study of the particular as it is 
>>of the universal.” Naturally, most AI scientists are working on the 
>>particular, the near term, the applied. In your view, what are some other 
>>examples of work on the universal, in AI? Schmidhuber’s Gödel machine comes 
>>to mind, and also some work that is as likely to be done in a logic or formal 
>>philosophy department as a computer science department — e.g. perhaps work on 
>>logical priors — but I’d love to hear what kinds of work you’re thinking of.
>>________________________________
>> 
>>Daniel: I wouldn’t equate any two of the particular, near-term, or applied. 
>>By the word particular, I am referring to, e.g., the way that our environment 
>>affects, but is also affected by, our minds, especially through society. More 
>>concretely, both the physical spaces in which most of us spend our days and 
>>the mental concepts we regularly use to think about our daily activities are 
>>products of the human mind. But more importantly, these physical and mental 
>>spaces are necessarily ones that are easily navigated by our minds. The 
>>coevolution by which this interaction plays out is not well studied in the 
>>context of AI. And to the extent that this cycle dominates, we would expect a 
>>universal AI to be truly alien. On the other hand, exploiting the constraints 
>>of human constructs may allow us to build more effective AIs.
>>As for the universal, I have an interest in the way that noise can render 
>>idealized operations computable or even efficiently computable. In our work 
>>on the computability of conditioning that came up earlier in the discussion, 
>>we show that adding sufficiently smooth independent noise to a random 
>>variable allows us to perform conditioning in situations where we would not 
>>have been able to otherwise. There are examples of this idea elsewhere. For 
>>example, Braverman, Grigo, and Rojas study noise and intractability in 
>>dynamical systems. Specifically, they show that computing the invariant 
>>measure characterizing the long-term statistical behavior of dynamical 
>>systems is not possible in general. The roadblock is the computational power of the 
>>dynamical system itself. The addition of a small amount of noise to the 
>>dynamics, however, decreases the computational power of the dynamical system, 
>>and suffices to make the invariant measure computable. In a world subject to 
>>noise (or, at least, well modeled as such), it seems that many theoretical 
>>obstructions melt away.
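
One way to see the point in code (a hypothetical sketch of the general idea, 
not the paper's construction): if we observe a noisy version Z = X + noise 
with a smooth, known noise density, every prior sample acquires an explicit 
likelihood, so conditioning can be done by importance weighting even though 
conditioning on an exact value of X is the problematic operation.

import math, random

def sample_xy():
    """Prior sampler: X uniform on [0, 1], Y noisily tracks X."""
    x = random.random()
    y = x + random.gauss(0.0, 0.1)
    return x, y

def noise_density(z, x, sigma=0.05):
    """Density of the observation Z = X + Gaussian(0, sigma) noise, evaluated at z."""
    return math.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

z_observed = 0.7
samples = [sample_xy() for _ in range(50_000)]
weights = [noise_density(z_observed, x) for x, _ in samples]
posterior_mean_y = sum(w * y for w, (_, y) in zip(weights, samples)) / sum(weights)
print(posterior_mean_y)   # about 0.7: the smoothed observation pins down X, and Y tracks X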
>>________________________________
>> 
>>Luke: Thanks, Daniel!
>>The post Daniel Roy on probabilistic programming and AI appeared first on 
>>Machine Intelligence Research Institute. 
>>
>>
>>[Link: “Extinctions during human era one thousand times more than before,” 
>>www.sciencedaily.com]
>> 
>
-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
