Re: [agi] Re: um... ASI problem, thoughts?

2021-09-30 Thread TimTyler
On 2021-09-21 10:04:AM, Matt Mahoney wrote: Quantum operations have to be time reversible. For example, you can flip qubits or swap 2 qubits or conditionally flip or swap qubits depending on another. All of these operations can be run in reverse to get the previous state. Writing to memory is
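A minimal sketch (not from the thread) of the reversibility point above: the operations Matt lists - flip a qubit (X), swap two qubits (SWAP), conditionally flip one on another (CNOT) - are unitary matrices, and each of these particular gates is its own inverse, so applying it a second time recovers the previous state.

import numpy as np

X = np.array([[0, 1],
              [1, 0]])              # flip one qubit
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])     # swap two qubits
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])     # flip the second qubit iff the first is 1

for gate in (X, SWAP, CNOT):
    n = len(gate)
    assert np.allclose(gate @ gate.conj().T, np.eye(n))   # unitary, hence invertible
    assert np.allclose(gate @ gate, np.eye(n))            # self-inverse: reversing = repeating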

Re: [agi] AGI discussion group, Sep 10 7AM Pacific: Characterizing and Implementing Human-Like Consciousness

2021-09-14 Thread TimTyler
On 2021-09-09 23:21:PM, Matt Mahoney wrote: It would be existentially dangerous to make AGI so much like humans that we give human rights to competing machines more powerful than us. Not having much in the way of human rights did not prevent slaves from thriving during the era of slavery.

Re: [agi] Re: AGI discussion group, Sep 10 7AM Pacific: Characterizing and Implementing Human-Like Consciousness

2021-09-14 Thread TimTyler
On 2021-09-10 23:25:PM, Matt Mahoney wrote: Evolution only cares about how many offspring you have, but motivation to play and learn skills makes you more likely to survive and have more. I don't think this is a trait we want in machines that are supposed to serve us. The goal is not runaway

[agi] Is the Technological Singularity terminology clear?

2021-04-05 Thread TimTyler
On 2021-03-09 13:28:PM, Ben Goertzel wrote or quoted: Also, "after the singularity" is a logical contradiction. The singularity is the point where the rate of recursive self improvement goes to infinity. It is infinitely far into the future measured in perceptual time or in number of

Re: [agi] Singularity terminology

2021-04-05 Thread TimTyler
On 2021-03-10 09:11:AM, Matt Mahoney wrote: It seems that Good and Vinge do use "singularity" in the mathematical sense, although that actually prevents us from predicting one, as Vinge calls it an "event horizon on the future". In pop physics, an event horizon is different from a singularity.

Re: [agi] There is such a thing as a Free Lunch

2020-09-27 Thread TimTyler
On 2020-09-27 08:50:AM, Matt Mahoney wrote: On Sat, Sep 26, 2020, 10:32 PM TimTyler <t...@tt1.org> wrote: On 2020-09-22 12:45:PM, Matt Mahoney wrote: > The no free lunch theorem is based on the false premise that it is > possible to have a uniform probabili

Re: [agi] There is such a thing as a Free Lunch

2020-09-26 Thread TimTyler
On 2020-09-22 12:45:PM, Matt Mahoney wrote: The no free lunch theorem is based on the false premise that it is possible to have a uniform probability distribution over an infinite set. The converse proves Occam's Razor. I don't think that's right. I looked here:

Re: [agi] Formal theory of simplicity

2020-09-04 Thread TimTyler
On 2020-09-04 12:19:PM, Ben Goertzel wrote: The paper addresses what to do about the issue of there not being any single completely satisfactory metric of simplicity/complexity. It proposes a solution: use an array of such metrics and combine them using Pareto optimality. I think that is
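A toy sketch (mine, not code from the paper) of the Pareto idea: score each candidate with several simplicity metrics and keep every candidate that no other candidate beats on all metrics at once. The two proxy metrics below are placeholders, not the measures the paper actually uses.

import zlib

def gzip_size(s):                      # proxy metric 1: compressed length
    return len(zlib.compress(s.encode()))

def raw_length(s):                     # proxy metric 2: plain length
    return len(s)

def pareto_front(candidates, metrics):
    scored = [(c, [m(c) for m in metrics]) for c in candidates]
    front = []
    for c, sc in scored:
        dominated = any(all(o <= s for o, s in zip(osc, sc)) and
                        any(o < s for o, s in zip(osc, sc))
                        for _, osc in scored)
        if not dominated:
            front.append(c)
    return front

print(pareto_front(["abababababab", "abcdefabcdef", "aaaaaaaaaaaa"],
                   [gzip_size, raw_length]))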

Re: [agi] Formal theory of simplicity

2020-09-04 Thread TimTyler
On 2020-09-04 15:24:PM, Matt Mahoney wrote: The paper lacks an experimental results section. So I don't know how this simplicity measure compares to Solomonoff induction. The paper does discuss some simplicity measures, but it is more like a framework for combining simplicity measures.

Re: [agi] Formal theory of simplicity

2020-09-04 Thread TimTyler
On 2020-09-03 20:32:PM, Ben Goertzel wrote: Radical overhaul of my paper on the formal theory of simplicity (now saying a little more about pattern, multisimplicity, multipattern, and the underlying foundations of cognitive hierarchy and heterarchy and their synergy...)

Re: [agi] Formal theory of simplicity

2020-09-03 Thread TimTyler
On 2020-09-03 20:32:PM, Ben Goertzel wrote: Radical overhaul of my paper on the formal theory of simplicity (now saying a little more about pattern, multisimplicity, multipattern, and the underlying foundations of cognitive hierarchy and heterarchy and their synergy...)

Re: [agi] How to Code AGI

2020-04-20 Thread TimTyler
On 2020-04-18 19:50:PM, Matt Mahoney wrote: > A self improving agent recursively creates a more > intelligent version of itself with no external input. It is an odd definition. We live in an age of "big data". We have a massive glut of sensors and sensor input. The blind and deaf agent would seem

Re: [agi] Re: My data compressor is rising from the deep

2020-04-13 Thread TimTyler
On 2020-04-11 22:01:PM, Matt Mahoney wrote: Really you should learn C and C++ because that's what most data compression developers use. You need to be able to read their code. This is an AGI mailing list. A lot of developers use Java and Python. Many do so precisely to avoid irrelevant

Re: [agi] Re: My data compressor is rising from the deep

2020-04-11 Thread TimTyler
On 2020-04-06 16:51:PM, Matt Mahoney wrote: I also suggest learning C and C++. Python is interpreted, so very slow. There are Python compilers - e.g. see: https://www.pypy.org/

Re: [agi] The limitations of the validity of compression.

2020-03-30 Thread TimTyler
On 2020-03-21 15:14:PM, Matt Mahoney wrote: A lossless compression contest on video would result in contestants spending 99.% of their efforts on compressing data that the eye and brain throw away, assuming the payoff is the same for both types. Noise is not just white noise, but all the

Re: [agi] OpenAI is not so open.

2020-02-24 Thread TimTyler
On 2020-02-23 20:02:PM, Matt Mahoney wrote: Elon Musk takes Yudkowsky's theory seriously that the first AGI to achieve human level intelligence will launch a singularity. OpenAI founders believe that too, which is why they are racing to be first. Musk worries that their secrecy risks getting

Re: [agi] Re: Test your knowledge of probability theory

2020-02-09 Thread TimTyler
On 2020-02-09 13:19:PM, Matt Mahoney wrote: On Sat, Feb 8, 2020, 5:28 PM Matt Mahoney wrote: 3. (Bostrom's simulation argument). There is a 1% chance that sometime in the next billion years we will create a computer simulation of the present

Re: [agi] Re: General Intelligence vs. no-free-lunch theorem

2020-02-09 Thread TimTyler
On 2020-02-07 19:30:PM, Matt Mahoney wrote: On Fri, Feb 7, 2020, 7:22 AM TimTyler <t...@tt1.org> wrote: We don't know that "Occam's Razor drives physics". That's a hypothesis, and while we can't get out of our local region and escape from what ap

Re: [agi] Re: General Intelligence vs. no-free-lunch theorem

2020-02-07 Thread TimTyler
On 2020-02-06 10:50:AM, Matt Mahoney wrote: On Wed, Feb 5, 2020, 7:25 PM TimTyler <t...@tt1.org> wrote: On 2020-02-05 13:22:PM, Matt Mahoney wrote or quoted: What do you think is the reason Occam's Razor works, if not math? Well, math permits worlds

Re: [agi] Re: General Intelligence vs. no-free-lunch theorem

2020-02-05 Thread TimTyler
On 2020-02-03 13:38:PM, Matt Mahoney wrote: Why does Occam's Razor exist? Because you can't have a uniform distribution over an infinite set. All possible distributions favor short strings over long, or small integers over large. For any string, you have an infinite set of longer and less
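Spelled out (my wording, not Matt's), the step behind "you can't have a uniform distribution over an infinite set" is: if $P(n) = c$ for every positive integer $n$, then

$$\sum_{n=1}^{\infty} P(n) = \begin{cases} 0 & \text{if } c = 0,\\ \infty & \text{if } c > 0,\end{cases}$$

so the total is never 1 and no uniform distribution exists. Any genuine distribution therefore has $P(n) \to 0$, so for each outcome only finitely many others are at least as probable: whatever the distribution, most of the mass sits on a finite set of short strings or small integers.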

Re: [agi] Re: Information MetaCriterion

2019-11-21 Thread TimTyler
On 2019-11-21 11:46:AM, James Bowery wrote: I, quite deliberately, did not mention "Solomonoff Induction" as an information criterion for model selection, precisely because it is not computable.   The point of my conjecture is that there is a very good reason to select "the smallest executable
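A toy illustration (mine, not James's proposal) of using archive size as the selection criterion: score each candidate transform of the data by its compressed size and pick the smallest. The true size of "the smallest executable archive" is uncomputable; zlib here is only a crude stand-in, and the data set and candidate transforms are made up.

import zlib

def archive_size(data: bytes) -> int:
    # crude proxy for the size of a self-extracting archive of the data
    return len(zlib.compress(data, 9))

data = bytes(range(256)) * 20                         # hypothetical data set
candidates = {
    "no model":          data,                        # archive the raw data
    "first differences": bytes((b - a) % 256 for a, b in zip(data, data[1:])),
}
sizes = {name: archive_size(blob) for name, blob in candidates.items()}
print(sizes, "-> pick", min(sizes, key=sizes.get))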

Re: [agi] Re: Information MetaCriterion

2019-11-21 Thread TimTyler
On 2019-11-21 11:46:AM, James Bowery wrote: The point of my conjecture is that there is a very good reason to select "the smallest executable archive of the data" as your information criterion over the other information criteria -- and it has to do with the weakness of "lossy compression" as

Re: [agi] Re: Ecophagy

2019-11-19 Thread TimTyler
On 2019-11-19 22:20:PM, immortal.discover...@gmail.com wrote: The human body is made of little cells. Our skin, organs, brain, nerves. Bones are really hard though. Is the most optimal form of Earth going to be a nanobot blob if they can't be hard as bone? Yes. They can connect by extending

[agi] Ecophagy

2019-11-19 Thread TimTyler
On 2019-11-08 19:34:PM, Matt Mahoney wrote: self-replicating nanotechnology has the potential to outcompete DNA-based life. This requires great care because once the technology is cheap, anyone could produce malicious replicators the same way that anyone with a computer can write a virus or

Re: [agi] The Singularity is not near.

2019-11-19 Thread TimTyler
On 2019-11-18 12:45:PM, Matt Mahoney wrote: The premise of the Singularity is that if humans can create smarter than human intelligence (meaning faster or more successful at achieving goals), then so can it, only faster. That will lead to an intelligence explosion because each iteration will

Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread TimTyler
On 2019-11-08 15:58:PM, Matt Mahoney wrote: You can choose to model I/O peripherals as either part of the agent or part of the environment. Likewise for an input delay line. In one case it lowers intelligence and in the other case it doesn't. Thinking about it in computer science terms blurs

Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread TimTyler
On 2019-11-08 17:53:PM, Matt Mahoney wrote: > we can approximate reward as dollars per hour over a set of > real environments of practical value. In that case, it does > matter how well you can see, hear, walk, and lift heavy objects. > Whether you think that's fair or not, it matters for AGI

Re: [agi] Deviations from generality

2019-11-09 Thread TimTyler
On 2019-11-08 20:34:PM, rounce...@hotmail.com wrote: The thing about the adversary controlling the environment around the agent: his brain is working with the same physics as your feet hitting the floor, but it's not simulatable in a physics system, because it's not mechanical to start with,

Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread TimTyler
On 2019-11-08 00:15:AM, TimTyler wrote: Another thread recently discussed Legg's 2007 definition of intelligence - i.e. "Intelligence measures an agent’s ability to achieve goals in a wide range of environments". I have never been able to swallow this proposed definition becau

Re: [agi] Re: Against Legg's 2007 definition of intelligence

2019-11-08 Thread TimTyler
On 2019-11-08 00:57:AM, WriterOfMinds wrote: Do you think the definition works any better if we modify or clarify it by defining all tools (including body parts) to be part of the environment? We could say that "the environment" is presumed to include all non-psychological attributes - such

[agi] Against Legg's 2007 definition of intelligence

2019-11-07 Thread TimTyler
Another thread recently discussed Legg's 2007 definition of intelligence - i.e. "Intelligence measures an agent’s ability to achieve goals in a wide range of environments". I have never been able to swallow this proposed definition because I think it leaves out something important, namely: the

[agi] Deviations from generality

2019-11-07 Thread TimTyler
Hi. I am giving a talk on machine intelligence next week. I have a slide about "generality" and I have an associated question that I thought I would try running by you guys. My question is basically: what do you think of this presentation, and how can I improve on it? The presentation goes

Re: [agi] Re: Whats everyones goal here?

2019-10-16 Thread TimTyler
On 2019-10-15 16:18:PM, Matt Mahoney wrote: Prediction markets are more interesting. In the last election, the markets gave Trump a 30% chance of winning even though every poll said he would lose. A more interesting result is that you can't predict future prices to profit in any market

Re: [agi] The Job market.

2019-10-06 Thread TimTyler
On 2019-10-06 06:05:AM, Matt Mahoney wrote: > On Sun, Oct 6, 2019, 2:59 AM Ben Goertzel wrote or quoted Matt as writing: > It probably takes a few hundred bits to describe the laws of physics. > > Hmm, that seems very few, just taking a look at the Standard

Re: [agi] Genetic evolution of logic rules experiment

2019-09-24 Thread TimTyler
On 2019-09-24 18:44:PM, YKY (Yan King Yin, 甄景贤) wrote: On Wed, Sep 25, 2019 at 12:03 AM doddy wrote: how efficient is it compared to self-supervised learning? You mean unsupervised? I am not seeing much of a difference between the 2 notions.

Re: [agi] Simulation

2019-09-20 Thread TimTyler
On 2019-09-02 16:39:PM, Matt Mahoney wrote: Here are at least 4 possibilities, listed in decreasing order of complexity, and therefore increasing likelihood if Occam's Razor holds outside the simulation. 1. Only your brain exists. All of your sensory inputs are simulated by a model of a

[agi] Unsolicited Idea Submission Policy

2019-02-28 Thread TimTyler
On 2019-02-27 10:49:AM, Matt Mahoney wrote: Nobody wants your amazing ideas. Microsoft has made this an explicit policy. From https://docs.microsoft.com/en-us/previous-versions/ms840423(v=msdn.10) Apple has a nearly identical policy.

[agi] Drexler: Reframing Superintelligence

2019-02-28 Thread TimTyler
Keith Henson recently drew my attention to this 210-page report by Eric Drexler:  * https://www.fhi.ox.ac.uk/reframing/ Title is: "Reframing Superintelligence: Comprehensive AI Services as General Intelligence" Drexler argues against the "AGI" terminology, promoting his own "Comprehensive AI

Re: [agi] The future of AGI

2019-02-13 Thread TimTyler
Resending: this message never made it to the list. On 2019-02-04 19:41:PM, TimTyler wrote: On 2019-02-03 18:28:PM, pe...@optimal.org wrote: I think all of the theoretical calculations of processing power are widely off the mark – we’re not trying to reverse-engineer a bird – just need

Re: [agi] The future of AGI

2019-02-13 Thread TimTyler
On 2019-02-13 13:25:PM, Matt Mahoney wrote: The tipping point would be where machines are earning half of the world's income (for their owners), seen as a doubling of world GDP from a baseline agricultural society. This happened sometime in the 19th century around the inventions of the railroad

Re: [agi] The future of AGI

2019-02-13 Thread TimTyler
On 2019-02-13 13:25:PM, Matt Mahoney wrote: Then where is our singularity? Well, we do have a super-exponential growth rate of world knowledge and computing power, and have for centuries. [...] I've been making this argument for the last decade - e.g. see:  *
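One standard way (a sketch of my own, not a claim about the argument linked above) to make "super-exponential" precise is to let the growth rate itself rise with the current level $x$ of knowledge or computing power:

$$\frac{dx}{dt} = k\,x^{1+a}, \qquad a > 0,$$

which integrates to $x(t) = \left(x_0^{-a} - a k t\right)^{-1/a}$ and so diverges at the finite time $t^{*} = 1/(a k x_0^{a})$; ordinary exponential growth is the boundary case $a = 0$, which never blows up.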

Re: [agi] The future of AGI

2019-02-13 Thread TimTyler
Resending: this message never made it to the list. On 2019-02-04 19:32:PM, TimTyler wrote: On 2019-02-03 22:07:PM, Matt Mahoney wrote: Copying a bit requires deleting the old value. So Landauer's limit applies. That's not correct. Copying a bit doesn't require deleting the old bit
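A minimal sketch (mine, not from the thread) of the reversible-copy point: a controlled-NOT onto a target that is known to be 0 copies the source bit without erasing anything, and running the same operation again undoes the copy. Landauer's kT ln 2 cost is charged for erasing a bit, not for a copy of this kind.

def cnot(control: int, target: int) -> tuple:
    return control, target ^ control          # flip target iff control is 1

for src in (0, 1):
    copied = cnot(src, 0)                     # (src, src): the bit is now held in two places
    undone = cnot(*copied)                    # the same gate run again restores (src, 0)
    assert copied == (src, src) and undone == (src, 0)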

Re: [agi] The future of AGI

2019-02-03 Thread TimTyler
On 2019-02-03 10:19:AM, Matt Mahoney wrote: The problem is power consumption. Mechanical adding machines are older than vacuum tubes and would have very low power consumption if we could shrink them to molecular size. Copying bits in DNA, RNA, and protein costs less than a millionth as much