Matt,

We argue about lots of things, but here we see two sides of the very same
coin.

You can't predict, guide, or design a process until you have a good grasp
of the basic principles that govern it. I have been pointing out that they
don't yet have enough to run with, and I have proffered things, like the
apparent need for a dP/dt representation to implement temporal learning, as
"proof" that there just wasn't enough to run with yet.

My own aim has been to attract interest in discovering the missing
principles, as some have immediate value (e.g., I recently applied one to
reverse the glaucoma blind spots and save the vision in my own right eye).
Once there is a *process* in place to discover missing principles, the rest
should eventually follow, including the missing progress benchmarks you
mentioned. However, I have yet to find *anyone* else (apparently even
including you) who believes this would be a fruitful enough area to invest
any of their own effort in. Until that happens, there can be no AGI of this
sort.

Save your breath! These guys are on a "Rapture of the Nerds" religious
quest to create their very own God, so no simple in-your-face logic is
going to deter them from a path that they absolutely KNOW MUST SUCCEED
despite obvious evidence to the contrary. This apparently includes
willfully ignoring the lack of essential pieces of the puzzle, because
filling in those missing pieces might take away from the "inertia" of their
efforts. Instead, that same inertia is about to carry yet another bunch of
lemmings off the Perceptron cliff.


On the news front, our politics are different but our goals are similar. I
am preparing a patent to be filed in early March that shows how to deeply
parse text orders of magnitude faster than other methods, which is a MAJOR
missing piece to an Internet AGI. It also shows other important pieces of
technology needed to make it pervasive throughout the Internet, regardless
of whether others choose to play along with my plans.

Yes, I agree with you that in a perfect world the government would give me
a lab and staff to continue this, while implementing this for the benefit
of the world. However, until that perfect world comes about, we must work
within this, "the best government that money can buy". Hence, I plan to
turn the patent into money, and in the process guide its development in
rational directions.

I am looking for people to "kick the tires" on this, while keeping it
confidential for the next 3 months, whereupon I will simultaneously patent
and publish it everywhere I can, including here on this forum.

Would you (or anyone else here) be willing to sign an NDA to see what this
is all about, and maybe in the process help guide it in rational directions?

Any interest?

Steve
=================
On Mon, Dec 24, 2012 at 12:43 PM, Matt Mahoney <[email protected]> wrote:

> Ben's tweet of Eliezer Yudkowsky's dire forecast made 8 years ago
> seems rather humorous today.
>
> http://acceleratingfuture.com/sl4/archive/0501/10611.html
>
> As does Ben's response.
>
> http://acceleratingfuture.com/sl4/archive/0501/10613.html
>
> Really, OpenCog (then Novamente) is going to recursively self improve
> and kill us all?
>
> So what went wrong?
>
> I've been lurking on the OpenCog mailing list for a couple of years.
> There is a lot of software development being done. But it is hard to
> tell if any real progress is being made, because there is still no test
> set (nor has there ever been one) by which progress could be measured.
> Ben has mentioned a few ideas for tests, like getting an online
> university degree, or playing with a box of toys. But we aren't there
> yet. I can imagine a potential investor asking when will we get there,
> and the answer will either be some made-up date or "I don't know". And
> we know what happens with made-up dates.
>
> Why don't we know? Because there are no tests of incremental progress.
> So as far as anyone can tell, there has been no progress since 2005.
> During the 3 years it took to build Watson, the team tested it on
> Jeopardy games and watched its precision (at 50% recall rate)
> gradually improve from 15% accuracy to 90%, the level they needed to
> beat the best humans. Every 3 months, they saw a 10% increase and knew
> they were on the right track and could even forecast a completion
> date. What does OpenCog have that is equivalent to this?
>
> Here is another example. How much more knowledge does Cyc need to add
> to its sea of assertions to "break the software brittleness
> bottleneck"? That was the goal in 1984 when the project was started.
> Of course, nobody knows. Why not? Because there is no test for
> measuring progress.
>
> I've made a rough draft of the cost of AGI which many of you have already
> read.
>
>
> https://docs.google.com/document/d/1cQiaH81rB5l9eLRYZFSi_tOLzRzOsY8wVruimPUWybg/edit
>
> If you think this is wrong, then please come up with some tests to
> prove it. Here are some simple ones for now:
>
> - Fill in missing words in text, with the goal of human-level
> accuracy (can RelEx or MOSES do this?).
>
> - Recognize printed words or common objects in images with human-level
> accuracy (can DeSTIN do this?).
>
> - Teach a robot to throw a ball.
>
> These are nice, simple tests that give a numeric answer. But you will
> notice, in preparing the test set, that even this step is not trivial.
> Then maybe you can tell me what it will cost to solve these problems,
> in terms of hardware, software, and training data.
>
>
> --
> -- Matt Mahoney, [email protected]
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/10443978-6f4c28ac
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>
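Matt's first test above (predicting missing words in text) can be made concrete with a small cloze-scoring harness. The sketch below is only illustrative: the function names and the deliberately weak unigram baseline are my own assumptions, not part of RelEx, MOSES, or any other existing component. Any model exposing a predict(context) function could be scored the same way, giving exactly the kind of incremental numeric benchmark Matt describes.

```python
# Minimal cloze-test benchmark: mask every Nth word of a test text and
# score a predictor's accuracy at filling the masked word back in.
# All names here are illustrative; the unigram baseline is a floor to
# measure real systems against, not a serious predictor.
from collections import Counter

def make_cloze_items(text, every=5):
    """Return (context_words, answer) pairs, masking every Nth word."""
    words = text.split()
    items = []
    for i in range(every - 1, len(words), every):
        context = words[:i] + ["___"] + words[i + 1:]
        items.append((context, words[i]))
    return items

def unigram_predictor(train_text):
    """Baseline: ignore context and always guess the most frequent training word."""
    most_common = Counter(train_text.split()).most_common(1)[0][0]
    return lambda context: most_common

def cloze_accuracy(predict, items):
    """Fraction of masked words the predictor restores exactly."""
    if not items:
        return 0.0
    correct = sum(1 for ctx, ans in items if predict(ctx) == ans)
    return correct / len(items)

train = "the cat sat on the mat and the dog sat on the rug"
test = "the bird sat on the fence and the cat sat on the wall"
items = make_cloze_items(test, every=5)
baseline = unigram_predictor(train)
print(cloze_accuracy(baseline, items))  # → 0.5
```

Running this quarterly on a fixed held-out test set, the way the Watson team tracked Jeopardy precision, would give the progress curve that Matt argues is missing.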



-- 
Full employment can be had with the stroke of a pen. Simply institute a
six-hour workday. That will easily create enough new jobs to bring back
full employment.



