Re: [agi] Colin Hales mention in Salon editorial

2021-05-12 Thread John Rose
On Tuesday, May 11, 2021, at 7:06 PM, Colin Hales wrote: > Currently I am battling EM noise from the massive TV towers a few km from here. > Kindly stop misrepresenting things. You have no clue what I am doing and are not qualified to comment. Advocatus Diaboli

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread immortal . discoveries
We may not be understanding you, so it would only be fair for you to make a new thread and explain the whole thing in under 100 words, in clear, relatable language. I can do that easily. Many others and I don't connect with your path, or have forgotten what you said, so it is better you make a small clear separate

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread Colin Hales
Matt, The EM fields are not noise. They are chaotic, complex and deeply entwined in function. Indeed central to function. I have a theory. I have a hypothesis. I am doing the experiments. I have a concept design for the chip. The central device is on the floor next to me and testing is in

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread immortal . discoveries
In the end, patterns are most of AI. And if you use randomness, you must know why, too.

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread immortal . discoveries
We can only use patterns to build something useful at later times. DALL-E can answer trillions of inputs accurately. Hammers can solve hundreds of problems, ones that reoccur too. Even acting randomly so a sniper can't target you easily is a pattern...disallowing his patterns to exist or

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread Mike Archbold
On 5/11/21, Matt Mahoney wrote: > Maybe electromagnetic noise from neurons is significant. So what? Matt: Look at your sentence: "maybe electromagnetic noise is significant." That's why we are talking about it. If it is part of the overall *structure* of the brain, and thus of the mind, we need

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread Matt Mahoney
Maybe electromagnetic noise from neurons is significant. So what? If noise causes nearby neurons to fire, we can still model the effect using synaptic weights. Normal training will compensate for the effect. I don't know what Colin expects to find from his Xchip when he doesn't even know what it
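A minimal sketch of that claim, assuming the noise can be treated as an extra additive input to a toy logistic neuron (the data, names, and noise model here are hypothetical illustrations, not Colin's Xchip or anyone's actual system):

    # Illustrative sketch: if "EM noise" acts like an additive term in a
    # neuron's summed input, gradient training can fold its average effect
    # into the learned bias/weights. Everything here is a toy assumption.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy data: 2 inputs, target follows a fixed linear threshold rule.
    X = rng.normal(size=(1000, 2))
    y = (X @ np.array([1.5, -2.0]) + 0.5 > 0).astype(float)

    w = np.zeros(2)
    b = 0.0
    noise_mean = 0.3     # hypothetical constant drift from the "field noise"
    lr = 0.1

    for epoch in range(200):
        noise = noise_mean + 0.05 * rng.normal(size=len(X))  # per-sample noise
        p = sigmoid(X @ w + b + noise)        # noise enters the neuron's sum
        grad_z = p - y                        # logistic loss gradient
        w -= lr * (X.T @ grad_z) / len(X)
        b -= lr * grad_z.mean()

    # After training, the bias has drifted to roughly offset the mean noise
    # term, which is the sense in which normal training can compensate.
    print("learned bias:", b, "(offsets mean noise of", noise_mean, ")")

Here the learned bias absorbs the average noise contribution; whether real EM coupling in the brain behaves like a simple additive input is exactly the point under dispute in this thread.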

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread Dorian Aur
"The first step is to shape/reshape the electromagnetic field and its interaction within a biological structure, see the general hybrid model https://arxiv.org/ftp/arxiv/papers/1411/1411.5224.pdf *That was a very pleasurable read, my qualia’s gettin all jazzed up on that one...*" Another

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread immortal . discoveries
One way would be wrong; the other is the better way. This is made obvious above. Don't say vacation.

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread immortal . discoveries
It's up to you what you do when you see "cat": do you predict the most common next thing, or say LET'S GO ON A VACATION, oddly, out loud? You could also look at "cat" as 'at' or 'ca' or 'c_t' and combine predictions from them all.
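A toy sketch of that combining idea, assuming a simple count-based predictor over a made-up corpus (the corpus, the '_' wildcard convention, and the uniform mixing rule are all hypothetical choices for illustration):

    # Toy illustration: predict the next character after "cat" by averaging
    # next-character distributions from several partial views of the context.
    from collections import Counter

    corpus = "the cat sat on the mat and the cat ate a rat"

    def next_char_counts(corpus, pattern):
        """Count characters that follow any substring matching `pattern`,
        where '_' in the pattern matches any single character."""
        counts = Counter()
        n = len(pattern)
        for i in range(len(corpus) - n):
            window = corpus[i:i + n]
            if all(p == '_' or p == c for p, c in zip(pattern, window)):
                counts[corpus[i + n]] += 1
        return counts

    def mix_predictions(corpus, patterns):
        """Average the normalized next-character distribution of each pattern."""
        mixed = Counter()
        for pat in patterns:
            counts = next_char_counts(corpus, pat)
            total = sum(counts.values())
            if total:
                for ch, c in counts.items():
                    mixed[ch] += c / total / len(patterns)
        return mixed

    # Partial views of the context "cat": full match, suffix, prefix, wildcard.
    views = ["cat", "at", "ca", "c_t"]
    print(mix_predictions(corpus, views).most_common(3))

Each partial view of "cat" contributes its own next-character distribution, and averaging them is one simple way to combine the predictions; context-mixing compressors do something similar with learned weights instead of a uniform average.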

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread John Rose
On Monday, May 10, 2021, at 11:13 PM, immortal.discoveries wrote: > And yeah, that's all a brain can do is predict, only patterns exist in data. Some patterns exist in the data and some exist in the perceiver like those patterns where one brain perceives one image and another brain perceives an

Re: [agi] Colin Hales mention in Salon editorial

2021-05-11 Thread John Rose
On Monday, May 10, 2021, at 10:48 PM, Mike Archbold wrote: > Plainly a lot happens at the cell level with electric field action. Ions are moving around, eg into cells, subject to electric fields. What happens at a macro brain level or the middle stages with EMF? Why is there a presumption that

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread immortal . discoveries
I showed above how prediction on text works. And yeah, that's all a brain can do is predict, only patterns exist in data. If Colin doesn't outline such mechanisms, then there is no intelligence/AI. It's that simple.

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread Mike Archbold
On 5/10/21, Matt Mahoney wrote: > On Mon, May 10, 2021, 4:16 PM Mike Archbold wrote: >> I can't speak for Colin but I do know that he isn't implementing algorithms > Exactly. He is proposing an "Xchip" that reproduces the electrical noise produced by real neurons. What he isn't

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread Matt Mahoney
On Mon, May 10, 2021, 4:16 PM Mike Archbold wrote: > I can't speak for Colin but I do know that he isn't implementing algorithms Exactly. He is proposing an "Xchip" that reproduces the electrical noise produced by real neurons. What he isn't proposing is any sort of experiment, or any

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread John Rose
On Monday, May 10, 2021, at 1:47 PM, Dorian Aur wrote: > The first step is to shape/reshape the electromagnetic field and its interaction within a biological structure, see the general hybrid model https://arxiv.org/ftp/arxiv/papers/1411/1411.5224.pdf That was a very pleasurable read,

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread immortal . discoveries
Hardware is only part of the issue. AI improves if you make it find patterns like FREQ and TRANS and RENC. Only patterns exist in the universe. You can use only experiences to help you in the future, i.e. by doing matches to memories. And you can only be a pattern, so why can't I make you into

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread Mike Archbold
If you don't know what an immunity test is, you stick a device under test in a special chamber and subject it to intense electric fields. An intense electric field can drastically alter the behavior of electronics. The brain is generating intense electric fields (I've been led to understand at

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread Mike Archbold
> You still haven't answered my questions. > What algorithm are you going to implement using your replicated brain EM fields? Or what signaling pattern, if you prefer. How are you going to get these EM fields to *think*? I can't speak for Colin but I do know that he isn't implementing

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread Dorian Aur
Indeed, it is a hardware problem, so Colin is right - it is hard to emulate "general" intelligence, or our brain, using digital computers. Importantly, one has to take small steps to get there. Introduced as a "conscious" machine, this hybrid framework is the straight path to AGI. The first step

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread immortal . discoveries
But we are experienced. I am the essential guru lord of knowledge; it is I that discovered what Life is and what our world will turn into - patterns. And I'm absolutely sure. My Guide explains it all along with AGI. Even half of my mechanisms for AGI are working together, on first attempts to

Re: [agi] Colin Hales mention in Salon editorial

2021-05-10 Thread Jim Bromer
The proposition that my unproven speculation only needs some tweaking but your unproven speculation is completely wrong is a weak proposition that I have often seen in these AI discussion groups. Having seen it in others, I am wary of it popping up in my own thinking.

Re: [agi] Colin Hales mention in Salon editorial

2021-05-09 Thread WriterOfMinds
On Sunday, May 09, 2021, at 8:17 PM, Colin Hales wrote: > OK. I am going to shout. Ready? I AM NOT EMULATING BRAIN PHYSICS. There. That feels better! :-). > I am REPLICATING brain physics. What you ended up describing is what I meant ... I just used the wrong word, apparently. And I've

Re: [agi] Colin Hales mention in Salon editorial

2021-05-09 Thread immortal . discoveries
Ah, so it's the not knowing what went wrong in a computer, but in your design, Colin, you can see why something didn't work so well? You can say, oh, these waves didn't interact like they did in a brain? Interesting. You make the brain-on-a-chip-like computer system efficiently, obviously, and uses

Re: [agi] Colin Hales mention in Salon editorial

2021-05-09 Thread Colin Hales
Hi Mike and Folks, I had a long private conversation on zoom with Thomas Nail and have seen 2 of his talks. He did a deep dive, including all the supplementaries, on my neuromimetic chip paper: https://doi.org/10.36227/techrxiv.13298750.v4 As a result he's basically on-board with the ideas.

Re: [agi] Colin Hales mention in Salon editorial

2021-05-09 Thread John Rose
On Wednesday, May 05, 2021, at 2:15 PM, James Bowery wrote: > Notepad vs vi? I thought the holy editor war was EMACS vs vi. Do you mean notepad++ or do you mean, literally, that POS from Microsoft? Notepad++ is a great utility if you're in Windows for the non-IDE coding experience... Do you

Re: [agi] Colin Hales mention in Salon editorial

2021-05-05 Thread James Bowery
On Tue, May 4, 2021 at 11:09 AM Matt Mahoney wrote: > ... > Hint: by consciousness, you probably mean what thinking feels like. It feels like you want to keep doing it by not dying, which increases your odds of passing on your DNA. The real problem is people keep talking about

Re: [agi] Colin Hales mention in Salon editorial

2021-05-05 Thread James Bowery
Notepad vs vi? I thought the holy editor war was EMACS vs vi. Do you mean notepad++ or do you mean, literally, that POS from Microsoft? Anyone who uses anything but TECO should burn in Hell forever, although you can get out of Purgatory after 1000 years if you use pmate. On Tue, May 4, 2021

Re: [agi] Colin Hales mention in Salon editorial

2021-05-05 Thread Nanograte Knowledge Technologies
social power. From: John Rose Sent: Wednesday, 05 May 2021 16:32 On Wednesday, May 05, 2021, at 3:50 AM, Nanograte Knowledge Technologies wrote: Anyone keeping an eye out for the socially-oriented

Re: [agi] Colin Hales mention in Salon editorial

2021-05-05 Thread immortal . discoveries
@Matt We are humans. The internet is a team of humans (a multi-agent ensemble). Google is not AGI. AGI would be AGI, and a team of AGIs would be a multi-agent ensemble of AGIs. Google Search uses BERT, but this is an AI. Looking at all of Google Search, it's still not AGI either. So no, it's not going

Re: [agi] Colin Hales mention in Salon editorial

2021-05-05 Thread John Rose
On Wednesday, May 05, 2021, at 3:50 AM, Nanograte Knowledge Technologies wrote: > Anyone keeping an eye out for the socially-oriented, counter-balance technologies, such as robot hunters/destroyers and robotic regenerators, space makers, privacy services? A big push right now in telecom is

Re: [agi] Colin Hales mention in Salon editorial

2021-05-05 Thread Mike Archbold
On 5/5/21, keghnf...@gmail.com wrote: > If a scientist, or an Edison or Wright brothers clone, publishes a complete AGI model in a scientific review paper, his model will be absorbed by big business. Then the big guy will come out with a model that will have a few lines of code changed and then

Re: [agi] Colin Hales mention in Salon editorial

2021-05-05 Thread keghnfeem
If a scientist, or an Edison or Wright brothers clone, publishes a complete AGI model in a scientific review paper, his model will be absorbed by big business. Then the big guy will come out with a model that will have a few lines of code changed and then say "We have done it!". There is no protection

Re: [agi] Colin Hales mention in Salon editorial

2021-05-05 Thread Nanograte Knowledge Technologies
/regenerators a lot. I think there's a lot of money to be made there, and it seems like a challenge. Anyone else have some interesting 10-year thoughts? From: Matt Mahoney Sent: Tuesday, 04 May 2021 21:40

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread John Rose
On Tuesday, May 04, 2021, at 3:00 PM, Mike Archbold wrote: > they always devolve into "that approach won't work" along with a lot of chest puffing and remarks about the shortcomings (usually implied personal deficiencies) You mean like smart apes trying to figure out what smart is?

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread John Rose
On Tuesday, May 04, 2021, at 3:40 PM, Matt Mahoney wrote: > We do have something close to AGI, namely Alexa, Google, and Siri. The one thing they have in common is they were developed by companies with trillion dollar market caps. You forgot to mention Cortana, whose natural language

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread Matt Mahoney
We do have something close to AGI, namely Alexa, Google, and Siri. The one thing they have in common is they were developed by companies with trillion dollar market caps. I've been on this list since before these products existed. Has anyone here contributed to their development? Has anyone

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread Mike Archbold
(P.S., my last post was general and not aimed at anyone specifically) On 5/4/21, Mike Archbold wrote: > The problem with AGI forums in general is they always devolve into "that approach won't work" along with a lot of chest puffing and remarks about the shortcomings (usually implied personal

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread Mike Archbold
The problem with AGI forums in general is they always devolve into "that approach won't work" along with a lot of chest puffing and remarks about the shortcomings (usually implied personal deficiencies) while AT THE SAME time we don't have a working AGI. So the best course of action is just to get

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread John Rose
On Tuesday, May 04, 2021, at 2:33 PM, WriterOfMinds wrote: > I'd say that none of those things have anything to do with phenomenal consciousness. If you look above the "bit and electron" level, they have to do with information and symbols. Information and subjective first-person

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread WriterOfMinds
On Tuesday, May 04, 2021, at 12:15 PM, immortal.discoveries wrote: > He wants you to read his *formal* papers WOM. I already did -- the last one he posted here, that is. I still have questions. On Tuesday, May 04, 2021, at 12:14 PM, John Rose wrote: > That's similar to saying consciousness is

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread immortal . discoveries
On Tuesday, May 04, 2021, at 2:26 PM, Mike Archbold wrote: > "science, science, science, science"! That's where the money is at

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread Mike Archbold
On 5/4/21, WriterOfMinds wrote: > On Tuesday, May 04, 2021, at 11:31 AM, Mike Archbold wrote: >> Colin's methods are first and foremost scientific. You can't fault that. > The scientific methods by which Colin hopes to test his claims remain pretty cloudy to me. > He has a proposed hardware

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread immortal . discoveries
He wants you to read his *formal* papers WOM.

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread John Rose
On Tuesday, May 04, 2021, at 12:07 PM, Matt Mahoney wrote: > Real AI researchers know that consciousness is irrelevant to AI. That's similar to saying consciousness is irrelevant to electronic/electromagnetic communications. Luckily Bell, Apple, and the thousands of related companies, the

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread WriterOfMinds
On Tuesday, May 04, 2021, at 11:31 AM, Mike Archbold wrote: > Colin's methods are first and foremost scientific. You can't fault that. The scientific methods by which Colin hopes to test his claims remain pretty cloudy to me. He has a proposed hardware device/architecture, which he believes does

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread immortal . discoveries
On Tuesday, May 04, 2021, at 1:31 PM, Mike Archbold wrote: > There isn't, to my knowledge, a working AGI, so it seems difficult to cite anybody as a "real AI researcher" on the grounds of producing a working AGI. What's left to resort to? Well, the researcher's methods along with a *subjective*,

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread Mike Archbold
I'm not sure what counts as a "real AI researcher" Matt. I think you mean "AGI" by that quip. There isn't, to my knowledge, a working AGI, so it seems difficult to cite anybody as a "real AI researcher" on the grounds of producing a working AGI. What's left to resort to? Well, the researcher's

Re: [agi] Colin Hales mention in Salon editorial

2021-05-04 Thread Matt Mahoney
"In my view, there will be no progress toward human-level AI until researchers stop trying to design computational slaves for capitalism and start taking the genuine source of intelligence seriously: fluctuating electric sheep." Plus more nonsense about neural fluctuations or EM fields or quantum

[agi] Colin Hales mention in Salon editorial

2021-05-03 Thread Mike Archbold
It's nice to see Colin, a regular on this list, in this editorial. https://www.salon.com/2021/04/30/why-artificial-intelligence-research-might-be-going-down-a-dead-end/ "Relatedly, Colin Hales, an artificial intelligence researcher at the University of Melbourne, has observed how strange it is