On Tuesday, May 11, 2021, at 7:06 PM, Colin Hales wrote:
> Currently I am battling EM noise from the massive TV towers a few km from
> here.
>
Kindly stop misrepresenting things. You have no clue what I am doing and are
not qualified to comment.
>
Advocatus Diaboli
We may not be understanding you, so it would be fair if you made a new thread
and explained the whole thing in under 100 words, in clear, relatable language.
I can do that easily. Many others and I don't connect with your path, or forgot
what you said, so it is better you make a small, clear, separate
Matt,
The EM fields are not noise. They are chaotic, complex, and deeply entwined
in function. Indeed, central to function.
I have a theory. I have a hypothesis. I am doing the experiments. I have a
concept design for the chip. The central device is on the floor next
to me and testing is in
In the end, patterns are most of AI. And if you use randomness, you must know
why, too.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T7c7052974ce450f1-M4c70efefa286bb98dc9bc3a9
Delivery options:
We can only use patterns to build something useful at later times. DALL-E can
answer trillions of inputs accurately. Hammers can solve hundreds of
problems, ones that recur, too.
Even acting randomly so a sniper can't target you easily is a
pattern... disallowing his patterns to exist, or
On 5/11/21, Matt Mahoney wrote:
> Maybe electromagnetic noise from neurons is significant. So what?
Matt: Look at your sentence: "maybe electromagnetic noise is
significant." That's why we are talking about it. If it is part of
the overall *structure* of the brain, and thus the mind, we need
Maybe electromagnetic noise from neurons is significant. So what? If noise
causes nearby neurons to fire, we can still model the effect using synaptic
weights. Normal training will compensate for the effect.
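A minimal sketch of the claim above (my own toy model, not Matt's code): if a neuron receives a steady background "noise" input alongside its real input, ordinary training absorbs the noise's average effect into the learned bias, so the weights compensate for it.

```python
import random

random.seed(0)

w, b = 0.0, 0.0          # weight and bias, both learned
lr = 0.05                # learning rate
for _ in range(5000):
    x = random.choice([0.0, 1.0])       # real presynaptic input
    noise = random.gauss(0.5, 0.1)      # steady background "EM noise"
    target = x                          # desired output ignores the noise
    y = w * x + b + 0.3 * noise         # noise nudges the activation
    err = target - y
    w += lr * err * x
    b += lr * err                       # bias soaks up the noise's mean

# Analytically, the bias should settle near -0.3 * 0.5 = -0.15,
# cancelling the noise's average contribution, with w near 1.
print(round(w, 2), round(b, 2))
```

The learning rule here is plain least-mean-squares; the point is only that a constant-statistics disturbance gets folded into the trained parameters, as the post argues.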
I don't know what Colin expects to find from his Xchip when he doesn't even
know what it
"The first step is to shape/reshape the electromagnetic field and its
interaction within a biological structure, see the general hybrid model
https://arxiv.org/ftp/arxiv/papers/1411/1411.5224.pdf
*That was a very pleasurable read, my qualia’s gettin all jazzed up on that
one...*"
Another
One way would be wrong; the other is the better way. This is made obvious above.
Don't say vacation.
It's up to you what you do when you see 'cat': do you predict the most common
next thing, or oddly say LET'S GO ON A VACATION out loud? You could also look at
'cat' as 'at' or 'ca' or 'c_t' and combine predictions from them all.
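The idea of combining predictions from partial views of a context can be sketched like this (my own construction, not the poster's code): predict the next character using the full trigram "cat", the suffix "_at", and the gapped pattern "c_t", then average their predictions.

```python
from collections import Counter, defaultdict

text = "the cat sat on the mat. the cats eat. a cat ran."

def counts_after(pattern):
    """Count which character follows each match of a 3-char pattern;
    '_' in the pattern matches any character."""
    c = Counter()
    for i in range(len(text) - 3):
        window = text[i:i + 3]
        if all(p == '_' or p == w for p, w in zip(pattern, window)):
            c[text[i + 3]] += 1
    return c

def mix(patterns):
    """Average the normalized next-char distributions of several patterns."""
    mixed = defaultdict(float)
    for p in patterns:
        c = counts_after(p)
        total = sum(c.values())
        for ch, n in c.items():
            mixed[ch] += n / total / len(patterns)
    return dict(mixed)

pred = mix(["cat", "_at", "c_t"])
print(max(pred, key=pred.get))   # most likely character after "cat"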
On Monday, May 10, 2021, at 11:13 PM, immortal.discoveries wrote:
> And yeah, that's all a brain can do is predict, only patterns exist in data.
Some patterns exist in the data and some exist in the perceiver, like those
patterns where one brain perceives one image and another brain perceives an
On Monday, May 10, 2021, at 10:48 PM, Mike Archbold wrote:
> Plainly a lot happens at the cell level with electric field action.
Ions are moving around, e.g. into cells, subject to electric fields.
What happens at a macro brain level or the middle stages with EMF? Why
is there a presumption that
I showed above how prediction on text works.
And yeah, that's all a brain can do is predict, only patterns exist in data.
If Colin doesn't outline such mechanisms, then there is no intelligence/AI.
It's that simple.
On 5/10/21, Matt Mahoney wrote:
> On Mon, May 10, 2021, 4:16 PM Mike Archbold wrote:
>
>> I can't speak for Colin but I do know that he isn't implementing
>> algorithms
>>
>
> Exactly. He is proposing an "Xchip" that reproduces the electrical noise
> produced by real neurons. What he isn't
On Mon, May 10, 2021, 4:16 PM Mike Archbold wrote:
> I can't speak for Colin but I do know that he isn't implementing
> algorithms
>
Exactly. He is proposing an "Xchip" that reproduces the electrical noise
produced by real neurons. What he isn't proposing is any sort of
experiment, or any
On Monday, May 10, 2021, at 1:47 PM, Dorian Aur wrote:
> The first step is to shape/reshape the electromagnetic field and its
> interaction within a biological structure, see the general hybrid model
> https://arxiv.org/ftp/arxiv/papers/1411/1411.5224.pdf
That was a very pleasurable read,
Hardware is only part of the issue. AI improves if you make it find patterns
like FREQ and TRANS and RENC. Only patterns exist in the universe. You can use
only experiences to help you in the future, i.e., by doing matches to memories.
And you can only be a pattern; why can't I make you into
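"Doing matches to memories" can be sketched minimally like this (my own illustration, not the poster's system): store past experiences, and answer a new situation by recalling the stored one that overlaps it most.

```python
# Stored experiences: situation -> learned response (toy data).
memories = {
    "see dog bark": "keep distance",
    "see cat purr": "pet it",
    "hear loud bang": "look around",
}

def recall(situation):
    """Return the response of the stored memory sharing the most words."""
    words = set(situation.split())
    best = max(memories, key=lambda m: len(words & set(m.split())))
    return memories[best]

print(recall("see cat sleep"))   # closest memory is "see cat purr"
```

Real systems would use richer similarity measures than word overlap, but the shape is the same: match the present against stored experience and reuse what worked.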
If you don't know what an immunity test is, you stick a device under
test in a special chamber and subject it to intense electric fields.
An intense electric field can drastically alter the behavior of
electronics. The brain is generating intense electric fields (I've
been led to understand at
>
> You still haven't answered my questions.
>
> What algorithm are you going to implement using your replicated brain EM
> fields? Or what signaling pattern, if you prefer. How are you going to get
> these EM fields to *think*?
I can't speak for Colin but I do know that he isn't implementing
Indeed, it is a hardware problem, so Colin is right: it is hard to emulate
"general" intelligence, or our brain, using digital computers.
Importantly, one has to take small steps to get there.
Introduced as a "conscious" machine, this hybrid framework is the straight
path to AGI
The first step
But we are experienced. I am the essential guru lord of knowledge; it is I that
discovered what Life is and what our world will turn into: patterns. And I'm
absolutely sure. My Guide explains it all, along with AGI. Even half of my
mechanisms for AGI are working together, on first attempts to
The proposition that my unproven speculation only needs some tweaking but your
unproven speculation is completely wrong is a weak proposition that I have
often seen in these AI discussion groups. Having seen it in others, I am wary
of it popping up in my own thinking.
On Sunday, May 09, 2021, at 8:17 PM, Colin Hales wrote:
> OK. I am going to shout. Ready? I AM NOT EMULATING BRAIN PHYSICS. There. That
> feels better! :-).
>
> I am REPLICATING brain physics.
What you ended up describing is what I meant ... I just used the wrong word,
apparently. And I've
Ah, so it's the not knowing what went wrong in a computer, but in your design,
Colin, you can see why something didn't work so well? You can say, 'oh, these
waves didn't interact like they did in a brain'? Interesting. You make the
brain-on-a-chip-like computer system efficiently, obviously, and it uses
Hi Mike and Folks,
I had a long private conversation on zoom with Thomas Nail and have seen 2
of his talks. He did a deep dive, including all the supplementaries, on my
neuromimetic chip paper:
https://doi.org/10.36227/techrxiv.13298750.v4
As a result he's basically on-board with the ideas.
On Wednesday, May 05, 2021, at 2:15 PM, James Bowery wrote:
> Notepad vs vi? I thought the holy editor war was EMACS vs vi. Do you mean
> notepad++ or do you mean, literally, that POS from Microsoft?
Notepad++ is a great utility if you're in Windows for the non-IDE coding
experience... Do you
On Tue, May 4, 2021 at 11:09 AM Matt Mahoney
wrote:
> ...
> Hint: by consciousness, you probably mean what thinking feels like. It
> feels like you want to keep doing it by not dying, which increases your
> odds of passing on your DNA.
>
The real problem is people keep talking about
Notepad vs vi? I thought the holy editor war was EMACS vs vi. Do you mean
notepad++ or do you mean, literally, that POS from Microsoft?
Anyone who uses anything but TECO should burn in Hell forever, although you
can get out of Purgatory after 1000 years if you use pmate.
On Tue, May 4, 2021
social power.
From: John Rose
Sent: Wednesday, 05 May 2021 16:32
To: AGI
Subject: Re: [agi] Colin Hales mention in Salon editorial
On Wednesday, May 05, 2021, at 3:50 AM, Nanograte Knowledge Technologies wrote:
Anyone keeping an eye out for the socially-oriented
@Matt
We are humans.
The internet is a team of humans (multi-agent ensemble).
Google is not AGI. AGI would be AGI, and a team of AGIs would be a multi-agent
ensemble of AGIs.
Google Search uses BERT, but that is an AI. Looking at all of Google Search, it
is still not AGI either.
So no, it's not going
On Wednesday, May 05, 2021, at 3:50 AM, Nanograte Knowledge Technologies wrote:
> Anyone keeping an eye out for the socially-oriented, counter-balance
> technologies, such as robot hunters/destroyers and robotic regenerators,
> space makers, privacy services?
A big push right now in telecom is
On 5/5/21, keghnf...@gmail.com wrote:
> If a scientist, or an Edison or Wright-brothers clone, publishes a complete
> AGI model in a scientific review or paper, his model will be absorbed by big
> business. Then the big guy will come out with a model that has a few lines of
> code changed and then
If a scientist, or an Edison or Wright-brothers clone, publishes a complete AGI
model in a scientific review or paper, his model will be absorbed by big
business. Then the big guy will come out with a model that has a few lines of
code changed and say, "We have done it!"
There is no protection
/regenerators a lot.
I think there's a lot of money to be made there, and it seems like a challenge.
Anyone else have some interesting 10-year thoughts?
From: Matt Mahoney
Sent: Tuesday, 04 May 2021 21:40
To: AGI
Subject: Re: [agi] Colin Hales mention in Salon
On Tuesday, May 04, 2021, at 3:00 PM, Mike Archbold wrote:
> they always devolve into
> "that approach won't work" along with a lot of chest puffing and
> remarks about the shortcomings (usually implied personal deficiencies)
You mean like smart apes trying to figure out what smart is?
On Tuesday, May 04, 2021, at 3:40 PM, Matt Mahoney wrote:
> We do have something close to AGI, namely Alexa, Google, and Siri. The one
> thing they have in common is they were developed by companies with trillion
> dollar market caps.
>
You forgot to mention Cortana, whose natural language
We do have something close to AGI, namely Alexa, Google, and Siri. The one
thing they have in common is they were developed by companies with trillion
dollar market caps.
I've been on this list since before these products existed. Has anyone here
contributed to their development? Has anyone
(PS: my last post was general and not aimed at anyone specifically)
On 5/4/21, Mike Archbold wrote:
> The problem with AGI forums in general is they always devolve into
> "that approach won't work" along with a lot of chest puffing and
> remarks about the shortcomings (usually implied personal
The problem with AGI forums in general is they always devolve into
"that approach won't work" along with a lot of chest puffing and
remarks about the shortcomings (usually implied personal deficiencies)
while AT THE SAME time we don't have a working AGI. So the best course
of action is just to get
On Tuesday, May 04, 2021, at 2:33 PM, WriterOfMinds wrote:
> I'd say that none of those things have anything to do with phenomenal
> consciousness. If you look above the "bit and electron" level, they have to
> do with information and symbols. Information and subjective first-person
>
On Tuesday, May 04, 2021, at 12:15 PM, immortal.discoveries wrote:
> He wants you to read his *formal *papers WOM.
I already did -- the last one he posted here, that is. I still have questions.
On Tuesday, May 04, 2021, at 12:14 PM, John Rose wrote:
> That's similar to saying consciousness is
On Tuesday, May 04, 2021, at 2:26 PM, Mike Archbold wrote:
> "science, science,
> science, science"!
that's where the money's at
On 5/4/21, WriterOfMinds wrote:
> On Tuesday, May 04, 2021, at 11:31 AM, Mike Archbold wrote:
>> Colin's methods are first and foremost scientific. You can't
>> fault that.
> The scientific methods by which Colin hopes to test his claims remain pretty
> cloudy to me.
>
> He has a proposed hardware
He wants you to read his *formal *papers WOM.
On Tuesday, May 04, 2021, at 12:07 PM, Matt Mahoney wrote:
> Real AI researchers know that consciousness is irrelevant to AI.
That's similar to saying consciousness is irrelevant to
electronic/electromagnetic communications. Luckily Bell, Apple, and the
thousands of related companies, the
On Tuesday, May 04, 2021, at 11:31 AM, Mike Archbold wrote:
> Colin's methods are first and foremost scientific. You can't
> fault that.
The scientific methods by which Colin hopes to test his claims remain pretty
cloudy to me.
He has a proposed hardware device/architecture, which he believes does
On Tuesday, May 04, 2021, at 1:31 PM, Mike Archbold wrote:
> There isn't, to my knowledge, a working AGI,
so it seems difficult to cite anybody as a "real AI researcher" on the
grounds of producing a working AGI. What's left to resort to? Well,
the researcher's methods along with a *subjective*,
I'm not sure what counts as a "real AI researcher" Matt. I think you
mean "AGI" by that quip. There isn't, to my knowledge, a working AGI,
so it seems difficult to cite anybody as a "real AI researcher" on the
grounds of producing a working AGI. What's left to resort to? Well,
the researcher's
"In my view, there will be no progress toward human-level AI until
researchers stop trying to design computational slaves for capitalism and
start taking the genuine source of intelligence seriously: fluctuating
electric sheep."
Plus more nonsense about neural fluctuations or EM fields or quantum
It's nice to see Colin, a regular on this list, in this editorial.
https://www.salon.com/2021/04/30/why-artificial-intelligence-research-might-be-going-down-a-dead-end/
"Relatedly, Colin Hales, an artificial intelligence researcher at the
University of Melbourne, has observed how strange it is