Seriously though. This group is full of nothing but crackpots and jackasses.
Can't figure out how to leave the group and put a stop to the constant barrage
of garbage in my inbox.
Get me off this shit list.
Sent from ProtonMail mobile
his to be as important
> as AGI ... and if AGI turns out to take another 100 years to crack I still
> want to have an inhabitable planet. That's what I do and your penis might be
> waaay longer than mine ... you still don't understand Perl code nor AGI
> for shit.
>
> O
Arthur, can I put in a request for a mind that speaks Toki Pona? That would be
really fun! It's a simple language with a simple grammar comprising only 120
words.
Original Message
On Apr 20, 2019, 4:24 PM, A.T. Murray wrote:
> Three days ago on
into random compiling Perl code and you would argue that it's potential AGI
> ... that's how horrible you seem to be at understanding Perl and AGI.
>
> On 21/04/2019 00:08, MP via AGI wrote:
>
>> Have you even tried to read his code for yourself and tried to understand it?
>
...
> and again ... his code does N O T H I N G.
>
> If you don't understand this then maybe read some mailing list you actually
> understand and don't defend spam that has been going on for decades ...
>
> On 20/04/2019 23:51, MP via AGI wrote:
>
>> Hey, at least
I would build it, and train it to write code that performs financial stuff to
help us both with a source of income for sustaining my life and for it to
develop itself further. I'd keep it under wraps for a few years, and
anonymously announce it in full to the world. Source code, docs, and all.
I honestly wouldn't mind trying to work with ATM to convey his AI approach
better than his current methods do.
For instance, in one of his "earlier" journals, he references a "boulematic
accumulator" - in normal lingo, it means a neuron, like a neural network neuron.
So, things like that. Maybe
I don't even think it's a question of computational capacity - look at self-
driving cars, which took years of development with powerful servers processing
terabytes of data. Yet they still crash into wall barriers. Look at any old
high school dropout who wouldn't be able to understand predicate
It comes from one of the most fundamental aspects of higher order cognitive
ability, which is synthesizing concepts from perception. IE, linking perceptual
items to some kind of symbolic structure. The closest I could find that kind of
accomplishes such a feat is Drescher's schema mechanism,
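If it helps make that concrete, a Drescher-style schema is roughly a context-action-result triple with reliability statistics. Here's a minimal sketch of that idea (the names and the tiny example are mine, not Drescher's actual implementation):

```python
# Hypothetical minimal sketch of a Drescher-style schema:
# a context -> action -> result triple with a reliability estimate,
# illustrating "linking perceptual items to a symbolic structure".
from dataclasses import dataclass


@dataclass
class Schema:
    context: frozenset   # perceptual items that must hold beforehand
    action: str          # the action taken
    result: frozenset    # items expected to hold afterwards
    successes: int = 0
    trials: int = 0

    def record(self, outcome: frozenset) -> None:
        # Update reliability statistics after one activation.
        self.trials += 1
        if self.result <= outcome:   # did the expected result obtain?
            self.successes += 1

    @property
    def reliability(self) -> float:
        return self.successes / self.trials if self.trials else 0.0


s = Schema(frozenset({"hand-near-cup"}), "grasp", frozenset({"holding-cup"}))
s.record(frozenset({"holding-cup", "cup-tilted"}))  # success
s.record(frozenset({"cup-dropped"}))                # failure
```

The real schema mechanism also spins off new schemas from correlation statistics, which is the part that actually does the concept synthesis.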
WOTF is something more along the lines of a religion I'm talking about
creating. I'd join too, if there was any kind of active community. There's
nothing but a few articles and that website. It feels as though it was a
passing interest.
Original Message
, 2019 at 12:58 PM Costi Dumitrescu
>> wrote:
>>
>>> Let's match a lefty philosopher king to an accomplished emperor.
>>>
>>> "The Sermon on the Cloud" by St Username ends in
>>>
>>> "[...]
>>>
>>> Blessed ar
So, I'm an atheist. But AI to me is the closest we have to god.
Who wants in on creating a new cult of AI?
--
Artificial General Intelligence List: AGI
Permalink:
ll things programming and AI should be allowed to spam this
> list and reddit and 100 other places with his horse shit.
>
> Am 24/01/2019 um 05:28 schrieb MP via AGI:
>
>> Everything is close to AGI until one of them has been confirmed. We can't
>> even agree on what AGI is or
only mailing list on AGI. I don't
>>>> expect people here to agree on philosophical or socioeconomic issues ...
>>>> but recognizing bullshit in the context of programming and AI approaches
>>>> (and you can't even call it that) should be a given ...
>>>>
Yeah, nothing like spending a week and thousands of dollars on hardware that'll
be outdated soon only to find out it doesn't work because your data was off.
Yup. Love me some neural networks.
Original Message
On Dec 23, 2018, 1:02 AM, Stefan Reich
He suffers from psychosis and possibly schizophrenia. He should be taking his
medication, but I’m not at all involved enough to know more than that.
He’s a perfect example of a disease I like to call “AI crazy” - much worse than
coder crazy.
On Mon, Nov 26, 2018 at
Let’s keep meaningless politics off this list, please.
On Tue, Sep 25, 2018 at 5:59 PM, Alan Grimes wrote:
> Communism is the the grate bane of our time. =(
>
> Suggestion: call up the old soviet national anthem and play it in the
> background at medium volume.
>
My position:
Who we are is nothing more than a few billion neurons in a calcium enclosure
and an annoyingly inefficient vehicle.
So our intelligence, perceptions, actions, memories, and so on and so forth do
indeed exist. I don’t see any reason why we can’t recreate that in silicon.
Why we
Page not found?
On Tue, Sep 4, 2018 at 1:41 PM, jaubertcedric13 via AGI
wrote:
> This look impressive :
>
> http://www.jay-cce.net/
> [Artificial General Intelligence List](https://agi.topicbox.com/latest) / AGI
> /
algorithm that could do better then everyone would use it. You are going to
> have equal amounts of wins and losses no matter what.
>
> For AGI to beat the markets it has to be smarter than not just one human but
> millions of humans.
>
> On Tue, Aug 7, 2018, 5:29 PM MP via AGI
Allow me to propose an alternate onion, starting with the innermost layer:
1. You.
2. Your family.
3. Your race.
4. Your species.
5. All forms of life.
6. All forms of matter.
7. The spiritual self.
8. That which binds all together ad infinitum
AGI would fit either as an augmentation of layers,
Alan, what is it about prednet that makes you think it’s conscious? What signs
is it showing? What’s it doing that makes you think this?
From what I see, it’s something that predicts the next video frame from the one
it has been presented. There’s an NN for representation but... that’s it.
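For what it's worth, the "predict the next frame from the current one" setup is easy to sketch. The toy below is a plain linear next-frame model on a moving-dot "video" - just my stand-in for the idea, not PredNet itself, which stacks convolutional LSTM layers with explicit error units:

```python
# Minimal next-frame prediction on a toy "video": a bright pixel that
# shifts one step right per frame. A linear model fit by least squares
# learns the shift and predicts the frame after the last one.
import numpy as np

# Toy video: 10 frames of 8 pixels, dot at position t % 8.
frames = np.zeros((10, 8))
for t in range(10):
    frames[t, t % 8] = 1.0

X, Y = frames[:-1], frames[1:]             # (current, next) frame pairs
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # linear next-frame model

pred = frames[-1] @ W  # frames[-1] has the dot at index 1, so expect index 2
```

That's the whole contract: representation in, next frame out. Whether learning that mapping amounts to anything like consciousness is exactly the question I'm asking.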
I’d be very interested in such a philosophy. I’ve always had in mind that true
AGI would, in time anyway, become a "deity" in a very logical sense of the
word. I do revere the idea anyway... call it a religion or a cult, but it
sounds like something I can sink my teeth into regardless.
You can
Why do you still use ASCII text graphics anyway? I think it’d be a lot clearer
if you used graphics similar to block diagrams instead of text.
On Wed, Jul 4, 2018 at 4:58 PM, A.T. Murray via AGI
wrote:
> On Wed, Jul 4, 2018 at 10:11 AM, MP via AGI wr
Interesting narrative. I think you need more Adderall, though. Give it more
sci-fi insanity.
And for god's sake PLEASE rewrite the documentation with more modern
terminology. I can't for the life of me understand it from your mental fiber
constructs and things. The analogy is too far-fetched.
A
AM, Boris Kazachenko via AGI
wrote:
> Yeah, I thought that too. But this list is freak show, go figure.
>
> On Mon, Jun 25, 2018 at 3:42 AM Giacomo Spigler via AGI
> wrote:
>
>> Is it only me that thinks that MP is another email controlled by AT Murray?
>>
>>
This makes more sense than it should... quite interesting to say the least.
On Sun, Jun 24, 2018 at 7:44 PM, A.T. Murray via AGI
wrote:
> MindForth Programming Journal -- 2018 June 24
>
> Over the past week fifteen or twenty hours of intense work went into coding
I’ve honestly tried reading his source and explanations.
He loses me at this "perpendicular mental fiber" stuff.
Even with my cruddy JavaScript to Java translation I still don’t get it... but
it’s something. At least it talks through a ton of weird code.
On Thu,
Now that actually makes clear and concise sense! Thank you. That cleared up a
lot for me actually.
On Sun, Jun 17, 2018 at 2:02 PM, A.T. Murray via AGI
wrote:
> On Sun, Jun 17, 2018 at 10:37 AM, MP via AGI wrote:
>
>> Believe it or not, googling "
In his book "How to Create a Mind", Kurzweil talks about how a so-called
pattern recognizer is the central component of his pattern recognition theory
of mind. I would like to test his theory, because of my background in (try not
to laugh) Hubbard's dianetics and his theory of mind.
ing "Kimera is the only company that has a working AGI, we call
> it "Nigel AGI"." If they have a working AGI, why don't they give a public
> demonstration and open it up to scrutiny... why keep it secret? On Thu, Jun
> 14, 2018 at 3:52 PM, MP via AGI wrote
till on track ;-) On Thu, Jun 14, 2018 at 12:09
> PM, MP via AGI wrote: > That’s a sad scenario, Ben : > > Sent from ProtonMail
> Mobile > > > On Wed, Jun 13, 2018 at 10:01 PM, Ben Goertzel wrote: > > I
> guess whomever was paying the bills for that list (
That’s a sad scenario, Ben :
On Wed, Jun 13, 2018 at 10:01 PM, Ben Goertzel wrote:
> I guess whomever was paying the bills for that list (KurzweilAI?) got bored
> and stopped paying it...? On Thu, Jun 14, 2018 at 6:12 AM, Steve Richfield
> via AGI wrote: > I tried
Don't spam us with marketing nonsense. This is a place to discuss artificial
minds, not to sell them!
On Tue, Jun 12, 2018 at 7:19 AM, Logan Streondj via AGI
wrote:
> Hi all, So I read some more of the "Beyond the Chasm" marketing book. They
> talk about how ther
On Mon, Jun 11, 2018 at 1:51 AM, YKY via AGI wrote:
> On Mon, Jun 11, 2018 at 2:39 PM, MP via AGI wrote:
>
>> It seems you’re taking a deep learning approach to the classic cognitive
>> architectures - like you’re reengineering ACT-R.
>>
>> What’s different? Why
> 3 functional demos: https://agiinnovations.com/our-work-demos
>
> Get you free low Aigo serial#
> https://my.aigo.ai/#cg29690f51be5cb098hfae20h91ae27
>
> BTW, I host FB group with >1000 members
> https://www.facebook.com/groups/RealAGI/
>
> From: MP vi
My model is a genetic algorithm system based on AIXI. It's a really lame
solution to AGI, being naive and brute-force, but it's something small and
simple that any computer can run.
I call it MINT - for minimal intelligence.
I really should dig up the old java source... it’s a neat little system.
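Until I dig up the Java, here's roughly the shape of the loop - a hypothetical sketch, not the actual MINT source: a genetic algorithm brute-forcing candidates against a fitness function, with a string target standing in for whatever task score the real system used:

```python
# Sketch of a MINT-style brute-force search: a genetic algorithm over
# candidate byte strings, scored against a target. Names and the toy
# fitness function are illustrative, not the original MINT code.
import random

TARGET = b"hello"  # stand-in for "a candidate that does the right thing"


def fitness(candidate: bytes) -> int:
    # Higher is better: number of positions matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))


def mutate(candidate: bytes, rate: float = 0.2) -> bytes:
    # Randomly replace each byte with a lowercase letter at the given rate.
    return bytes(random.randrange(97, 123) if random.random() < rate else c
                 for c in candidate)


def evolve(pop_size: int = 50, generations: int = 500) -> bytes:
    random.seed(0)
    pop = [bytes(random.randrange(97, 123) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        survivors = pop[: pop_size // 2]  # truncation selection (elitist)
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)


best = evolve()
```

Naive and brute-force, like I said - but it fits in a page and runs anywhere, which was the whole point of "minimal intelligence".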
white paper
> is one of the many components that has to be programmed.
>
> Please let me know.
>
> Rob
> ---------------
>
> From: MP via AGI
> Sent: Friday, 08 June 2018 8:34 AM
> To: AGI
> Subject: Re: [agi] The fo
Alan, would you be interested in emailing me these models? I’m an experienced
coder and would love to work on it.
mindpixel at proton mail dot com
On Fri, Jun 8, 2018 at 1:31 AM, Nanograte Knowledge Technologies via AGI
wrote:
> Alan
>
> Thank you for your
Oh sure, big critic of others' work but nothing to show for it. Excuses,
excuses. You know you can rent not only GPU but also TPU time from Google for
deep learning research. And come on, there are plenty of Python docs out
there. Do you even code, bro?
And then we have the legendary A. T. Murray.
You want to call Mentalfucks a crackpot yet you seem to rant on like a paranoid
lunatic yourself. At least Murray is doing something and has SOMETHING to show
for his hard work. Not just angry rants.
On Thu, Jun 7, 2018 at 3:28 PM, Alan Grimes wrote:
> I'm
g a Java mind. Where's your source? Mine is here:
> http://tinybrain.de/1016060
>
> Stefan
>
> MP via AGI schrieb am Di., 5. Juni 2018 19:03:
>
>> John, I definitely feel the same way about the massive obscurities. I even
>> tried muddling through his diagrams and
John, I definitely feel the same way about the massive obscurities. I even
tried muddling through his diagrams and explanations to no avail. What I was
able to do is port his ungodly bizarre code to java - literally copying and
pasting with a few syntax tweaks - and got it running... somewhat.
GI
wrote:
> On Sun, Jun 3, 2018 at 10:24 PM, MP via AGI wrote:
>
>> What’s going to be done to fix this issue?
>
> The issue (of generating "THINK" from normal activation rather than from
> SpreadAct() activation) has been fixed by the indicated "bugfix"
What’s going to be done to fix this issue?
On Mon, Jun 4, 2018 at 12:21 AM, A.T. Murray via AGI
wrote:
> We have a problem where the AI Mind is calling Indicative() two times in a
> row for no good reason. After a what-think query, the AI is supposed to call
>
ATM, be honest here: do you suffer from schizophrenic disorders? Because you
remind me a lot of the guy who created TempleOS - a visionary and a computer
genius, lost to mental deterioration.
I’ve told you I’ve ported your work to Java with some issues, and the reason
I’ve done that was