Immortal,
On Sun, Nov 24, 2019, 11:18 PM wrote:
> What are you the inventor of here? ;D What...
>
The entire field of ad hoc AI analysis of text was abandoned because no one
could make it work on a full human vocabulary - it would slow to a complete
stop due to combinatorial explosion.
The
What are you the inventor of here? ;D What...
I'll give it a read soon.
Immortal,
On Thu, Nov 21, 2019 at 9:10 PM wrote:
> I need a full visual drawing of the system,
>
The best explanations are in the patents. The earlier patent concentrates
on the parsing methodology, while the later patent concentrates on what to
do with it.
*Natural language processing for
I need a full visual drawing of the system, I'm not seeing it... How's it
different from GPT-2? GPT-2 is taught data, learns relationships, and can
predict a word based on the data. "Reflection of data to get the new data."
Immortal,
On Thu, Nov 21, 2019, 1:44 AM wrote:
> Not just me; we all are looking for it :). OK, but I need to know how it
> works to generate new data from the old data we have currently.
>
Illness, like other types of malfunction, has cause-and-effect chains; around
half of the links have
Not just me; we all are looking for it :). OK, but I need to know how it works
to generate new data from the old data we have currently. And the results had
better be good lol.
DrEliza was even capable of finding new cures for conditions that were NOT
coded in its knowledge base, but it was too slow, due to combinatorial
explosion, to take over medicine.
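A toy sketch of that explosion (illustrative only, not DrEliza's actual code;
the knowledge-base shape is invented): if every condition has even 3 possible
causes, the candidate cause-and-effect chains multiply at every link, so
exhaustive chaining stalls long before a full medical vocabulary.

def enumerate_chains(causes_of, effect, depth):
    # All cause-and-effect chains of the given length ending at `effect`.
    if depth == 0:
        return [[effect]]
    chains = []
    for cause in causes_of.get(effect, []):
        for chain in enumerate_chains(causes_of, cause, depth - 1):
            chains.append(chain + [effect])
    return chains

# Hypothetical knowledge base: every condition has 3 possible causes.
kb = {"c%d" % i: ["c%d" % (3 * i + j) for j in (1, 2, 3)] for i in range(400)}
for depth in (2, 4, 6):
    print(depth, len(enumerate_chains(kb, "c0", depth)))
# prints 9, 81, 729 -- candidate chains grow as 3**depth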
Then, I patented an incredibly fast way of doing the AI, but getting from
the old text-based programming paradigm to
I didn't mean any data or anything. Only what the universe does. I only meant
patterns. The universe is made of patterns. When it learns data, it learns more
about what the universe is, everything that the universe is.
Did you mean the quantum computer proof? I saw your post.
There is no algorithm that can predict all predictable data sequences.
Would you like to see the proof again?
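A minimal sketch of the diagonalization step in that proof. The predict()
below is only a hypothetical stand-in, since the argument works against any
claimed universal predictor: build the sequence whose every bit flips the
prediction. That sequence is computable, hence predictable by some algorithm,
yet this predictor gets every bit wrong.

def predict(history):
    # Hypothetical stand-in for any claimed universal predictor;
    # here, a simple majority vote over past bits.
    return 1 if sum(history) * 2 >= len(history) else 0

def adversarial(n):
    # A computable sequence whose every bit is the opposite of the
    # prediction, so `predict` scores 0% on it.
    seq = []
    for _ in range(n):
        seq.append(1 - predict(seq))
    return seq

s = adversarial(20)
errors = sum(predict(s[:i]) != s[i] for i in range(len(s)))
print(s, "errors:", errors, "of", len(s))   # wrong on all 20 bits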
On Wed, Nov 20, 2019, 10:11 PM wrote:
> To be clear, I'm looking to seriously help AGI projects that are
> unsupervised generative models like GPT-2 that are taught data, learn
>
To be clear, I'm looking to seriously help AGI projects that are unsupervised
generative models like GPT-2 that are taught data, learn relationships, and can
extract/generate new data by reflecting on prior data. Regardless of what world
it is in, it can predict any data it chooses to. OpenAI's
Keghn, you sound almost on the mark, except everything is repetition. Once you
take the difference from one time step to the next, the universe is only doing
the one spatial task over and over.
I don't know if it's true, but it is a good theory that it could be true,
depressing or not.
It's possible that the beginning was a massive clump of randomly or
symmetrically placed particles that big-banged, or the expanding wall is what
spawns particles as it expands. There was no bitstream. But what matters is
the laws of particles; there are only a few, and they're in every particle in
You need not compress that big movie; you only need to store the first frame
with the laws of physics. It makes you wonder, what was the starting
condition...
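A toy version of that "first frame plus laws" idea (everything illustrative,
with Rule 110 standing in for the laws of physics): store only a seed state
and a rule table, and regenerate the whole movie on demand.

# The stored description: a 64-cell seed plus an 8-entry rule table.
RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
           (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells):
    # One tick of the "laws of physics" (Rule 110, wrap-around edges).
    n = len(cells)
    return [RULE110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

seed = [0] * 63 + [1]          # the "first frame"
movie, frame = [], seed
for _ in range(100):           # regenerate 100 frames from almost nothing
    movie.append(frame)
    frame = step(frame)
print(len(movie), "frames reconstructed from seed + rule")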
Yes Keghn, I said that two weeks back: it's one big movie roll, lossless
compression. It emerges from nothing, to here; we came from nothing
Yes I said this too:
" Compression of static data is biasing perceptrion with the illusion, cooking
the books, lossy data no mater what. "
Just like the failing of chatbots, compressor heads fail to believe you can
take the logic to a more detailed level. There is the perceptron that detects
the letter or the data.
The universe is one bit stream from the beginning of time, or from when an AGI
becomes conscious, to the end of
On Tuesday, November 19, 2019, at 9:44 PM, Matt Mahoney wrote:
> What results does your compressor have on some benchmarks?
I didn't make one. I was only giving my understanding of the current best by
Alex, or what may likely be done by him if not already.
Since he didn't open-source it, he must
Compression does lead to better survival!!! The whole thing runs better with
it, Keghn; it's very important.
I cannot be a compressor head. AGI science is about patterns. Repeating
patterns and repeating functions are nice, but if the compressing does not
lead to better survival then it gets kicked out.
Like in the continuous activation of a pain code: if the pain value is active
for 256 cycles then
What results does your compressor have on some benchmarks?
On Tue, Nov 19, 2019, 4:19 PM wrote:
> On Tuesday, November 19, 2019, at 11:42 AM, Matt Mahoney wrote:
>
> The best compressors are very complex. They use hundreds or thousands of
> independent context models and adaptively combine
On Tuesday, November 19, 2019, at 11:42 AM, Matt Mahoney wrote:
> The best compressors are very complex. They use hundreds or thousands of
> independent context models and adaptively combine their bit predictions and
> encode the prediction error. The decompressor uses an exact copy of the
>
I get the feeling that the people in this thread who are saying "compression is
faster" might really be thinking about levels of abstraction ... the idea of
"compressing" low-level concepts into high-level ones by eliminating detail.
If you do all your work at a high level of abstraction, then
The best compressors are very complex. They use hundreds or thousands of
independent context models and adaptively combine their bit predictions and
encode the prediction error. The decompressor uses an exact copy of the
model trained on previous output to reconstruct the original data. Most of
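A minimal sketch of that scheme, shrunk to two context models and with ideal
log-loss standing in for the arithmetic coder (illustrative, not any
particular compressor's code). The decompressor would run identical models and
mixer on its own output, so it always reproduces the same prediction.

import math
from collections import defaultdict

class ContextModel:
    # Predicts the next bit from the last `order` bits, with add-one counts.
    def __init__(self, order):
        self.order = order
        self.counts = defaultdict(lambda: [1, 1])
    def p1(self, history):
        c0, c1 = self.counts[tuple(history[-self.order:])]
        return c1 / (c0 + c1)
    def update(self, history, bit):
        self.counts[tuple(history[-self.order:])][bit] += 1

def mix(models, weights, history):
    # Logistic mixing: weighted sum of stretched probabilities.
    stretched = []
    for m in models:
        p = m.p1(history)
        stretched.append(math.log(p / (1 - p)))
    t = sum(w * s for w, s in zip(weights, stretched))
    return 1 / (1 + math.exp(-t)), stretched

data = [int(b) for ch in b"the cat sat on the mat " * 8 for b in format(ch, "08b")]
models = [ContextModel(8), ContextModel(16)]
weights, history, cost = [0.3, 0.3], [], 0.0
for bit in data:
    p, stretched = mix(models, weights, history)
    cost += -math.log2(p if bit else 1 - p)    # ideal coded size of this bit
    for i, s in enumerate(stretched):          # online mixer update
        weights[i] += 0.02 * (bit - p) * s
    for m in models:
        m.update(history, bit)
    history.append(bit)
print("%.0f bits to code %d input bits" % (cost, len(data)))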
If you were matching the text in groups, it would be quicker than matching it
at the letter level, but yes, that's only if it's made with speed in mind.
Matt is right, because you MUST decompress it to work with it. Even for text
entailment discovery, you'd throw away high-level nodes that need re-making
from lower ones. They aren't literally high-level nodes, but they are made only
in that order! If you want to save on space plus speed, you
I meant to say transparent, sorry.
Wrong, eh? :)
What about run-length encoded bitmaps? They get to skip the translucent pixels
in runs.
Goes to show the industry is full of utter experts, isn't it... everyone is
missing basic information/knowledge.
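A toy sketch of that run-length point (the row format and names are made up):
each row is stored as (skip, draw) pairs, so a blitter jumps over the
see-through pixels in one step per run instead of testing every pixel.

T = 0   # the see-through pixel value

def rle_encode_row(row):
    # Encode one row as [(skip, [opaque pixels]), ...].
    runs, i = [], 0
    while i < len(row):
        skip = 0
        while i < len(row) and row[i] == T:
            skip += 1; i += 1
        draw = []
        while i < len(row) and row[i] != T:
            draw.append(row[i]); i += 1
        runs.append((skip, draw))
    return runs

def blit_row(dest, x, runs):
    for skip, draw in runs:
        x += skip                        # skipped pixels cost nothing
        dest[x:x + len(draw)] = draw     # copy only the opaque run
        x += len(draw)

dest = [1] * 16
blit_row(dest, 2, rle_encode_row([T, T, T, 7, 7, 7, T, T, 9, T]))
print(dest)   # only the opaque pixels were written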
On Sun, Nov 17, 2019, 12:37 AM wrote:
> compression gives you more data plus it makes you go FASTER. but it
> depends how you do it.
>
Wrong. Compression saves space, but you trade off time to compress and
decompress. Better compression requires more time and more memory. It is a
3-way trade-off among size, speed, and memory.
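A quick illustration of the size-versus-time leg of that trade-off, using
Python's zlib at different effort levels (memory isn't measured here, but
stronger settings use more of it too):

import time, zlib

data = b"the quick brown fox jumps over the lazy dog " * 20000
for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = (time.perf_counter() - t0) * 1000
    print("level %d: %7d bytes, %6.1f ms" % (level, len(out), dt))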
Compression is a subset of communication protocols: one-to-one, one-to-many,
many-to-one, and many-to-many, including one-to-itself and even none-to-none?
No communication is in fact communication. Why? Being conscious of no
communication is communication, especially in a quantum sense.
Hi James, as I'm sure you are aware, I was referring to sensory salience, and
while some may not consider/understand it as 'science', it nevertheless is
still relevant/applicable to this model.
I'm not really concerned about 'political bias' at this stage in the system's
development,
Salience is not value-neutral, hence is not properly characterized as
"science".
Nor is my pointing this out pedantic, as evidenced by all the rancor, noise,
and subterfuge surrounding the notion of "politically biased AI".
On Sun, Nov 17, 2019 at 11:37 AM korrelan wrote:
> IMO compression... or
IMO compression... or to be more precise... salient spatio-temporal compression
is a key/major factor in mammalian intelligence... it gets less lossy/more
focused through exposure/repetition/experience.
https://youtu.be/OO8lR3j1Vfc
:)
Compression gives you more data, plus it makes you go FASTER. But it depends
how you do it.
True, lots and lots of data, to get patterns. But that lots and lots of data
can be losslessly compressed, all while allowing you to extract patterns.
Compressing tons of data allows you to learn the fundamental elements.
“I think the brain isn’t concerned with squeezing a lot of knowledge into a few
connections, it’s concerned with extracting knowledge quickly using lots of
connections.” Geoff Hinton.
The guillotine is for chopping the king's head off.
The guillotine meant social policy, not literally firing/getting rid of others,
didn't it? Hmm, so you have 100 humans and compress their social beliefs and
find patterns, norms, that can reconstruct productivity and security using few
social norms, by 'ignoring' some beliefs. However, if we can't
What looming bloodshed, out-of-control jealous monkeys?
They do as much damage as God wants; me, I couldn't give less of a turd about
it if God wants to cheat that much.
First of all, Matt's comments apply more to natural language text than they
do to, say, image compression. The vast majority of images are, in some
sense, related to the physical world, and while many would claim physics is
just a bunch of incomprehensible tricks to compress experimental
Just seeing the results is all you should get as an outsider; it's clue enough
already as it is.
To see something working removes the fear of the unknown, so it's already a big
gift to give people, for them to then think for themselves.
But the compression prize is meant to make new discoveries, to teach the others
in the public. If no one can find a new technique (one that works on other
datasets), nor explain it to the public, why is the contest still such a big
thing towards AGI? Is it making any further point? Is it
Alexander Ratushnyak wasn't required to publish source code for his winning
Hutter Prize entry at the time he submitted it, but I have seen some of his
other code, a lot of it based on my work, so I have some insight.
Optimizing data compression code is a highly experimental process, sort of
like
Quotes always need extra context to make them true; a sentence by itself is
always a lie in some way or another. And they are probably the most superficial
thing when you go put quotes in some other guy's mouth, which all quotes
probably are. Bullshit if anyone ever said what people said
Oops, by 'common' I mean physics made human fighting/murder move over to the
ideas side, and firing employees too; it's not simply 'uncommon'.
Cooperation today has greater benefit.
“To attain knowledge, add things every day. To attain wisdom, remove things
every day.” ~Lao Tzu
"To attain workforce, hire employees every day. To attain productivity, fire
employees every day" ~Immortal Discoveries
https://steemit.com/philosophy/@garyzmcgee1/mcgee-s-guillotine
See! Change is
Well, we can try it; we do employee job firing (task changing, whether you are
in the building or another building, same same), not requiring any death.
That's the way things work these days.
You didn't answer my questions. Maybe we need Matt Mahoney.
Never mind. :)
On Sat, Nov 16, 2019 at 3:55 PM wrote:
> Ah, so chopping off some employees makes for a great team... You remove
> useless data and get the same thing anyhow once decompressed. The model for
> employee pattern learning learns patterns and can add back employees
> correctly in
Why not just fire the world and do the whole thing yourself?
That's what a real man would do.
Say you have 100 people who each know diverse knowledge. They all help each
other, like a quantum computer's qubits or like GloVe's context-based
word-meaning learning. But say a few team members have redundant information or
tasks; you can get the same productivity with 8 of them fired
What's the point in a diverse business, if they are all just white people
crossed with monkeys?
Isn't it better and more simply logical (Ockham's razor) just to get rid of the
monkey?
AKA it's the same thing: remove data in text, but it's in the brains of humans
:)
Ah, so chopping off some employees makes for a great team... You remove useless
data and get the same thing anyhow once decompressed. The model for employee
pattern learning learns patterns and can add back employees correctly in
diverse businesses. Wait, isn't that a GPT-2 task, to say who is
The Hutter Prize targets automated language modeling. Ockham's Guillotine
targets social modeling whether automated or not. The difference is in the
kind of data sets and, secondarily, in the Hutter Prize's requirement that
a compressor be published along with the self-extracting archive.
My
I really like what this guy is doing.
Here is a link to a Reddit post about it.
https://www.reddit.com/r/agi/comments/dkoqct/how_these_selfaware_robots_are_redefining/
On Sat, Nov 16, 2019 at 2:57 PM doddy wrote:
> I agree with rouncer81. OpenAI is doing great things in AI.
>
> On Sat, Nov
Eventually you hit the noise floor; once you get to 10% I'd say the job is done
well enough, move on to other things.
James, I know compression is AGI, but what is your AGI project or research idea
to spend money on? To compress it further? What will it teach us further that
Alexander Ratushnyak (the current long-time winner) didn't already teach us? We
already know context mixing was needed, multiple predictive
I agree with rouncer81. OpenAI is doing great things in AI.
On Sat, Nov 16, 2019 at 2:24 PM James Bowery wrote:
>
> http://jimbowery.blogspot.com/2019/04/ockhams-guillotine-minimizing-argument.html
>
>
> On Sat, Nov 16, 2019 at 12:06 AM wrote:
>
>> What is your project or idea? If you want
http://jimbowery.blogspot.com/2019/04/ockhams-guillotine-minimizing-argument.html
On Sat, Nov 16, 2019 at 12:06 AM wrote:
> What is your project or idea? If you want you can email me (hover over my
> name).
>
> Depending on the project, my support would vary from hundreds to thousands
> of US
I doubt they don't have the whole thing handled; if it seems they aren't trying
or something, that's probably some PR secrecy thing. I think they are
trustworthy enough to just wait and see if they do it, but if you can't wait,
well, do it yourself as well.
OpenAI is implementing unsupervised learning though:
https://openai.com/blog/better-language-models/
Unsupervised learning is auto-sorting: you still have a compression/error goal,
but that's it.
Supervised learning is when you give it many labels and tell it to match some
data or gather new
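A minimal illustration of that distinction (the text and labels below are made
up): unsupervised, the only target is the raw stream itself, a
prediction/compression-error goal; supervised, the targets are externally
supplied labels.

from collections import Counter

text = "the cat sat on the mat the cat sat"
words = text.split()

# Unsupervised: learn next-word statistics from the raw data alone.
bigrams = Counter(zip(words, words[1:]))
before = Counter(words[:-1])
def p_next(prev, nxt):
    return bigrams[(prev, nxt)] / before[prev]
print(p_next("the", "cat"))        # learned with no labels at all

# Supervised: the targets are labels someone provided.
labeled = [("the cat sat", "animal"), ("the mat", "object")]
classify = lambda x: "animal" if "cat" in x else "object"
print(sum(classify(x) == y for x, y in labeled) / len(labeled))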
Well, just wait for them to take over then. Just give away your sister to the
devils; you're right, I'm being an evil bastard... sorry...
I'm going to be leaving this group because of the language used by rouncer81.
I know you are serious. OpenAI is going in the wrong direction. They have no
concept of unsupervised learning.
No concept of temporal mechanics. No concept of symbol grounding.
But you like them.
They are only good at supervised learning, and that's it, with a bunch of money
to make a
No, I'm very serious about supporting someone here. It doesn't have to be
exactly an AGI project; it can be something new that is towards AGI, or even
just research. For example, OpenAI's achievements are such projects. I am
thinking about helping them.
Yeah, I know... but if it wasn't for God the little cunts would take over;
what fucking crap in their heads to go with their skin!
OK... I'll shut up now.
Yeah, there was this Indian man poaching my work for the defunct ai-dream web
site, from way back in 2014. They do not like me on their web site because of
this. I got kicked off for a very minor thing. Lots of drama over there.
Let us stop talking about race and people of other color on this forum. I
For me, ten million up front and I will give it as open source. A hundred
million and I will keep the tech between us.
I felt sorry for Numenta, going nowhere for the longest time at the AI wall, so
I tried helping them out. What a big mistake. Now they're implementing all the
tips I gave them and
Yes, I wouldn't believe him; I think he's being a little silly. :)
Cash or hash.
Sure.
Bleh, I have ~$15k earmarked for AI work. Trouble is, the ante is
between $3 and $5 million for even a pretty bare-bones operation.
Because getting really good VR is very difficult, I thought it would be
easier going the robotics route. I wanted a NAO; not sure if they still
exist.
The real problem is
What is your project or idea? If you want you can email me (hover over my name).
Depending on the project, my support would vary from hundreds to thousands of
US dollars.