A logistic curve is just the true form of an exponential curve.
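One way to see the claim numerically (a minimal sketch with made-up parameters K, y0, r, not anything from the post): far below the carrying capacity K, a logistic curve is nearly indistinguishable from an exponential, and only later bends into an S.

```python
import math

# Logistic growth with carrying capacity K, starting value y0, rate r
# (illustrative parameters).
def logistic(x, K=1000.0, y0=1.0, r=1.0):
    return K / (1.0 + ((K - y0) / y0) * math.exp(-r * x))

def exponential(x, y0=1.0, r=1.0):
    return y0 * math.exp(r * x)

for x in [0, 1, 2, 3]:
    # Early on (y << K) the two curves track each other closely.
    print(x, round(exponential(x), 3), round(logistic(x), 3))
```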
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Md462f4f9876ffa81b73a5dc5
And then, why does "logarithmic" have "log" at the start, while "logic" has it
at the start as well? Why is that?
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M2c20dd9153bcc8e3cfd9c0ed
Isn't an exponential curve just the reverse of a logarithmic curve? Are you
saying cost is falling and its fall is slowing down? But evolution is an S curve
made of S curves. I'm confused about this word 'logarithmic' in your context.
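For what it's worth, "reverse" is right in the sense of inverse functions: log undoes exp and vice versa, so their graphs are mirror images across the line y = x. A one-line check:

```python
import math

# log and exp are inverse functions of each other.
x = 3.0
print(math.isclose(math.log(math.exp(x)), x))  # True
print(math.isclose(math.exp(math.log(x)), x))  # True
```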
--
Artificial General Intelligence List: AGI
On Mon, Jan 27, 2020, 5:50 PM wrote:
> Yes, intelligence/evolution grows exponentially faster (hence
> exponentially more powerful) the more data, compute, and arms (e.g.
> nanobots) you have.
>
No, it grows logarithmically, whether you measure intelligence using
prediction accuracy
Making brains massive is not my solution; I'm going to finish my bot with
under a megabyte of random-access memory. How do I plan on doing that, you wonder?
--
Artificial General Intelligence List: AGI
On Monday, January 27, 2020, at 5:49 PM, immortal.discoveries wrote:
> I was just thinking this 4 days ago. Perhaps I read it somewhere directly.
Lossless compression, to be clear.
--
Artificial General Intelligence List: AGI
On Monday, January 27, 2020, at 5:02 PM, James Bowery wrote:
> Unfortunately, measures inferior to self-extracting archive size, such as
> "perplexity" or *worse* are now dominating SOTA publications.
I was just thinking this 4 days ago. Perhaps I read it somewhere directly.
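The two measures are related but not interchangeable (a rough sketch of the arithmetic, not the actual contest rules): an ideal entropy coder driven by the model spends log2(perplexity) bits per token, while self-extracting archive size additionally charges for the decompressor program itself, which is part of what makes it harder to game.

```python
import math

def bits_per_token(perplexity):
    # An ideal arithmetic coder driven by the model spends this many
    # bits per token on average.
    return math.log2(perplexity)

def ideal_archive_bits(perplexity, n_tokens, decompressor_bits=0):
    # Self-extracting size = coded payload + the (de)compressor program.
    return n_tokens * bits_per_token(perplexity) + decompressor_bits

print(bits_per_token(2.0))             # 1.0
print(ideal_archive_bits(2.0, 8, 16))  # 24.0
```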
And, even worse, I suggested the entire *change log* of Wikipedia as the
corpus, so as to expose latent identities behind information sabotage in the
language models.
Google DeepMind can, and should, finance compression prizes with such
expanded corpora, based on the lessons learned with enwik8 and
On Mon, Jan 27, 2020, 12:04 PM wrote:
> I see the Hutter Prize is a separate contest from Matt's contest/rules:
> http://mattmahoney.net/dc/textrules.html
>
Marcus Hutter and I couldn't agree on the details of the contest, which is
why there are two almost identical contests.
He is offering
Keep in mind I never have to work a day in my life and never socialize or leave
the house, so I have the most time to recursively keep my Working Memory
top-notch, fully maxed.
--
Artificial General Intelligence List: AGI
I'm the compute-crunch guru nut, what a waste. Thumbs down. I've got the best
unified theory and am beginning to see it all. I'm not everything, but I'll pack
a punch in a few years' time in the collaboration.
--
Artificial General Intelligence List: AGI
But you would tell me? How honoring.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Td754634618265a1d-Mdd9a8970fecd27ee0a557833
Delivery options: https://agi.topicbox.com/groups/agi/subscription
> So Stefan, you are making it try randomly/evolutionarily to output what you
> want it to by concatenating strings, searching for chars, reversing strings... all
> sorts of stuff... so it can do text2code?
Yes, exactly. In simple cases (few functions), it can simply try all the
combinations. It's
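A toy version of that brute-force search (the primitive set and function names here are hypothetical, not Stefan's actual system): enumerate every pipeline of string functions up to some depth, shortest first, and keep the first one that maps the input to the desired output.

```python
import itertools

# Hypothetical primitive set (the post mentions concatenating,
# searching for chars, and reversing strings).
FUNCS = {
    "reverse": lambda s: s[::-1],
    "upper":   lambda s: s.upper(),
    "first3":  lambda s: s[:3],
    "double":  lambda s: s + s,
}

def search(inp, target, max_depth=3):
    """Try every pipeline of primitives up to max_depth, shortest
    first, and return the first one whose output equals target."""
    for depth in range(1, max_depth + 1):
        for combo in itertools.product(FUNCS, repeat=depth):
            out = inp
            for name in combo:
                out = FUNCS[name](out)
            if out == target:
                return combo
    return None

print(search("hello", "OLLEH"))  # ('reverse', 'upper')
```

With few primitives this exhaustive search is cheap; the combinatorial blow-up only bites once the function library grows.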
I don't mind sharing the computer vision and even the basic motor search, but
there are other things I'd rather not let people know about. Too precious to tell.
--
Artificial General Intelligence List: AGI
Yeah, share it with all :-D
I'm excited!
If you prefer it privately, stefan.reich.maker.of@gmail.com
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Td754634618265a1d-M06b57db46f16a620c4abf1e0
So basically, in practical terms: for the computer you have access to, or that
we can share with peers, we want to aim for the best compression, i.e. the
best-quality speech the AGI talks, while the RAM/speed it runs at 'works' on the
computer well enough not to annoy you. And you
It'd make sense to me that, if one entry can get 100MB down to 20MB using 32GB,
and another algorithm can get it to 21MB using 1GB, the latter is more efficient
but 1MB off, so unless it can scale up to 32GB of RAM and get down to 19MB, it's
not as smart. It's all about how smart it talks and the feasibility
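The comparison being argued can be sketched as a toy rule (an assumption for illustration, not any actual contest's rules): entries that fit the machine's RAM budget qualify, and among those the smallest compressed size wins.

```python
def winner(entries, ram_limit_gb=32):
    """entries: list of (name, compressed_mb, ram_gb).
    Toy rule: anything within the RAM budget qualifies; the smallest
    compressed size wins."""
    qualified = [e for e in entries if e[2] <= ram_limit_gb]
    return min(qualified, key=lambda e: e[1])[0] if qualified else None

# 20MB using 32GB beats 21MB using 1GB under this rule...
print(winner([("A", 20, 32), ("B", 21, 1)]))  # A
# ...but if B scaled up to 32GB and reached 19MB, it would win.
print(winner([("A", 20, 32), ("B-scaled", 19, 32)]))  # B-scaled
```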
I see the Hutter Prize is a separate contest from Matt's contest/rules:
http://mattmahoney.net/dc/textrules.html
Time and Working Memory have no hard limit; only the compressed result does.
This makes sense because the compression/decompression time for outputting one
letter/word is OK on modern
If you want to get the closest distance between two 3D lines in 3D space, you
can do a 2D line intersection, then interpolate the depth ratio and take the
difference of the two interpolations.
Or, if you just do a subtraction for every point along the line, there's a lot
more to do, but that's the only
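For comparison, the textbook closed form for the minimum distance between two 3D lines (not the projection trick described above) uses the cross product of their direction vectors; the variable names here are illustrative.

```python
import math

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def norm(a): return math.sqrt(dot(a, a))

def line_distance(p1, d1, p2, d2):
    """Minimum distance between lines p1 + t*d1 and p2 + s*d2.
    Skew/intersecting case: |(p2-p1) . (d1 x d2)| / |d1 x d2|."""
    n = cross(d1, d2)
    if norm(n) < 1e-12:
        # Parallel lines: distance from p2 to the first line.
        return norm(cross(sub(p2, p1), d1)) / norm(d1)
    return abs(dot(sub(p2, p1), n)) / norm(n)

# x-axis vs a line along y lifted up by 1: distance is 1.
print(line_distance((0, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0)))  # 1.0
```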
Beautiful picture, mate. Loved it :)
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mda62a396ce20fb982ede34d1
Delivery options: https://agi.topicbox.com/groups/agi/subscription