His central thesis does not sound particularly new to most of those
associated with the AGI movement. Is AGI hard? Yeah, we
know...
On 11/18/19, rounce...@hotmail.com wrote:
> The fact that artificial intelligence isn't actually intelligence is probably
> better that way. It's not
Metaphysics envy?
On 11/18/19, rounce...@hotmail.com wrote:
> Do you even understand this? Ockham's razor is a METRIC.
> You put Ockham's razor on an evolving neural network because it knocked out
> the task in one fell swoop with fewer cells; it runs at higher performance,
> PLUS! it's actually a better
Oh shit... hang on, I think I've made a mistake about the P=NP thing.
I was just going crazy... shit, I think I smoked too many cigarettes.
--
Artificial General Intelligence List: AGI
Permalink:
OpenAI says they are aligned on creating AGI together, so a cooperative
discussion board on understanding AGI is probably possible!
If yous are all over the place, it's because yous can't even grasp it yet :))
All yous have to do is start discussing AGI for once and we'll all join in.
If one does not know what needs to be built, how would one be able to ascribe a
feature to it? That is an engineering problem to be solved.
"Singularity" is a feature of a phenomenon. In this case, the phenomenon is
AGI. Which characteristics would define AGI to have 'singularity' as a feature?
I haven't gotten a reply from Matt; nanobots solve the singularity on Earth:
On Monday, November 18, 2019, at 2:50 PM, immortal.discoveries wrote:
> Correct Matt, we need AGIs, nanotechnologies, data. Self-improvement isn't
> the key. Data-recursion is. (although you can get better at doing it)
>
>
Once people get set in the right direction, you lose sight of the man who
originated it. If a guy ever came up with a quantum computer that worked, and
it was simple, the group would be another 100 years away every 100 years if
it weren't for him. But when he finally does it, then anyone
On Mon, Nov 18, 2019, 2:04 PM wrote:
> I always thought AGI WAS the singularity.
> If there is anything the robot can invent that we can't already, I thought
> it was an absolute NO anyway. :)
>
We keep renaming things that are not singularities to singularities to make
the problem easier. Just
We
On Mon, Nov 18, 2019, 1:12 PM wrote:
> 1)
> It sounds arrogant to say, but I think it's true -> As a whole, human
> civilization is there to present problems for itself!
> Only a few are smart enough to solve them; as a whole, civilization is a
> wrecked, deficient mess. It's only going to be
Do you even understand this? Ockham's razor is a METRIC.
You put Ockham's razor on an evolving neural network because it knocked out the
task in one fell swoop with fewer cells; it runs at higher performance, PLUS!
It's actually a better understanding to generate further from.
Call it physics envy if you want, but you just labelled the best currently
working applied AI with an insult!
It *could* be computable; people don't know if P=NP or not!
You seem to understand how Occam's razor can avoid just restoring the input
frames by picking the simplest model; I don't know if it
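To make the "Occam's razor is a metric" idea concrete, here's a minimal sketch of an MDL-style (minimum description length) score. Everything here is hypothetical for illustration: the candidate networks, the cell counts, and the penalty weight are made up, not anyone's actual system.

```python
# Hypothetical sketch of "Occam's razor as a metric": score candidate models
# by fit error plus a complexity penalty, and keep the simplest model that
# still explains the data well enough.

def mdl_score(fit_error, num_cells, penalty=1.0):
    """Lower is better: task error plus a cost per cell (parameter)."""
    return fit_error + penalty * num_cells

# Two made-up evolved networks solving the same task:
candidates = [
    {"name": "big_net", "fit_error": 0.10, "num_cells": 500},
    {"name": "small_net", "fit_error": 0.12, "num_cells": 50},
]

best = min(candidates,
           key=lambda m: mdl_score(m["fit_error"], m["num_cells"], penalty=0.01))
print(best["name"])  # the smaller net wins despite a slightly worse fit
```

The penalty weight decides how much simplicity is worth relative to raw accuracy; that choice is the whole argument in one knob.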
"Physics envy" is what we call the quest for a simple theory of AGI,
analogous to the simple set of equations in physics that explain everything
in the universe. Alas, Legg proved otherwise. There is no universal
prediction algorithm. If there were, it would be Occam's Razor, but it is
not
Hang on - I thought Korrellan was talking about me? Shit, I'm getting paranoid...
Imagine if his standard model actually worked; it'd floor others once they got
it.
Absolutely the opposite. Even if I think something, I never rely upon it when
I'm with others.
If no one speaks up, he will claim that everyone on this site agrees... semantic
games.
:)
He's saying that the physical world hasn't enough energy for the amount of
computation required to simulate the brain, even though it's in our heads
already, working fine.
Correct Matt, we need AGIs, nanotechnologies, data. Self-improvement isn't the
key. Data-recursion is. (although you can get better at doing it)
What makes you so sure physics doesn't have an exponential singularity curve
for approximately Earth? Nanobots will solve the data need, the computing need,
If you were matching the text in groups, it would be quicker than matching it
at the letter level, but yes, that's only if it's made with speed in mind.
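A rough sketch of why group-level matching is quicker: a word-level scan has far fewer candidate positions to test than a letter-level scan over the same text. The text and pattern below are made up for illustration; both scans find the same matches, the group scan just examines fewer positions.

```python
# Hypothetical comparison of letter-level vs. group (word) level matching.

def letter_matches(text, pattern):
    """Naive letter-level scan: one candidate position per character."""
    return [i for i in range(len(text) - len(pattern) + 1)
            if text[i:i + len(pattern)] == pattern]

def group_matches(words, pattern_words):
    """Group-level scan: one candidate position per word boundary."""
    n = len(pattern_words)
    return [i for i in range(len(words) - n + 1)
            if words[i:i + n] == pattern_words]

text = "the cat sat on the mat " * 100
pattern = "on the mat"

letter_hits = letter_matches(text, pattern)
group_hits = group_matches(text.split(), pattern.split())

# Both scans find the same 100 occurrences, but the letter scan examined
# 2291 candidate positions while the group scan examined only 598.
print(len(letter_hits), len(group_hits))
```

The speed-up only materializes if the split into groups is cheap or done once up front, which is the "made with speed in mind" caveat.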
Matt is right because you MUST decompress it to work with it. Even for text
entailment discovery, you'll throw away high-level nodes that require re-making
from lower ones. They aren't high-level nodes literally, but are made only in
that order! If you want to save on space plus speed, you
I always thought AGI WAS the singularity.
If there is anything the robot can invent that we can't already, I thought it
was an absolute NO anyway. :)
I have some thoughts, but ... isn't this discussion going to become yet another
distraction? The question of whether AGI will result in a technological
singularity doesn't seem to have a lot of relevance to the question of *how* to
build AGI. So the disciples of the Singularity can believe
The fact that artificial intelligence isn't actually intelligence is probably
better that way. It's not creating "LIFE," which seems to be man's/child's
fantasy of having an extra "friend" around, but to technological
advancement it's 100% useless. If you want something that's ALIVE, I
The (social) singularity is just the incremental cybernetic augmentations
we made along the way...
On Mon, Nov 18, 2019 at 9:47 AM Matt Mahoney
wrote:
> The premise of the Singularity is that if humans can create smarter than
> human intelligence (meaning faster or more successful at achieving
1)
It sounds arrogant to say, but I think it's true -> As a whole, human
civilization is there to present problems for itself!
Only a few are smart enough to solve them; as a whole, civilization is a
wrecked, deficient mess. It's only going to be 1 or a few people that ever
invent a quantum
I would recommend checking out this book by Brian Cantwell Smith.
https://mitpress.mit.edu/books/promise-artificial-intelligence
The premise of the Singularity is that if humans can create smarter than
human intelligence (meaning faster or more successful at achieving goals),
then so can it, only faster. That will lead to an intelligence explosion
because each iteration will be faster. We cannot say how this will happen
Yes, for an AGI system, pick off the data that is easy to compress and things
that have the same amount of change, in a gradient descent fashion. An
open-ended gradient descent.
Announcement: Face Editor Download:
https://www.youtube.com/watch?v=p7k86wwUoJg
Vectors are bound to the slide
What do you mean it isn't beat writing? There's no intelligence involved in
doing it. Definitely... beat writing.
Things can only be standardized after the conception work is done. Since AGI
is highly experimental, this is trying to standardize something people can't
even do properly yet. What's good about standards anyway? All they do is make
us all do everything the exact same way anyhow.
I meant to say transparent, sorry.
Wrong, ey? :)
What about run-length encoded bitmaps? They get to skip the translucent pixels
in runs.
Goes to show the industry is full of utter experts, isn't it... everyone is
missing basic information/knowledge.
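A minimal sketch of the run-length idea being described: encode a bitmap row as (value, run-length) pairs, so a blit can hop over a whole transparent run with one addition instead of testing every pixel. The pixel values and the choice of 0 as the transparent sentinel are made up for illustration.

```python
# Hypothetical run-length encoding of a bitmap row, skipping transparent runs.

TRANSPARENT = 0  # made-up sentinel value for a transparent pixel

def rle_encode(row):
    """Collapse a row of pixels into [value, run_length] pairs."""
    runs = []
    for px in row:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1
        else:
            runs.append([px, 1])
    return runs

def blit(dest, row_runs, x=0):
    """Copy only opaque runs into dest; transparent runs are skipped in O(1)."""
    for value, length in row_runs:
        if value != TRANSPARENT:
            dest[x:x + length] = [value] * length
        x += length  # skipping a run costs one addition, not `length` tests
    return dest

row = [0, 0, 0, 7, 7, 0, 0, 9, 0]
runs = rle_encode(row)
dest = blit([1] * len(row), runs)  # background of 1s shows through the gaps
print(runs)   # [[0, 3], [7, 2], [0, 2], [9, 1], [0, 1]]
print(dest)   # [1, 1, 1, 7, 7, 1, 1, 9, 1]
```

This is exactly why RLE sprites were popular in old 2D engines: the common case (long transparent margins) costs almost nothing.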
On Sun, Nov 17, 2019, 12:37 AM wrote:
> Compression gives you more data, plus it makes you go FASTER. But it
> depends how you do it.
>
Wrong. Compression saves space, but you trade off time to compress and
decompress. Better compression requires more time and more memory. It is a
3-way
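The space/time leg of that trade-off is easy to see with any real compressor. A quick sketch using Python's `zlib` (the repetitive test string is made up; exact sizes and timings will vary by machine and zlib version, so the only safe claims are the directional ones in the assertions):

```python
# Illustrating the compression trade-off: higher zlib levels generally
# shrink the output more but cost more CPU time; the algorithm's window
# and memory settings form the third axis.

import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 10000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - t0
    print(level, len(compressed), elapsed)  # level, output size, seconds

# And the point about needing to decompress to work with the data:
assert zlib.decompress(zlib.compress(data, 9)) == data
```

On highly repetitive input like this, level 9 should never produce a larger output than level 1; what you pay for is time and working memory.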
On Monday, November 18, 2019, at 8:21 AM, A.T. Murray wrote:
> If anyone here assembled feels that the http://ai.neocities.org/Ghost.html in
> the machine should not be universally acknowledged as the Standard Model, let
> them speak up now.
It's just so hard for us mere mortals to read the
https://www.researchgate.net/publication/322123676_A_Standard_Model_of_the_Mind_Toward_a_Common_Computational_Framework_across_Artificial_Intelligence_Cognitive_Science_Neuroscience_and_Robotics
?
On Mon, Nov 18, 2019 at 8:22 AM A.T. Murray wrote:
> If physics can have a standard model, then
If physics can have a standard model, then AGI should have one, too.
http://ai.neocities.org/AiTree.html is a candidate for Standard Model on
the basis of demonstrated existence-proof functionality such as
http://en.wikipedia.org/wiki/Natural-language_understanding and
Compression is a subset of communication protocols. One to one, one to many,
many to one, and many to many. Including one to itself, and even none to none?
No communication is in fact communication. Why? Being conscious of no
communication is communication, especially in a quantum sense.
Errors are input, are ideas, and are an intelligence component. Optimal
intelligence has some error threshold, and it's not always zero. In fact,
errors in complicated environments enhance intelligence by adding a complexity
reference, or a sort of modulation feed...
Hi James, as I'm sure you are aware, I was referring to sensory salience, and
while some may not consider/understand it as 'science', it nevertheless is
still relevant/applicable to this model.
I'm not really concerned about 'political bias' at this stage in the system's
development,