It's about time.

"Lightmatter is building the engines that will power discoveries, drive 
progress, and reduce our impact on the planet."

Lol... maybe the other way around. More impact. The future world will be a 3D, gravity-less, air-less, nanobot-cell, predictable fractal that morphs. A cold dark place that doesn't need much light to know where you're going. The big bang was hot, chaos; it isn't immortality.


I was just thinking hard about this domain. I haven't ventured far into parallel processors. Correct me if I'm wrong, but Transformers do batch learning: they don't update the network after every single token as they step along the dataset they train on. They take the batch first and then backpropagate, adjusting ALL the weights. Mine, on the other hand, updates at each step as it hears new data, and only the memory nodes that should update get updated.

I could still do some sort of batch learning in mine, I think, like putting orange socks with orange socks and then putting all the orange ones into the net where they belong. As for updating all weights at once (storing 'hello', or hellos and goodbyes, not letter by letter but all at once, with the order of the letters kept as well), I think this can maybe be done; our brain has actual parallel compute/memory. How my architecture can do this is a big question, but I don't think it's so hard a problem. It should be easier if you want to store not, say, h e l l o all at the same instant, but h>e>l>l>o and g>o>o>d>b>y>e at the same time, each still sequentially, since they don't interact much in the network. I'm not saying batch learning is a good thing though; it may only matter if you don't use a hash-like function for the high-breadth early layers of a network to decrease search cost.
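Just to make the contrast concrete, here's a minimal numpy sketch. Everything in it is hypothetical and my own naming (`memory`, `batch_style_update`, `online_sparse_update`, the toy update rules); it's not how Transformers or your architecture actually work, only the batch-then-update-everything vs. per-token-update-a-few-nodes distinction.

    import numpy as np

    rng = np.random.default_rng(0)
    memory = rng.normal(size=(16, 8))   # 16 memory nodes, 8-dim each (made up)

    def batch_style_update(batch, lr=0.1):
        """Batch style: look at the whole batch, then adjust ALL nodes once."""
        grad = np.zeros_like(memory)
        for token_vec in batch:
            # accumulate a toy error signal against every node
            grad += np.outer(np.ones(len(memory)), token_vec) - memory
        memory[:] += lr * grad / len(batch)

    def online_sparse_update(token_vec, lr=0.5, k=2):
        """Per-token style: only the k nodes closest to the input get nudged."""
        scores = memory @ token_vec
        winners = np.argsort(scores)[-k:]      # pick which nodes "should" update
        memory[winners] += lr * (token_vec - memory[winners])

    # batch: one global adjustment for the whole batch
    batch_style_update(rng.normal(size=(4, 8)))

    # online: stream tokens one at a time, touching only the winning nodes
    for token_vec in rng.normal(size=(5, 8)):
        online_sparse_update(token_vec)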
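And a toy of what the "hash-like function for high-breadth early layers" could mean in practice, assuming the point is just to bound search cost by bucketing on a cheap hash of a prefix so a lookup only scans one bucket instead of everything stored. All names here are made up for illustration.

    from collections import defaultdict

    buckets = defaultdict(list)

    def bucket_of(seq: str, n_buckets: int = 64) -> int:
        # cheap hash of the first couple of characters: the "high-breadth early layer"
        return hash(seq[:2]) % n_buckets

    def store(seq: str):
        buckets[bucket_of(seq)].append(seq)

    def lookup(seq: str):
        # search cost is limited to one bucket, not every stored sequence
        return [s for s in buckets[bucket_of(seq)] if s == seq]

    store("hello"); store("goodbye")
    print(lookup("hello"))   # ['hello']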