cool vid!
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Tecb9c0c21d65fcb2-M34d92fe3eb112d0556f8c64c
Delivery options: https://agi.topicbox.com/groups/agi/subscription
Yep. Labels first, actual understanding later on.
--
I know some of my generated knowledge is on the edge, but that extra context
helps me give deeper, farther-reaching answers, and I'm able to verify later
which parts are truly correct. Even if some of it is wrong, it doesn't affect
the main pile.
--
I agree, rouncer81: the visual cortex classifies objects like words first, then
it sees them next to each other, e.g. car > road. This is the higher "sentence"
temporal network. You don't recognize them as one joined object, but rather as
parts next to parts that make up a part next to another part.
@Matt We think the same
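The parts-composing-into-parts idea above can be sketched in a few lines of Python. This is purely my own toy illustration, not anyone's actual model: hypothetical composition rules merge low-level parts into objects, and objects seen next to each other into a higher "sentence"-level scene, mirroring the car > road example.

```python
# Hypothetical composition rules (invented for illustration):
# a group of parts seen together -> a higher-level label.
COMPOSITIONS = {
    frozenset({"wheel", "body", "window"}): "car",
    frozenset({"asphalt", "lane_marking"}): "road",
    frozenset({"car", "road"}): "traffic_scene",
}

def compose(parts):
    """Repeatedly replace any known group of parts with its higher label."""
    parts = set(parts)
    changed = True
    while changed:
        changed = False
        for group, label in COMPOSITIONS.items():
            if group <= parts:          # all parts of the group are present
                parts -= group          # consume the parts...
                parts.add(label)        # ...and emit the higher-level label
                changed = True
    return parts

print(compose({"wheel", "body", "window", "asphalt", "lane_marking"}))
# -> {'traffic_scene'}
```

The same loop handles every level of the hierarchy: parts become a car, parts become a road, and then car-next-to-road becomes a scene, without ever treating the scene as a single monolithic template.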
--
Matt Mahoney, I have an argument against that: computer vision ends before
symbolic relations start. You're saying that your eye invents jokes with what
it sees; I say no, vision just classifies the visible aspect alone. The rest
of the derivation from the eye is done by the rest of the brain.
I doubt segmentation will help with image recognition. You lose context.
You recognize people not just by their faces but by when and where you see
them, who they are with, and what they say. It is easier to recognize a car
on a road than a car or a road on a white background.
We tried word
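The point that context aids recognition can be made concrete with a toy Bayesian calculation. The numbers below are entirely my own, purely illustrative: a mediocre "car" detector becomes far more reliable once seeing a road raises the prior on there being a car.

```python
def posterior(prior_car, p_detect_given_car, p_detect_given_not_car):
    """Bayes' rule: P(car | detector fired)."""
    num = p_detect_given_car * prior_car
    den = num + p_detect_given_not_car * (1 - prior_car)
    return num / den

# Without context: cars are rare in arbitrary images (made-up prior of 5%).
no_context = posterior(prior_car=0.05,
                       p_detect_given_car=0.9,
                       p_detect_given_not_car=0.1)

# With road context: seeing a road raises the prior on "car" (made-up 50%).
with_context = posterior(prior_car=0.5,
                         p_detect_given_car=0.9,
                         p_detect_given_not_car=0.1)

print(round(no_context, 3), round(with_context, 3))  # 0.321 0.9
```

Same detector, same evidence; only the prior supplied by the surrounding scene changes, which is one way to read "it is easier to recognize a car on a road than on a white background".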
On Thu, Aug 29, 2019, 7:39 AM wrote:
> On Thursday, August 29, 2019, at 1:49 AM, WriterOfMinds wrote:
>
> Like I said when I first posted on this thread, phenomenal consciousness
> is neither necessary nor sufficient for an intelligent system.
>
>
> This is the premise that you are misguided by.
Clarified:
AGI={I,C,M,PSI}={I,UCP+OR,M,BB}; BB=Black Box
John
--
On Thursday, August 29, 2019, at 6:32 AM, Nanograte Knowledge Technologies
wrote:
> Qualia are communicable.
> As such, I propose a new research methodology, which pertains to one-off
> valid and reliable experimentation when dealing with the "unseen". The
> "public" and "repeat" tests for
Don't forget, we need DNA samples as well to go with your amazing theories. :)
--
Good to hear you still sound like you're on top of things.
I think there's no need for low confidence; there's going to be a simple
solution to this singularity business.
--
On Thursday, August 29, 2019, at 1:49 AM, WriterOfMinds wrote:
> Like I said when I first posted on this thread, phenomenal consciousness is
> neither necessary nor sufficient for an intelligent system.
This is the premise that you are misguided by. Who is building the intelligent
systems?
"Qualia are personal and incommunicable *by definition,*..."
I tend to disagree with the assertion that qualia are "incommunicable". Shall
we revisit the definition for absolute proof?
Qualia are communicable. I have proven that using a scientific method. I'm
referring to qualia here in the