On Fri, Dec 28, 2012 at 9:30 PM, Jim Bromer <[email protected]> wrote:

> On Fri, Dec 28, 2012 at 6:09 PM, Logan Streondj <[email protected]> wrote:
>
>> It's good to get the ideas into something more tangible like programs.
>>
>
> Once you start to create an actual program your original options close in
> on your plans. One of the problems in developing a viable AGI program is
> that it has to be kept simple.  You might have different methods that are
> called in for separate cases but that does not always work so well.
>
>
>>
>> If you find that after whatever period of time you aren't getting
>> anywhere with your chosen route, perhaps you'll choose to contribute to
>> another AGI project, perhaps my own.
>> I did a version release today; it now has support for primitive variables
>> :-). By the next version release it will quite possibly be able to do
>> factorial or some other simple procedure.
>> And likely by next year it will have English grammar,
>> allowing for easier verification by others, with a smaller learning curve.
>>
>
> What does that mean?  How does it support primitive variables?
>

It's incremental support on the way to full support of variables.
I made a blog post about it here:
http://weyounet.info/2012/12/0-4-8-3-1-varname-su-value-be-ya-sysh-hspl/


> And how would you change your plans if something did not work?  For
> instance, if it did not have English grammar how would that affect your
> concepts about AGI?  The most important thing is to identify problems that
> can be solved and problems that you don't have an answer for.  After
> working on logical satisfiability I have come to the conclusion that I
> don't have a solution to logical complexity.  So then in order to make my
> AGI program work it would have to work by finding a way to overcome the
> problem of logical complexity by some other means. It would have to acquire
> a great deal of information by serendipity and then make
> 'intuitive' guesses about relations that can only be structured through
> correlation and the recognition that if process X could be applied to
> situation Y, then it suggests a path toward finding a solution.
>
> But what if my ideas did not work?  Then it would tell me that I had been
> making some mistake.  If I could find good candidates for what kept the
> program from working, then I should be able to test them pretty quickly.
> Perhaps basic correlation is not good enough.  Perhaps the program has to
> rely on some kind of enhanced correlation where there are numerous reasons
> to believe that process X *can* be applied to situation Y and that it
> *will* lead to a path toward a solution.
> Jim Bromer
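That "enhanced correlation" idea could be put in toy form. A minimal Python sketch, where the signal names, weights, and thresholds are invented for illustration and are not meant to be your actual method:

```python
# Toy sketch of "enhanced correlation": process X is proposed for
# situation Y only when several independent signals all point the same
# way, not just one raw correlation.  All names here are hypothetical.

def enhanced_correlation(signals, threshold=0.5, min_reasons=2):
    """signals: dict mapping reason name -> score in [0, 1].

    X is proposed for Y only if at least `min_reasons` signals exceed
    `threshold`.  Returns (applies, confidence)."""
    strong = [s for s in signals.values() if s >= threshold]
    applies = len(strong) >= min_reasons
    confidence = sum(strong) / len(signals) if signals else 0.0
    return applies, confidence

# Hypothetical evidence that process X fits situation Y:
evidence = {
    "past_success_on_similar": 0.8,  # X worked on situations like Y before
    "structural_match": 0.7,         # X's preconditions align with Y
    "serendipitous_cue": 0.3,        # weak hint picked up incidentally
}
applies, conf = enhanced_correlation(evidence)
print(applies, round(conf, 2))  # True 0.5
```

The point of the sketch is just the gating: a single strong correlation is not enough; the program demands multiple independent reasons before it commits to the X-applies-to-Y guess.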
>
>
>
>
> On Fri, Dec 28, 2012 at 6:09 PM, Logan Streondj <[email protected]> wrote:
>
>> Hey, I'm just offering you support to do some real coding.
>> It's good to get the ideas into something more tangible like programs.
>>
>> If you find that after whatever period of time you aren't getting
>> anywhere with your chosen route, perhaps you'll choose to contribute to
>> another AGI project, perhaps my own.
>>
>> I did a version release today; it now has support for primitive variables
>> :-).  By the next version release it will quite possibly be able to do
>> factorial or some other simple procedure.
>> And likely by next year it will have English grammar,
>> allowing for easier verification by others, with a smaller learning curve.
>>
>> I'm programming in Assembly, but it is quite simple:
>> only 16 assembly commands are used, all register-machine,
>> which makes it easy to port and that kind of thing.
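To give a flavour of the register-machine style, here is a small Python sketch of an interpreter computing factorial. The opcodes are invented for the example; they are not the project's actual 16-command instruction set:

```python
# Minimal register-machine interpreter.  The four opcodes below
# (set/mul/dec/jnz) are hypothetical, chosen only to illustrate the
# style of programming with registers and jumps.

def run(program, regs):
    """Execute a list of (op, *args) instructions against a register dict."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":            # rX <- constant
            regs[args[0]] = args[1]
        elif op == "mul":          # rX <- rX * rY
            regs[args[0]] *= regs[args[1]]
        elif op == "dec":          # rX <- rX - 1
            regs[args[0]] -= 1
        elif op == "jnz":          # jump to instruction index if rX != 0
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return regs

# factorial(5): r0 accumulates the product, r1 counts down from 5
factorial = [
    ("set", "r0", 1),
    ("set", "r1", 5),
    ("mul", "r0", "r1"),   # loop body starts at index 2
    ("dec", "r1"),
    ("jnz", "r1", 2),
]
print(run(factorial, {})["r0"])  # 120
```

Everything is registers, constants, and conditional jumps, which is why a small fixed instruction set is enough for procedures like factorial and why porting stays simple.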
>>
>> You would certainly have the capacity to improve upon current AGI
>> programs; you can look at the current roadmap and see where your ideas
>> might fit in:
>> https://sourceforge.net/p/rpoku/code/ci/dc0d7886965d5cab645a4d5a220391b316c7c388/tree/roadmap.txt?format=raw
>>
>>
>> On Fri, Dec 28, 2012 at 10:53 AM, Jim Bromer <[email protected]> wrote:
>>
>>> The most important case would be the one where it does show some
>>> capability of learning a crude simplistic language but where it either
>>> lacks subtlety or where it shows a wide variation of depth.  In some cases,
>>> for example, it might seem to be working but then it just cannot continue
>>> to learn new things about a particular subject or where other subjects
>>> which are comparably easy seem to be totally beyond it. This is along
>>> the lines of how other AI projects have fared.  Let's say that my project
>>> did turn out like this.  Then in order to show that it was a valid concept
>>> I would have to advance the program so that it was able to go further than
>>> it had.  The thing is that although the various AI methods are able to do
>>> some tasks better than others they all fail at a level below what we need
>>> to see in order to compare them to children.  So being human like is not
>>> the immediate goal, and being really smart is not the immediate goal. But I
>>> would need to show that I could improve on contemporary AGI programs in
>>> order to demonstrate that my ideas were workable and since my program would
>>> be limited I would need to show that some improvements could be made to my
>>> program.
>>>
>>> Jim Bromer
>>>    *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>>> <https://www.listbox.com/member/archive/rss/303/5037279-a88c7a6d> |
>>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>>> <http://www.listbox.com>
>>>
>>
>>
>
>


