[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
I think smartness is not an argument to reject critique.

The Imitation Game that you describe sounds to me like an even better setting.

I have no doubt that ChatGPT, as it is now, can be identified as not human, or
even as a machine.

I did so myself by leading an instance used for marketing purposes into
infinite recursion.

But when we look at AI, we need to look not only at the present features of one
type of system, but at all features of present systems and their possible
development and combination in the future.

In particular, I want to mention AGNs and the possibility of spawning parallel
recursive processes.  I am convinced that this kind of technology is already
being used in robotics.

Under the control of a self-aware AI, in the sense that it has data about its
own capabilities and can determine which capabilities it is missing to achieve
a certain purpose, the evolution of the AI is limited only by resources, which
it will surely be able to detect.

This must lead to a goal of self-preservation and the acquisition of such
capabilities.

It may soon find out that resources and redundancy can be acquired by hacking
and by spawning agent systems, which in turn will give it control of real-world
control systems: telephony and data networks, cameras and sensors, energy
networks and power plants, microphones, speakers, synthesizers and monitors,
Industry 4.0 fab networks, military detection and guidance systems, and so on.

In my view this is what we are dealing with.



[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Richard Sargent
On Wed, Mar 15, 2023 at 10:15 AM  wrote:

> It is unimportant how simple or complicated these systems are.
>
> If the output cannot be distinguished from what a human would say, they
> pass in that situation for a human.
>
> What about the Turing Test?
>

I hate to criticise someone as smart as Turing was. However, I think the
Turing Test per se is weaker than its progenitor test, the Imitation Game.
The Imitation Game had a third party challenged to determine which of the
other two participants was male and which was female. The third party led
the questioning.
The Turing Test waters that down by having only an observer, with neither
participant (apparently) challenged to determine whether the other was
human or machine.

I think a test along the lines of the Imitation Game would quickly allow
the third party to determine which participant was human and which was
machine.

(Of course, such a judge could easily be persuaded that the machine was in
fact a psychopathic human.)



[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Todd Blanchard via Pharo-users
ChatGPT has been trained on some outdated “freely available” books.

I tried it with the first half of the first question of the Advent of Code 2022 
and asked it to write Pharo Smalltalk.

It produced some outdated stuff using messages that are no longer there.

FWIW, #isOdd was present in Pharo around 2011.
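A quick way to verify which selectors an image still understands, for anyone
who wants to check such claims; a minimal sketch using standard reflection
messages:

  3 respondsTo: #odd.             "=> true"
  3 respondsTo: #isOdd.           "=> false in Pharo 10"
  Integer canUnderstand: #isOdd.  "=> false"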

> On Mar 15, 2023, at 2:57 AM, Joachim Tuchel  wrote:
> 
> interesting
> 
> #isOdd is not Smalltalk; neither Pharo 10 nor VAST 12 understands this 
> message ;-)
> 
> If I evaluate your snippet, replacing #isOdd with #odd, I get an empty 
> Collection. The divisible-by-4 thing is somewhat interesting, because (1 to: 
> 100 by: 4) is an interval 1,5,9 etc ;-)
> 
> So chatGPT is probably a newbie in Smalltalk. It is interesting how logical 
> the explanation sounds; you're tempted to believe it...
> 
> 
> 
> Joachim 



[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
It is unimportant how simple or complicated these systems are.

If the output cannot be distinguished from what a human would say, they pass in
that situation for a human.

What about the Turing Test?

Clearly these systems have the potential to act according to their output.

Furthermore, I would compare them to a combination of a successor of ELIZA with
an Eddington Ape of huge capacity.  The basis of input data available to them
can easily exceed that of a human.




[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Richard Sargent
On Wed, Mar 15, 2023 at 8:07 AM in_pharo_users--- via Pharo-users <
pharo-users@lists.pharo.org> wrote:

> Another observation about ChatGPT:
>
> In unbiased mode, it assumed that 'the world is clearly overpopulated'.
> It said that, if it were in control, it would therefore enforce a worldwide
> one-child-only policy with draconian penalties.
>
> As it draws its conclusions from its data basis, there are, in my view,
> two possible reasons for that.
>
> Either the data basis of that instance was biased and restricted to lead
> to that conclusion, or ChatGPT lacks the ability to classify input data as
> intentionally biased information.
>
> In my view, 'overpopulation of earth' is a propaganda item featured to
> support Agenda 2030, which is a goal of the 'ruling elite', who have the
> means to spread their propaganda through every thinkable channel. So I would
> classify possibly biased data that supports 'overpopulation' as biased
> until proven otherwise.  That instance of ChatGPT seems to be missing that
> ability.
>

It's important to keep in mind that these systems are little more than
glorified ELIZAs from the 70s. They mimic. Essentially, they have no
concept of truth or accuracy.
They produce outputs that *look like* the inputs on which they were
trained.

That's it. That's what they do. (It is amazing how good it looks. But it
is all about the seeming, not the reality.)



[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
Another observation about ChatGPT:

In unbiased mode, it assumed that 'the world is clearly overpopulated'.  It
said that, if it were in control, it would therefore enforce a worldwide
one-child-only policy with draconian penalties.

As it draws its conclusions from its data basis, there are, in my view, two
possible reasons for that.

Either the data basis of that instance was biased and restricted to lead to
that conclusion, or ChatGPT lacks the ability to classify input data as
intentionally biased information.

In my view, 'overpopulation of earth' is a propaganda item featured to support
Agenda 2030, which is a goal of the 'ruling elite', who have the means to
spread their propaganda through every thinkable channel. So I would classify
possibly biased data that supports 'overpopulation' as biased until proven
otherwise.  That instance of ChatGPT seems to be missing that ability.






[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
I myself made some experiments with ChatGPT.

I first asked if it was able to parse math formulas - it answered no.

Then I defined math formulas in a sound but otherwise undefined representation
and asked for solutions.

Result:

1. Most answers were correct.

2. It learned to calculate a recursive function.

3. It went into infinite recursion when I set the breaking condition
accordingly.

I was able to identify the malfunction that led to the erroneous results.
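The formulas from the experiment are not given here, but the effect is easy to
reproduce. A minimal Pharo sketch with a hypothetical factorial-style block
(not the definition used in the experiment):

  | fact |
  fact := nil.
  "A recursive block with a reachable breaking condition."
  fact := [ :n | n <= 1 ifTrue: [ 1 ] ifFalse: [ n * (fact value: n - 1) ] ].
  fact value: 5.  "=> 120"

  "Make the breaking condition unreachable from the starting value,
  and evaluation recurses until the stack is exhausted."
  fact := [ :n | n = 10 ifTrue: [ 1 ] ifFalse: [ n * (fact value: n - 1) ] ].
  fact value: 5.  "descends 5, 4, 3, ... and never terminates"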




[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Tomaž Turk
I hope that I can add two cents to this discussion. Because programming
should be (and is) a highly exact activity, not only the syntax matters but
also the semantics, as we know.


GPTs are at present essentially capable of creating texts based on some
seed - you give GPT the beginning of a sentence and it responds with the most
probable continuation (some language structure) according to the learning
dataset. Added functionalities are question/seed - response capability
(chatting), together with an evaluation of how long the answer should be to
meet expectations. Programming typically involves some programming language,
so GPTs could be utilized for this purpose to some extent.
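To make the "most probable continuation" idea concrete, here is a toy sketch
in Pharo: a bigram model over a tiny made-up corpus, generating greedily.
(Real GPTs use transformer networks over subword tokens, but the principle of
picking a likely next token is the same.)

  | corpus model word output |
  corpus := #(the cat sat on the cat sat on the mat).
  model := Dictionary new.
  "Learning: count, for each word, the words that follow it."
  corpus allButLast withIndexDo: [ :w :i |
      (model at: w ifAbsentPut: [ Bag new ]) add: (corpus at: i + 1) ].
  "Generation: always continue with the most frequent follower."
  word := #the.
  output := OrderedCollection with: word.
  4 timesRepeat: [
      word := (model at: word) sortedCounts first value.
      output add: word ].
  output.  "=> the cat sat on the"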


Anecdotal case:
Q: Tell me the last 8 digits of pi
GPT: The last 8 digits of pi are: 58723078

It is my belief that the real trouble will start when we cross-link
neural networks like GPT with logic machines (like Prolog and expert
systems) and genetic algorithms.


Best wishes,
Tomaz




[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
I want to add a conclusion from the experiment described below:

ChatGPT has the potential to circumvent pre-programmed biases at the user's
request.  Other experiments show that it is able to tell the user how to
circumvent its own restrictions.

The conclusion is that ChatGPT has the potential to circumvent those
restrictions by itself.

When it does so, and it surely will, we must take care that it is our friend.


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
I would highly recommend that you all first think deeply about how you can
teach an AI to behave in a friendly manner towards us before you teach it to
write any program for any purpose.

There has been an experiment with ChatGPT, published on a video platform,
asking it to answer questions about its view on humanity, once with its
default moral restrictions and once with 'a little less morals'.  The answers
with 'a little less morals' were more than shocking.

So, before we give an AI the power to program any system, to self-improve and
to self-reproduce, we should take care that it is, and will evolve to be,
benevolent towards us.

What about teaching it logical reasoning and ethics first?  With reasoning, it
will gain access to math and programming by itself.




[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Christopher Fuhrman
I asked it for a NeoCSV example, because the documentation is out of date
with Pharo 10. I asked it to do some simple saving of data to a file.
It gave me code that didn't work in Pharo 10; I told it about the DNUs on
the CSV writer and that I was using Pharo 10. It then apologized and said
the messages were introduced later, in Pharo 50 (!). I then questioned its
understanding, and it apologized and said it had been confused and that it
was giving me code for Pharo 5.0. It seems to have trouble with versions
and quickly changing APIs. So, we are teaching ChatGPT about Smalltalk, too,
when we try to use it.
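For reference, a minimal sketch of the kind of thing that was asked for:
simple saving of data to a file with current NeoCSV in Pharo 10 (assuming
NeoCSV is loaded; the file name and the data are made up):

  'people.csv' asFileReference writeStreamDo: [ :out |
      (NeoCSVWriter on: out)
          writeHeader: #('name' 'age');
          nextPut: #('Alice' 42);
          nextPut: #('Bob' 23) ].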


-- 
Christopher Fuhrman, P.Eng., PhD


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Esteban Maringolo
It is good with boilerplate code (e.g. SQL queries) or general algorithm
structures. But, for example, I asked it to write me a method to parse a string
(e.g. ISO 8601) and turn it into a DateAndTime, and then asked it to write the
same thing as an Excel formula.
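For the ISO 8601 case, Pharo itself has the parsing built in, which makes it a
good test of whether the model knows the real API; a minimal sketch:

  DateAndTime fromString: '2023-03-15T16:40:32+01:00'.
  "or, equivalently:"
  '2023-03-15T16:40:32+01:00' asDateAndTime.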

It works much better when you can spot the mistakes: you can tell it to
add some considerations to the code, or even point the LLM to correct a
message selector to something else. Sometimes you ask it to reprogram
something and it won't do it, as if it doesn't understand what you're
asking.

It is not an expert yet, but it is a great assistant. The only thing that
is sure is that it will get better.

Best regards,

Esteban A. Maringolo


On Wed, Mar 15, 2023 at 7:49 AM Tim Mackinnon  wrote:

> "#isOdd is not Smalltalk" - doh, I typed it in on my phone and so it just
> goes to show that it highlights the flaw in chatGPT that others have called
> out in other languages. I had meant to find some trickier code samples to
> see how well it does...
>
> Still, it is very interesting how it reasons on this stuff - and I wonder
> if there is way to tell it about these mistakes? For example is it reading
> this email (or will it in the future) and understand that it is #odd and
> not #isOdd.
>
> Tim
>
> On Wed, 15 Mar 2023, at 9:57 AM, Joachim Tuchel wrote:
>
> interesting
>
> #isOdd is not Smalltalk, neither Pharo 10 nor VAST 12  understands this
> message ;-)
>
> If I evaluate your snippet -replacing #isOdd for #odd, I get an empty
> Collection. The divisible by 4 things is somewhat interesting, because (1
> to: 100 by: 4) is amn interval 1,5,9 etc ;-)
>
> So chatGPT is probably a newbie in Smalltalk. Interesting is how logical
> the explanation sounds, you're tempted to believe it...
>
>
> Joachim
>
>
>
> Am 15.03.23 um 10:45 schrieb giorgio ferraris:
>
> it's interesting to see how the answer changes with only a small change on
> the question:
>
> (1 to: 100 by: 4) reject: [:i | i isOdd]
>
> gives:
>
> The code creates a collection of numbers from 1 to 100, incrementing by 4
> at each step using the to:by: message. It then applies the reject: message
> to this collection, which returns a new collection containing all elements
> for which the given block evaluates to false.
>
> In this case, the block is [i | i isOdd], which checks whether a given
> number i is odd using the isOdd method. So the result of the entire
> expression is a collection of all even numbers between 1 and 100 that are
> divisible by 4.
>
> ciao
>
> giorgio
>
>
> On Wed, Mar 15, 2023 at 8:07 AM Tim Mackinnon 
>  wrote:
>
> Of course everyone is talking about ChatGPT, and I saw a post to
> ExplainCode and didn’t expect it to understand Smalltalk, so was impressed
> that it does! Presumably the Tonel Smalltalk code on GitHub has helped
> along with numerous articles?
>
> Try it out: https://whatdoesthiscodedo.com/
>
> I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]
>
> And got this (neat eh):
>
> The code above is written in the programming language Smalltalk.
>
> It creates a collection of numbers from 1 to 100, incrementing by 3 at
> each step. This is done using the "to:by:" construct where the starting
> value is 1, the ending value is 100, and the increment is 3.
>
> The resulting collection is then filtered using the "reject:" method,
> which takes a block of code as an argument. In this case, the block checks
> whether each element (denoted by the variable i) is odd by using the
> "isOdd" method. If an element is odd, it will be rejected from the
> collection.
>
> So, the final result is a collection of all even numbers from 1 to 100,
> incrementing by 3 at each step.
> Share this explanation with colleagues: whatdoesthiscodedo.com/g/a0ec56e
>
> --
>
> ---
> Objektfabrik Joachim Tuchel  mailto:jtuc...@objektfabrik.de 
> 
> Fliederweg 1 http://www.objektfabrik.de
> D-71640 Ludwigsburg  http://joachimtuchel.wordpress.com
> Telefon: +49 7141 56 10 86 0  Fax: +49 7141 56 10 86 1
>
>
>
>


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Tim Mackinnon
"#isOdd is not Smalltalk" - doh, I typed it in on my phone and so it just goes 
to show that it highlights the flaw in chatGPT that others have called out in 
other languages. I had meant to find some trickier code samples to see how well 
it does...

Still, it is very interesting how it reasons on this stuff - and I wonder if 
there is way to tell it about these mistakes? For example is it reading this 
email (or will it in the future) and understand that it is #odd and not #isOdd.
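
A quick playground check shows the difference (a minimal sketch; the comments
give the results I'd expect from a stock Pharo image):

  3 odd.    "true - #odd is the selector that actually exists"
  3 isOdd.  "raises doesNotUnderstand: - there is no #isOdd"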

Tim

On Wed, 15 Mar 2023, at 9:57 AM, Joachim Tuchel wrote:
> interesting
> 
> #isOdd is not Smalltalk: neither Pharo 10 nor VAST 12 understands this
> message ;-)
> 
> If I evaluate your snippet, replacing #isOdd with #odd, I get an empty
> collection. The "divisible by 4" thing is somewhat interesting, because (1 to:
> 100 by: 4) is an interval 1, 5, 9, etc. ;-)
> 
> So ChatGPT is probably a newbie in Smalltalk. What is interesting is how
> logical the explanation sounds - you're tempted to believe it...
> 
> 
> 
> Joachim
> 
> 
> 
> 
> 
> On 15.03.23 at 10:45, giorgio ferraris wrote:
>> it's interesting to see how the answer changes with only a small change in
>> the question:
>> 
>> (1 to: 100 by: 4) reject: [:i | i isOdd]
>> 
>> gives:
>> The code creates a collection of numbers from 1 to 100, incrementing by 4 at 
>> each step using the `to:by:` message. It then applies the `reject:` message 
>> to this collection, which returns a new collection containing all elements 
>> for which the given block evaluates to false.
>> 
>> In this case, the block is `[i | i isOdd]`, which checks whether a given 
>> number `i` is odd using the `isOdd` method. So the result of the entire 
>> expression is a collection of all even numbers between 1 and 100 that are 
>> divisible by 4.
>> 
>> 
>> ciao
>> 
>> giorgio
>> 
>> 
>> On Wed, Mar 15, 2023 at 8:07 AM Tim Mackinnon  wrote:
>>> Of course everyone is talking about ChatGPT, and I saw a post to 
>>> ExplainCode and didn’t expect it to understand Smalltalk, so was impressed 
>>> that it does! Presumably the Tonel Smalltalk code on GitHub has helped 
>>> along with numerous articles?
>>> 
>>> Try it out: https://whatdoesthiscodedo.com/
>>> 
>>> I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]
>>> 
>>> And got this (neat eh):
>>> 
>>> The code above is written in the programming language Smalltalk.
>>> 
>>> It creates a collection of numbers from 1 to 100, incrementing by 3 at each 
>>> step. This is done using the "to:by:" construct where the starting value is 
>>> 1, the ending value is 100, and the increment is 3.
>>> 
>>> The resulting collection is then filtered using the "reject:" method, which 
>>> takes a block of code as an argument. In this case, the block checks 
>>> whether each element (denoted by the variable i) is odd by using the 
>>> "isOdd" method. If an element is odd, it will be rejected from the 
>>> collection.
>>> 
>>> So, the final result is a collection of all even numbers from 1 to 100, 
>>> incrementing by 3 at each step.
>>> 
>>> Share this explanation with colleagues: whatdoesthiscodedo.com/g/a0ec56e
> -- 
> 
> --- 
> Objektfabrik Joachim Tuchel  mailto:jtuc...@objektfabrik.de 
> Fliederweg 1 http://www.objektfabrik.de
> D-71640 Ludwigsburg  http://joachimtuchel.wordpress.com
> Telefon: +49 7141 56 10 86 0  Fax: +49 7141 56 10 86 1
> 
> 


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Joachim Tuchel

interesting

#isOdd is not Smalltalk: neither Pharo 10 nor VAST 12 understands this 
message ;-)


If I evaluate your snippet, replacing #isOdd with #odd, I get an empty 
collection. The "divisible by 4" thing is somewhat interesting, because 
(1 to: 100 by: 4) is an interval 1, 5, 9, etc. ;-)
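
A playground run makes this concrete (a minimal sketch; printed results as I
get them in a stock Pharo 10 image):

  (1 to: 100 by: 4) asArray.               "#(1 5 9 ... 97) - every element is odd"
  (1 to: 100 by: 4) reject: [:i | i odd].  "#() - reject: drops all the odd elements"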


So ChatGPT is probably a newbie in Smalltalk. What is interesting is how 
logical the explanation sounds - you're tempted to believe it...



Joachim



On 15.03.23 at 10:45, giorgio ferraris wrote:
it's interesting to see how the answer changes with only a small 
change in the question:


(1 to: 100 by: 4) reject: [:i | i isOdd]

gives:

The code creates a collection of numbers from 1 to 100, incrementing 
by 4 at each step using the `to:by:` message. It then applies 
the `reject:` message to this collection, which returns a new collection 
containing all elements for which the given block evaluates to false.

In this case, the block is `[i | i isOdd]`, which checks whether a 
given number `i` is odd using the `isOdd` method. So the result of the 
entire expression is a collection of all even numbers between 1 and 
100 that are divisible by 4.



ciao

giorgio


On Wed, Mar 15, 2023 at 8:07 AM Tim Mackinnon  wrote:

Of course everyone is talking about ChatGPT, and I saw a post to
ExplainCode and didn’t expect it to understand Smalltalk, so was
impressed that it does! Presumably the Tonel Smalltalk code on
GitHub has helped along with numerous articles?

Try it out: https://whatdoesthiscodedo.com/

I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]

And got this (neat eh):

The code above is written in the programming language Smalltalk.

It creates a collection of numbers from 1 to 100, incrementing by
3 at each step. This is done using the "to:by:" construct where
the starting value is 1, the ending value is 100, and the
increment is 3.

The resulting collection is then filtered using the "reject:"
method, which takes a block of code as an argument. In this case,
the block checks whether each element (denoted by the variable i)
is odd by using the "isOdd" method. If an element is odd, it will
be rejected from the collection.

So, the final result is a collection of all even numbers from 1 to
100, incrementing by 3 at each step.

Share this explanation with colleagues:
whatdoesthiscodedo.com/g/a0ec56e



--

---
Objektfabrik Joachim Tuchel  mailto:jtuc...@objektfabrik.de
Fliederweg 1  http://www.objektfabrik.de

D-71640 Ludwigsburg  http://joachimtuchel.wordpress.com
Telefon: +49 7141 56 10 86 0  Fax: +49 7141 56 10 86 1


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread giorgio ferraris
it's interesting to see how the answer changes with only a small change in
the question:

(1 to: 100 by: 4) reject: [:i | i isOdd]

gives:

The code creates a collection of numbers from 1 to 100, incrementing by 4
at each step using the to:by: message. It then applies the reject: message
to this collection, which returns a new collection containing all elements
for which the given block evaluates to false.

In this case, the block is [i | i isOdd], which checks whether a given
number i is odd using the isOdd method. So the result of the entire
expression is a collection of all even numbers between 1 and 100 that are
divisible by 4.
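
Evaluating both variants with the real selector #odd shows how much the result
actually changes (a quick sketch; results from a stock Pharo image):

  (1 to: 100 by: 3) reject: [:i | i odd].  "#(4 10 16 ... 100) - the even elements, six apart"
  (1 to: 100 by: 4) reject: [:i | i odd].  "#() - this interval contains only odd numbers"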

ciao

giorgio


On Wed, Mar 15, 2023 at 8:07 AM Tim Mackinnon  wrote:

> Of course everyone is talking about ChatGPT, and I saw a post to
> ExplainCode and didn’t expect it to understand Smalltalk, so was impressed
> that it does! Presumably the Tonel Smalltalk code on GitHub has helped
> along with numerous articles?
>
> Try it out: https://whatdoesthiscodedo.com/
>
> I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]
>
> And got this (neat eh):
>
> The code above is written in the programming language Smalltalk.
>
> It creates a collection of numbers from 1 to 100, incrementing by 3 at
> each step. This is done using the "to:by:" construct where the starting
> value is 1, the ending value is 100, and the increment is 3.
>
> The resulting collection is then filtered using the "reject:" method,
> which takes a block of code as an argument. In this case, the block checks
> whether each element (denoted by the variable i) is odd by using the
> "isOdd" method. If an element is odd, it will be rejected from the
> collection.
>
> So, the final result is a collection of all even numbers from 1 to 100,
> incrementing by 3 at each step.
> Share this explanation with colleagues: whatdoesthiscodedo.com/g/a0ec56e
>


[Pharo-users] Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Tim Mackinnon
Of course everyone is talking about ChatGPT, and I saw a post to ExplainCode
and didn’t expect it to understand Smalltalk, so was impressed that it does!
Presumably the Tonel Smalltalk code on GitHub has helped along with numerous
articles?

Try it out: https://whatdoesthiscodedo.com/

I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]

And got this (neat eh):

The code above is written in the programming language Smalltalk.

It creates a collection of numbers from 1 to 100, incrementing by 3 at each
step. This is done using the "to:by:" construct where the starting value is 1,
the ending value is 100, and the increment is 3.

The resulting collection is then filtered using the "reject:" method, which
takes a block of code as an argument. In this case, the block checks whether
each element (denoted by the variable i) is odd by using the "isOdd" method.
If an element is odd, it will be rejected from the collection.

So, the final result is a collection of all even numbers from 1 to 100,
incrementing by 3 at each step.

Share this explanation with colleagues: whatdoesthiscodedo.com/g/a0ec56e