It's progress. I think a lot of the nattering nabobs of AI negativity
are blowing fuses criticizing it, but they miss the point. It still
delivers on relatively mundane inquiries pretty reliably (seemingly).
It's as reliable as the crap you read on anonymous message boards --
right now trust it no more than that and you're fine. It's progress!
Also, some people are rolling out the well-worn criticism that it
doesn't "understand." Again, that depends on how you define
understanding, and my survey shows (along with sources such as
Thorisson et al.) that most people take understanding to have levels,
the crudest being simply the ability to respond/predict/give the right
answer. So if all it does is produce the correct answer, that is the
first level of "understanding." It needs to advance, yes.

On 12/12/22, Matt Mahoney via AGI <agi@agi.topicbox.com> wrote:
> It is interesting how many times I've seen examples of ChatGPT getting
> something wrong but defending its answers with plausible arguments. In one
> example it gives a "proof" that all odd numbers are prime. It requires some
> thought to find the mistake. In another thread I saw on Twitter the user
> asks for an anagram. It gives a wrong answer (missing one letter) and the
> argument boils down to it insisting that the word "chat" does not have the
> letter "h". But instead of admitting it is wrong, it sticks to its guns.
> Humans don't like to be wrong either.
> In 1950, Turing gave an example of a computer playing the imitation game
> giving the wrong answer to an arithmetic problem. I think if he saw GPT-3 he
> would say that AI has arrived.
> 
> On Sun, Dec 11, 2022 at 4:37 AM, immortal.discover...@gmail.com wrote:
> 
> On Sunday, December 11, 2022, at 1:34 AM, WriterOfMinds wrote:
> 
> If I tried to generate multiple e-mails on the same topic (which would be
> the goal - I like to bother my representatives on the regular), they started
> looking very similar. Telling GPT to "rewrite that in different words" just
> produced another copy of the same output.
> 
> I found yesterday that Codex has its temperature in the OpenAI Playground
> set to 0, as if it works differently than GPT. Codex at 0 does seem to
> predict much the same thing every time. I think that's so the code comes
> out right. I know that sometimes an unusual prediction can be the answer,
> but it seems to favor a "colder," more stable prediction setting so things
> are kept in order nearly every time. Perhaps it's because the things it may
> say are taken nearly word for word from a human, which makes them quite
> likely to be correct (though again, many prompts call for genuinely new
> completions). Anyway, I don't know, but it does seem to complete with the
> same thing, like Codex -- very close actually; the first two sentences I
> saw were exact matches to a story completion! Lol.
> 
> BTW, ChatGPT seems to use dialogue, instruction-following, and code
> training now, which makes it different from GPT-3. They call it GPT-3.5,
> BTW. It basically makes up facts less often, tries to act like a human
> assistant, and knows code and math better -- something tricky that GPT-3
> fails at easily. I don't know exactly how the dialogue part is applied, but
> it seems to be the goals and beliefs it thinks/says in order to obey
> OpenAI's rules and act useful. So this is part of why you see fewer outputs
> like "I have a dog > it's a robot dog!!! Tuesday Ramholt said why not
> just...". It's less random in one sense. More frozen (and aligned, as they
> call it).
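
To make the temperature point in the quoted message concrete, here is a
minimal sketch of temperature-scaled sampling over next-token scores. It is
plain Python with made-up logits and a toy vocabulary, not anything from
OpenAI's actual stack, but it shows why a setting of 0 (effectively greedy
decoding) returns the same completion every run, while higher settings give
the variety WriterOfMinds was missing when asking for a rewrite.

import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from logits at the given temperature.

    temperature == 0 is treated as greedy decoding (argmax), which is
    deterministic; higher temperatures flatten the distribution and
    let lower-scoring tokens through.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical model scores for the next token (illustration only).
vocab = ["print", "return", "if", "banana"]
logits = [3.2, 2.9, 1.0, -2.0]
rng = random.Random(0)

for temp in (0.0, 0.7, 1.5):
    picks = [vocab[sample_token(logits, temp, rng)] for _ in range(5)]
    print(f"temperature={temp}: {picks}")
# temperature=0.0 always picks "print"; higher temperatures mix in others.

At temperature 0 the model regenerates essentially the same text for the same
prompt, which matches the "rewrite that in different words produced another
copy" observation above; raising the temperature (or rewording the prompt)
is what actually changes the output.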
