[Wikimedia-l] Call for submissions deadline for Wikimedia CEE Meeting 2023

2023-05-21 Thread Kiril Simeonovski
Dear all,

I would like to remind you that the deadline for the call for submissions
for Wikimedia CEE Meeting 2023, which will take place in the period
15–17 September, expires today. If you plan to share your learning and
experience with others at this online event, the time for submitting your
session proposal is now.

Best regards,
Kiril Simeonovski
Chair of the Wikimedia CEE Meeting 2023 programme committee
___
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/KZUXWEPCMTHRVYPSWC72XIDV6UUYQEWM/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-21 Thread Felipe Schenone
FYI, yesterday I stumbled upon Perplexity, an AI that cites sources for its
answers. After a couple of tests, I'm not convinced of how tight the
connection is between the generated text and the sources, but they do seem,
at least broadly, to support the claims.

On Thu, May 18, 2023 at 8:30 AM Peter Southwood <
peter.southw...@telkomsa.net> wrote:

> It depends on how much you know about the topic. Both methods have their
> advantages.
>
> Cheers,
>
> Peter
>
>
>
> *From:* Todd Allen [mailto:toddmal...@gmail.com]
> *Sent:* 17 May 2023 20:10
> *To:* Wikimedia Mailing List
> *Subject:* [Wikimedia-l] Re: ChatGPT as a reliable source
>
>
>
> Though, this does run the risk of encouraging people to take the
> "backwards" approach to writing an article--writing some stuff, and then
> (hopefully at least) trying to come up with sources for it.
>
>
>
> The much superior approach is to locate the available sources first, and
> then to develop the article based upon what those sources say.
>
>
>
> Todd
>
>
>
> On Wed, May 17, 2023 at 12:06 PM Samuel Klein wrote:
>
>
>
> First: Wikipedia's style for dense inline citations is one of the most
> granular and articulate around, so in some cases we're pushing the
> boundaries of research norms for clarity in sourcing.  That's great; it
> also means we are sometimes considering nuances that may be new.
>
>
>
> Second: We're approaching a topic close to my heart, which is
> distinguishing reference-sources from process-sources.  Right now we often
> capture process sources (for an edit) in the edit summary, and this is not
> visible anywhere on the resulting article.  Translations via a translate
> tool; updates by a script that does a particular class of work (like
> spelling or grammar checking); applying a detailed diff that was
> workshopped on some other page.  An even better interface might allow for
> that detail to be visible to readers of the article [w/o traversing the
> edit history], and linked to the sections/paragraphs/sentences affected.
>
>
>
> I think any generative tools used to rewrite a section or article, or to
> produce a sibling version for a different reading-level, or to generate a
> timeline or other visualization that is then embedded in the article,
> should all be cited somehow.  To Jimbo's point, that doesn't belong in a
> References section as we currently have them.  But I'd like to see us
> develop a way to capture these process notes in a more legible way, so
> readers can discover them without browsing the revision history.
>
>
>
> People using generative tools to draft new material should find reliable
> sources for every claim in that material, much more densely than you would
> when summarizing a series of sources yourself.
>
> However, as we approach models that can discover sources and check facts,
> a combination of those with current generative tools could produce things
> closer to what we'd consider acceptable drafts, and at scale could generate
> reference works in languages that lack them.  I suggest a separate project
> for those as the best way to explore the implications of being able to do
> this at scale; such a project should capture the full model/tuning/prompt
> details of how each edit was generated.  Such an automatically-updated
> resource would
> not be a good reliable source, just as we avoid citing any tertiary
> sources, but could be a research tool for WP editors and modelers alike.
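
As a rough illustration of the kind of per-edit provenance such a project
might record, here is a minimal sketch in Python; the record structure and
field names are hypothetical, not an existing MediaWiki schema.

    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class GenerationProvenance:
        # Hypothetical per-edit record of how generated text was produced.
        model: str          # model family and version used
        tuning: str         # fine-tune or system configuration, if any
        prompt: str         # the prompt that produced the draft text
        revision_id: int    # the wiki revision the output was saved as
        # References actually verified by a human before saving
        sources_checked: list = field(default_factory=list)

    # Example record for one AI-assisted edit (all values illustrative):
    record = GenerationProvenance(
        model="example-llm-2023",
        tuning="default",
        prompt="Summarize the cited sources on topic X in neutral prose.",
        revision_id=123456789,
        sources_checked=["doi:10.1000/example"],
    )
    print(json.dumps(asdict(record), indent=2))

Capturing something like this alongside each edit would let readers and
researchers trace how a passage was generated without digging through the
revision history.
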
>
>
>
> SJ
>
>
>
> On Wed, May 17, 2023 at 9:27 AM Jimmy Wales wrote:
>
> One way I think we can approach this is to think of it as being the latest
> in this progression:
>
> spellchecker -> grammar checker -> text generation support
>
> We wouldn't have any sort of footnote or other indication that a
> spellchecker or grammar checker was used by an editor; it's just built
> into many writing tools.  Similarly, if an editor uses a short prompt to
> generate a longer text, we have no reason to cite that.
>
> What we do have, though, is a responsibility to check the output.
> Spellcheckers can be wrong (suggesting the correct
> spelling of the wrong word for example).  Grammar checkers can be wrong
> (trying to correct the grammar of a direct quote
> for example).  Generative AI models can be wrong - often simply making up
> plausible-sounding things out of thin air.
>
> If someone uses a generative AI to help them write some text, that's not a
> big deal.  If they upload text without checking
> the facts and citing a real source, that's very bad.
>
>
>
> On 2023-05-17 11:51, The Cunctator wrote:
>
> Again, at no point should even an improved version be considered a source;
> at best it would be a research or editing tool.
>
>
>
> On Wed, May 17, 2023, 4:40 AM Lane Chance wrote:
>
> Keep in mind how fast these tools change. ChatGPT, Bard and
> competitors are well aware of the issues around the lack of sources, and Bard
> does sometimes put a suitable source in a footnote, even if it
> (somewhat disappointingly) just