On Fri, Aug 7, 2009 at 6:40 AM, Andrew Gray<andrew.g...@dunelm.org.uk> wrote:
> More broadly, there's a good side and a bad side to this. The bad
> side, yes, a lot of our existing references will break, and it'll be a
> bit harder to write good, robustly cited articles in the future. On
> the plus side, it might help wean us off an over-reliance on news
> stories and (often slapdash) journalism as our preferred sources, and
> that's got to be beneficial.


What worries me on this topic is that as newspapers shift to online-only content - content which is *not* mirrored in the dead-tree version (this is increasingly common; I read that the Wall Street Journal does this now) - their pages become less and less reliably accessible.
For example, it wasn't that long ago that the NYT merged with another paper and
broke all the links, and those missing articles (referenced by Wikipedia
articles) couldn't be found again on the NYT website. If the original paper's
domain blocks WebCitation and the Internet Archive with robots.txt (as is very
likely), then any of those articles that were online-only are basically *gone*.
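
(As an aside, checking for such a block is mechanical. A minimal sketch in
Python 3's stdlib, assuming the Archive's crawler still announces itself as
"ia_archiver" and using example.com as a stand-in for a real paper's domain:

# Sketch: does a site's robots.txt shut out the Internet Archive's crawler?
# "ia_archiver" is the user-agent the Archive has historically honored;
# the example.com URL below is a placeholder, not a real newspaper.
from urllib.robotparser import RobotFileParser

def archive_blocked(site, article_path="/", agent="ia_archiver"):
    """Return True if robots.txt forbids `agent` from fetching the article."""
    rp = RobotFileParser()
    rp.set_url(site.rstrip("/") + "/robots.txt")
    rp.read()  # fetch and parse the live robots.txt
    return not rp.can_fetch(agent, site.rstrip("/") + article_path)

if __name__ == "__main__":
    if archive_blocked("https://example.com", "/2009/08/some-story.html"):
        print("Warning: this source blocks archiving; expect link-rot.")

One could imagine a bot running something like this over new citations and
flagging the un-archivable ones.)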

The set of newspapers that block caching/archiving of their webpages is large, as is the set moving online-only; we can also expect the set of newspapers that will fail or merge in the coming years to be large. The *intersection* of these three sets - papers that block archiving, publish online-only, and then disappear - is, I think, nonempty.
And that is a problem our conventional solutions (treat it as a print ref; use
an archived copy) don't address. I don't really have a solution, but I can
predict that editors will continue to cite such sources at their convenience,
and that our articles will be damaged by those references' link-rot.
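
(For what it's worth, "use an archived copy" can at least be checked
automatically before a reference is written off. A sketch against the
Wayback Machine's public availability endpoint - note that a domain which
blocks the crawler will simply come back with no snapshot:

# Sketch: ask the Wayback Machine for the closest snapshot of a URL.
# Uses only the Python 3 stdlib; returns None when nothing is archived,
# which is exactly what you'd expect for a robots.txt-blocked domain.
import json
from urllib.parse import quote
from urllib.request import urlopen

def find_snapshot(url):
    """Return the closest Wayback Machine snapshot URL, or None."""
    api = "https://archive.org/wayback/available?url=" + quote(url, safe="")
    with urlopen(api) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

It doesn't solve the underlying problem, of course; it just tells you sooner
that the article is one of the *gone* ones.)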

(A pity that the big archivers are so damn ethical!)

--
gwern
