On 5/16/2011 11:07 AM, Chris McKenna wrote:
> I don't know how well such a censored subset would work, given that every
> organisation's content policies I am aware of are different to each other,
> and the technical challenges associated with censorship, but I am no
> expert.
     I've actually implemented a "censored subset" of Wikimedia Commons, 
so I've got some insight into this.  One of my projects has partners 
that won't work with web sites that have nudity, so I've had to remove 
potentially offensive content from a sample of nearly a million images.  
This site also has a large audience in K-12 education, so I'm 
sensitive to people's concerns in that area.

     Considering images that are actually used in Wikipedia, I'd say 
that a bit less than 0.1% (about 1 in 1,000) of images contain nudity 
that "somebody" could find offensive.  That includes pictures of ancient 
pots from Persia that show couples having intercourse, pictures taken 
at nude beaches that aren't conceivably lascivious, as well as pictures 
of body modifications that you might not believe until you saw them.

     Oddly,  people tend to think of Wikipedia as a place that's good 
for K-12 use despite the fact that it's not officially "family 
friendly."  A lot of that is because you can use Wikipedia for a very 
long time and not find anything offensive,  unless you go looking for it.

     If the picture of the day were truly a random sample of what's in 
Wikipedia, that 1-in-1,000 rate works out to roughly one offensive 
image -- and a big argument about it -- every thousand days, or about 
every three years.  So maybe this is all just par for the course.
